Completion of the continuum of maternity care and associated factors among women who gave birth in the last 6 months in Chelia district, West Shoa zone, Ethiopia: A community-based cross-sectional study
Background The continuum of maternity care is the continuity of care that a woman receives during pregnancy, childbirth, and the postpartum period from skilled providers in a comprehensive and integrated manner. Although evidence exists on the individual maternal healthcare services, the continuum of maternity care and its associated factors are not well known in Ethiopia. Objective This study assessed the completion of the maternity continuum of care and associated factors among women who gave birth 6 months prior to the study in the Chelia district. Methods A community-based cross-sectional study with a stratified random sampling technique was conducted among 428 mothers in 10 randomly selected kebeles. Pretested, structured questionnaires were used to collect data. Bi-variable and multivariable logistic regression analyses were performed to identify associated factors. The adjusted odds ratio with its 95% confidence interval was used to determine the degree of association, and statistical significance was declared at a p-value of <0.05. Results In this study, 92 (21.5%) mothers completed the continuum of maternity care. Secondary and above education of mothers (AOR = 4.20, 95% CI: 1.26–13.97), ≤30 min spent walking on foot (AOR = 4.00, 95% CI: 1.67–9.58), using an ambulance to reach the health facility (AOR = 3.68, 95% CI: 1.23–11.06), para ≥5 (AOR = 0.21, 95% CI: 0.05–0.90), planned pregnancy (AOR = 3.29, 95% CI: 1.02–10.57), attending a pregnant women's conference (AOR = 13.96, 95% CI: 6.22–31.30), early antenatal care booking (AOR = 3.30, 95% CI: 1.54–7.05), being accompanied by partners (AOR = 3.64, 95% CI: 1.76–7.53), and being informed to return for postnatal care (AOR = 3.57, 95% CI: 1.47–8.70) were the factors significantly associated with completion. Conclusion In this study, completion of the maternity continuum of care was low. Therefore, appropriate strategic interventions that retain women in the continuum of maternity care by targeting these factors are recommended to increase its uptake.
Introduction
Access to and use of maternity care services during pregnancy, childbirth, and the postnatal period from skilled providers are essential for the survival and wellbeing of the mother and newborn (1). Uptake of maternity care services in a continuum-of-care approach has a fundamental role in reducing maternal and neonatal morbidity and mortality, saving mothers and babies. The key components of maternity care services are antenatal care (ANC), skilled birth attendance (SBA), and postnatal care (PNC) (2).
The maternity continuum of care is the continuity of care received by women during pregnancy, childbirth, and the postpartum period in a comprehensive and integrated way from a skilled provider (1,3,4). It is the package of high-impact maternal and child survival interventions along the continuum of care (5) and a strategy designed to monitor the quality of care provided for women and newborns (6). It is a simple, cost-effective, low-technology intervention approach that can significantly reduce most preventable maternal and neonatal mortality and maximize the potential of women and neonates to enjoy the highest achievable level of health (1,7,8). The continuum of maternity care emphasizes two key dimensions: time and place (or level) of care. The time dimension highlights the importance of linkages among the packages of maternity care services provided over time during pregnancy, childbirth, and the postpartum period (9). The place or level of care dimension includes the home, primary, secondary, and tertiary levels of healthcare delivery (9-11).

Globally, in 2017, a total of 295,000 women died from pregnancy- and childbirth-related complications, with the majority of these deaths (86%) occurring in South Asia and sub-Saharan Africa; sub-Saharan Africa alone accounted for roughly two-thirds (66%) of global maternal deaths (12). Ethiopia is one of the countries with an alarmingly high maternal mortality ratio, 401 deaths per 100,000 live births, and accounts for about 5% of global maternal deaths (10,12). Most of these deaths occur during childbirth and the first week after childbirth (11), are due to leading causes such as hemorrhage, hypertensive disorders of pregnancy, and sepsis (13,14), and more than 80% of them are preventable through appropriate maternity care during pregnancy, childbirth, and the postpartum period delivered in a continuum-of-care manner (6,13,15).
While substantial progress has been made, Ethiopia still needs to progress about six-fold faster on maternal mortality and more than two-fold faster on neonatal mortality to achieve the 2030 global targets of fewer than 70 maternal deaths per 100,000 live births and fewer than 12 neonatal deaths per 1,000 live births (12,16). Evidence from low- and middle-income countries (LMICs) shows that reported completion of the maternity continuum of care ranges from 5% in Cambodia to 75% in Nepal (17,18). In Ethiopia, reported completion ranges from 6.56 to 67.8% (19,20). Despite this evidence, the continuum of maternity care among mothers within 6 months of delivery has not been addressed (21-25).
Although a number of efforts have been made so far to address maternal healthcare service utilization discretely, there is a dearth of evidence regarding the prevalence of, and factors associated with, completion of the continuum of maternity care among mothers who gave birth 6 months prior to the study, in Ethiopia in general and in Oromia regional state in particular, including the study area. Therefore, this study aimed to assess the completion of the maternity continuum of care and associated factors among women who gave birth in the last 6 months in the Chelia district.
Study design, setting, and population
A community-based cross-sectional study was conducted in Chelia district, West Shoa zone, Oromia regional state. All mothers who gave birth 6 months prior to the study were the source population, whereas all mothers who gave birth 6 months prior to the study in the randomly selected kebeles were the study population. Moreover, all mothers who were booked for antenatal care in the last pregnancy and were between the first week and 6 months after delivery were included in the study. However, mothers who fulfilled the inclusion criteria but came from other areas after receiving any one of the maternity care services, and mothers who were critically ill during the data collection period, were excluded from the study.
Sample size determination
The sample size for this study was calculated for both specific objectives and compared, and the maximum sample size was taken. First, a single population proportion formula was used with the following assumptions: a proportion (P) of maternity continuum of care completion of 21.6%, taken from a previous study (26); a 95% confidence interval with critical value Zα/2 = 1.96; 5% absolute precision; and a design effect of 1.5. Therefore, with a 10% non-response rate, the minimum sample size for the first specific objective was 430. Similarly, the sample size was calculated for the determinant factors (19, 21, 27) to check the adequacy of the sample size, using a 95% CI, 80% power, and a 1:1 ratio (Table 1). Comparing the calculated sample sizes, 430 was taken as the final sample size for the study.
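As an arithmetic check, the formula and assumptions stated above do reproduce the reported minimum of 430. The short Python sketch below walks through the computation; the variable names are ours, not the authors'.

```python
import math

# Single population proportion formula: n = Z^2 * p * (1 - p) / d^2
p = 0.216      # completion proportion from the previous study (26)
z = 1.96       # critical value for a 95% confidence interval
d = 0.05       # absolute precision
deff = 1.5     # design effect for the two-stage sampling

n_base = (z**2 * p * (1 - p)) / d**2     # ~260.2
n_design = n_base * deff                 # ~390.3 after the design effect
n_final = math.ceil(n_design * 1.10)     # add 10% non-response -> 430

print(n_base, n_design, n_final)
```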
Sampling procedure and techniques
A two-stage stratified random sampling technique was employed for the selection of study subjects. In the first stage, 10 kebeles (two from the urban and eight from the rural strata) were selected by a simple random sampling technique from the total of 20 kebeles in the district. In the second stage, lists of households with eligible mothers were prepared from the family folders found at each health post, in collaboration with Health Extension Workers (HEWs), and used as the sampling frame. The sample size was then proportionally allocated to each selected kebele according to its population size, and a simple random sampling technique was applied to select the households where eligible mothers were available, as sketched below. Finally, the selected mothers were interviewed by data collectors at their homes with the guidance of the women's development army (WDA) from their respective villages. Mothers who were absent on the first visit were revisited a second time, and those still absent after two repeated visits were replaced by an immediate neighbor who fulfilled the inclusion criteria (Table 2).
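The proportional allocation and simple random selection of households described above can be sketched in a few lines; the kebele names and eligible-mother counts below are purely illustrative and are not the study's actual sampling frame.

```python
import random

# Hypothetical sampling frame: eligible mothers per selected kebele
# (illustrative numbers only).
frame = {"kebele_A": 120, "kebele_B": 95, "kebele_C": 60}
total_eligible = sum(frame.values())
sample_size = 100  # share of the overall sample allocated to these kebeles

for kebele, count in frame.items():
    quota = round(sample_size * count / total_eligible)      # proportional allocation
    chosen = random.sample(range(count), min(quota, count))  # simple random sampling
    print(kebele, quota, sorted(chosen)[:5])
```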
Data collection tools and procedures
A structured, interviewer-administered questionnaire was adapted from different literature (19, 21-23, 26-31). The tool was arranged into three sections, namely socio-demographic and socioeconomic characteristics, reproductive and obstetric-related questions, and maternal healthcare service-related questions. It was initially developed in English, translated into Afan Oromo, and then translated back to English by different language experts to ensure its consistency and accuracy. Data collection was conducted by four trained BSc nurses recruited from outside their catchment area. The data collection process was supervised by two health officers and the principal investigator throughout the data collection period.
Data quality management
Before data collection, 1-day training was given to data collectors and supervisors regarding the objective of the study, data collection procedures, and ethical considerations during data collection. Preceding the data collection, the tool was pretested on 5% of the calculated sample size. After pretesting, necessary modifications were considered. A clear explanation of the purpose and objective of the study was provided for all respondents before beginning an interview. Close supervision was carried out by supervisors and the principal investigator during the time of data collection. All filled questionnaires were checked for completeness and accuracy before data entry.
Data processing and analysis
Data were checked for completeness, coded, and entered into the computer using Epi-Data version 3.1, then exported to SPSS version 25 for analysis. Descriptive statistics and a binary logistic regression model (bi-variable and multivariable logistic regression analyses) were used. Model fitness and multicollinearity effects between covariates were checked. From the bi-variable logistic regression analysis, variables having a p-value of ≤0.25 were considered eligible for the multivariable logistic regression analysis. Finally, adjusted odds ratios (AOR) with their 95% confidence intervals were estimated to identify the presence and strength of associations, and statistical significance was declared at a p-value of <0.05.

Operational definition

• The maternity continuum of care was considered complete, and coded as "1", when a mother received the three maternal healthcare services along the continuum-of-care pathway from a skilled provider: four or more antenatal care (ANC) visits, childbirth attended by a skilled birth attendant (SBA), and a postnatal checkup within the first week after delivery at a health facility (excluding pre-discharge postnatal care) or at home by a skilled provider. Otherwise, the continuum of care was considered not complete and coded as "0" (15, 21, 24-26, 30).
• A skilled provider is a professionally trained health worker, including a medical doctor, midwife, nurse, or health officer (32).
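The two-step screening described in the analysis paragraph above (bi-variable p ≤ 0.25, then multivariable adjustment with AORs and 95% CIs) was run in SPSS; the sketch below reproduces the same logic in Python with statsmodels. It assumes a data frame `df` with a 0/1 outcome column `complete` coded as in the operational definition and numerically coded predictors; all column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bivariable_p(df, outcome, predictor):
    # Unadjusted (bi-variable) logistic regression for one predictor.
    X = sm.add_constant(df[[predictor]])
    model = sm.Logit(df[outcome], X).fit(disp=0)
    return model.pvalues[predictor]

def multivariable_aor(df, outcome, predictors):
    # Multivariable model; exponentiated coefficients are the AORs.
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df[outcome], X).fit(disp=0)
    aor = np.exp(model.params)
    ci = np.exp(model.conf_int())  # 95% CI by default
    return pd.concat([aor, ci], axis=1)

# candidates = [c for c in df.columns if c != "complete"]
# eligible = [c for c in candidates if bivariable_p(df, "complete", c) <= 0.25]
# print(multivariable_aor(df, "complete", eligible))
```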
Ethics statement
An ethical clearance letter (Ref. No. PGC/167/2021) was obtained from the Institutional Review Board of Ambo University, College of Medicine and Health Sciences. Permission and a support letter to conduct the study were obtained from the Chelia district health office. The purpose and importance of the study were explained, and informed written consent was obtained from all respondents. In addition, the confidentiality and privacy of the study participants were assured and respected.
Socio-demographic and economic characteristics
A total of 428 respondents participated in the study, yielding a response rate of 99.5%. The mean age of the respondents was 28.9 (SD ± 4.871) years. One hundred and fifty-seven (36.7%) of the mothers had attained a secondary or above level of education, whereas 103 (24.1%) had no formal education. More than half of the respondents, 249 (58.2%), spent more than 30 min walking on foot to reach the nearest health facility, and 277 (73.6%) reached the facility on foot when seeking maternity care services (Table 3).
Reproductive and obstetrics-related characteristics
All respondents who participated in the study believed that all women need to receive maternity care services from skilled providers and to have birth preparedness and complication readiness plans. Three hundred and thirty-four (78%) and 252 (58.9%) mothers knew the appropriate time to start the first antenatal care visit and the need to have four or more ANC visits, respectively. Two hundred and eighty-seven (67.1%) mothers mentioned three or more practices of birth preparedness and complication readiness plans. However, nearly three-fourths, 307 (71.7%), reported that they needed to be confined at home for the first 6 weeks after childbirth (Table 4). More than three-fourths of the respondents, 330 (77.1%), had their first pregnancy at the age of 20 years or more. Sixty-one (14.3%) respondents were para five or more, and 52 (14.5%) mothers had experienced different types of obstetric complications during their previous reproductive lives. Among the respondents, 92 (21.5%) reported that the last pregnancy was unplanned, and 116 (27.1%) had attended at least one session of a pregnant women's conference (Table 5).
Maternal healthcare service utilization

Antenatal care service utilization

Of the total of 428 respondents, 148 (34.6%) initiated the first ANC visit early, in the first trimester (within 12 weeks of gestation), and 183 (42.8%) attended ANC-4+ visits. About three-quarters of the respondents, 318 (74.3%) and 346 (80.8%), were informed about pregnancy-related complications and about birth preparedness and complication readiness plans, respectively. Only 141 (32.9%) of the respondents were accompanied by their partners to the maternity care service center (Table 6).
Labor and delivery service utilization
Among the total of 428 respondents, 352 (82.2%) had childbirths attended by skilled birth attendants.
Postnatal care service utilization
Of the total of 428 respondents, nearly half, 213 (49.8%), had at least one postnatal checkup after their last childbirth. However, only 139 (32.5%) mothers had at least one postnatal checkup within the first week of delivery. Most of the respondents who had a checkup were counseled on breastfeeding, 196 (92%), and on postpartum family planning, 196 (92%). Among mothers who had postpartum checkups, 139 (65.3%) and 103 (48.4%) of their newborns received weight measurement and immunization services, respectively (Table 8).
The majority of the respondents reported that they failed to have postnatal checkups because they were unaware of the need for them, 171 (80.3%), and because of the 40-day rule of home confinement in their communities, 157 (73.7%). Moreover, 95 (44.6%) and 57 (26.8%) respondents claimed, respectively, that the facility was too far away and that there was no access to transport for a postnatal visit (Figure 1).
Completion of maternity continuum of care
Among all study participants, 183 (42.8%) attended ANC-4+ visits, and 167 (39%) of these were retained in the continuum of care and received childbirth services from skilled birth attendants. However, only 92 (21.5%) mothers were retained along the continuum-of-care pathway and received at least one postnatal care service within the first week of delivery. Therefore, in this study, the overall completion of the maternity continuum of care was 21.5% (95% CI: 17.8-25.5).
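The reported point estimate and interval can be checked directly from the raw counts. The paper does not state which confidence interval method was used, so the sketch below uses the Wilson score interval, which comes out close to the published 17.8-25.5%.

```python
from statsmodels.stats.proportion import proportion_confint

completed, total = 92, 428
p_hat = completed / total                                   # 0.215 -> 21.5%
low, high = proportion_confint(completed, total,
                               alpha=0.05, method="wilson")
print(round(100 * p_hat, 1), round(100 * low, 1), round(100 * high, 1))
# -> 21.5 17.9 25.6, close to the reported 95% CI of 17.8-25.5
```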
Among the total of 428 mothers who participated in the study, 336 (78.5%) dropped out of the maternity continuum of care before completion. The highest proportions of dropout were observed at the completion of focused antenatal care visits and at postnatal care service utilization (Figure 2).
Factors associated with the completion of the maternity continuum of care

Place of residence, mother's and husband's level of education and employment status, time spent walking on foot to reach the nearest health facility, and other candidate variables were eligible from the bi-variable analysis. In the multivariable logistic regression analysis, mothers' secondary and above level of education, ≤30 min spent walking on foot to reach a health facility, use of an ambulance as a means of transportation, being para 5 or more, having a planned pregnancy, attendance at a pregnant women's conference (PWC), early ANC booking within the first trimester, being accompanied by partners, and being informed before discharge when to return for postnatal checkups were the variables associated with completion of the maternity continuum of care at a p-value of <0.05.
Mothers who attained a secondary or above level of education were 4.20 times more likely to complete the maternity continuum of care compared to mothers with no formal education (AOR = 4.20, 95% CI: 1.26-13.97). Mothers who spent ≤30 min walking to reach the nearest health facility were four times more likely to complete the maternity continuum of care compared to mothers who had to walk for more than 30 min (AOR = 4.00, 95% CI: 1.67-9.58). In addition, mothers who used an ambulance as a means of transportation to reach the health facility were 3.68 times more likely to complete the maternity continuum of care than those who did not use any vehicle but went on foot (AOR = 3.68, 95% CI: 1.23-11.06).
This study also identified that mothers of para five or more were 79% less likely to complete the maternity continuum of care compared to mothers with 1-2 children (AOR = 0.21, 95% CI: 0.05-0.90). In this study, mothers whose last pregnancy was planned were 3.29 times more likely to complete the maternity continuum of care than their counterparts (AOR = 3.29, 95% CI: 1.02-10.57). Mothers who attended pregnant women's conferences were 13.96 times more likely to complete the maternity continuum of care than those who did not attend any session (AOR = 13.96, 95% CI: 6.22-31.30). The odds of completing the maternity continuum of care among mothers who started ANC visits in the first trimester were 3.30 times higher than among their counterparts (AOR = 3.30, 95% CI: 1.54-7.05).
Moreover, mothers who were accompanied by their partners to health facilities during their antenatal care visits were 3.64 times more likely to complete the maternity continuum of care compared to their counterparts (AOR = 3.64, 95% CI: 1.76-7.53). Similarly, mothers who were informed, during childbirth services at a health facility, when to return for postnatal care had 3.57 times higher odds of completing the maternity continuum of care than those never informed at discharge about the need to return for a postnatal checkup (AOR = 3.57, 95% CI: 1.47-8.70) (Table 9).
Discussion
In this study, the overall completion of the maternity continuum of care was found to be 21.5% (95% CI: 17.8-25.5). Mothers' educational status, time spent walking on foot and means of transportation used to reach the nearest health facility, parity, having a planned pregnancy, attending a pregnant women's conference, time of antenatal care booking, partner accompaniment, and being informed before discharge when to return for postnatal care were the factors associated with completion of the maternity continuum of care.
The completion of the maternity continuum of care in the present study, 21.5% (95% CI: 17.8-25.5), reveals that a significant proportion of women dropped out along the maternity continuum-of-care pathway. This finding is in line with studies conducted in Nigeria (18.5%) (33), India (19%) (34), and the Dabat and Gondar Zuria districts of the Northern Gondar zone (21.6%) (26). However, it is higher than the studies conducted in Cambodia (5%) (18), Ghana (7.9%) (35), Tanzania (10%) (25), Ethiopia (6.56% and 9.1%) (19, 28), the Arba Minch Zuria district (9.7%) (30), the Legambo district of South Wollo zone (11.2%) (36), and the West Gojjam zone (12.1%) (29). A possible reason might be the time difference between the studies: there could have been an improvement in the accessibility of health services, which in turn improves the utilization of maternity care services.
However, the current finding is lower than the studies conducted in Zambia (38%), the Enemay district (45%) (31), Motta town and Hulet Eji Enese district (47%) (27), and Debre Markos town (67.8%) (20). The possible explanation for the discrepancy might be differences in maternity care service provision attributable to differences in socio-demographic and socioeconomic status, geographical barriers, and availability and accessibility of services and infrastructure between countries and regions. Another reason might be the difference in study area: mothers in the current study were mostly rural dwellers, less educated, and had inadequate knowledge of the importance of receiving maternity care services along a continuum-of-care pathway. Moreover, the previous studies considered the maternity continuum of care complete when mothers attended at least one ANC visit, had childbirth attended by an SBA, and had at least one postpartum checkup within 6 weeks of delivery, and/or when mothers attended ANC-4+ visits, had childbirth attended by an SBA, and had one health checkup within 48 h or within the 6-week postpartum period (19-21, 26-31, 36). As a result, their estimates of the outcome variable might be higher than in this study, which considered the maternity continuum of care complete only for mothers who attended ANC-4+ visits, had childbirth attended by an SBA, and had at least one health checkup, either for themselves or for their newborns, within the first week of the postpartum period.

Evidence revealed that the educational attainment of mothers has a significant association with the completion of the maternity continuum of care. Accordingly, this study revealed that mothers who attained a secondary or above level of education were 4.20 times more likely to complete the maternity continuum of care compared to mothers who did not attend school. This finding is consistent with studies conducted in South Asia and sub-Saharan Africa (8), India (34), Pakistan (15, 40), the Lao People's Democratic Republic (41), Ghana (35, 42), Nigeria (33), Egypt (24), and Ethiopia (21, 27, 31), in which mothers who attained a secondary or above level of education had higher odds of completing the maternity continuum of care than mothers who never attended school. This could be explained by the fact that more educated women have better opportunities to be aware of, and to decide when and where to receive, the recommended maternal healthcare services throughout their reproductive lives.
Other evidence showed that the time spent walking to reach the nearest health facility for maternal and neonatal healthcare services has a significant association with the completion of the maternity continuum of care. Accordingly, the findings of this study revealed that mothers who traveled ≤30 min on foot to arrive at the nearest health facility were 4.0 times more likely to complete the maternity continuum of care compared to those who had to travel for more than 30 min. This finding is comparable with studies conducted in Cambodia (18) and in Motta town and Hulet Eji Enese district of Northeast Ethiopia (27). The possible explanation is that access to health facilities reduces traveling time and transportation costs, which in turn leads to better utilization of maternity continuum-of-care services.
In this study, mothers who used an ambulance as a means of transportation to reach the nearest health facility were 3.68 times more likely to complete the maternity continuum of care than those who reached a health facility on foot. This finding is supported by studies conducted in Ghana and in the Dabat and Gondar Zuria districts of the Northern Gondar zone, Ethiopia, in which mothers who used a car as a means of transportation to reach health facilities for skilled delivery services were more likely to complete the maternity continuum of care than those who went on foot (26, 42). This might be explained by the fact that using any type of vehicle to reach a health facility reduces the time spent traveling and enhances skilled delivery service utilization, which is a key component of the maternity continuum of care. Thus, using an ambulance to reach a health facility has a positive association with the completion of the maternity continuum of care.
In this study, parity was found to be significantly associated with the completion of the maternity continuum of care: mothers of para five or more were 79% less likely to complete the maternity continuum of care compared to para 1-2 mothers. This finding is consistent with studies conducted in Pakistan and Egypt, in which mothers with first or second birth orders had higher odds of completing the maternity continuum of care compared to their counterparts (15, 24). A possible explanation might be that pregnancy and having a child are a source of joy and a means of having offspring across the global community; thus, newly married couples and mothers with low birth orders are motivated to seek and utilize the recommended maternity care services from skilled care providers.
Mothers with a planned pregnancy were 3.29 times more likely to complete the maternity continuum of care compared to their counterparts. This finding is consistent with studies conducted in the Arba Minch Zuria district of the Gamo zone and the Enemay district of Northwest Ethiopia, in which mothers whose last pregnancy was planned had higher odds of completing the maternity continuum of care than their counterparts (30, 31). The reason might be that women with planned and wanted pregnancies are psychologically well prepared and cautious about their pregnancy, and are thus interested in seeking and utilizing the recommended maternal and neonatal healthcare services compared to their counterparts.
The findings of this study showed that women who attended pregnant women's conferences had 13.96 times higher odds of completing the maternity continuum of care compared to their counterparts. This finding is consistent with a study conducted in the rural Libokemkem district of Northwest Ethiopia, in which women who attended pregnant women's conferences had higher odds of utilizing maternal health services compared to their counterparts (43). The possible explanation might be that women who attended a PWC during their pregnancy obtained adequate information about the maternal and neonatal healthcare services recommended for all pregnant women during pregnancy, childbirth, and the postpartum period from skilled providers, together with the merits of completing these services and the demerits of failing to do so, compared with their counterparts.
This study also revealed that women who booked ANC visits within the first 12 weeks of gestation in the last pregnancy were 3.30 times more likely to complete the maternity continuum of care than mothers who booked later. This finding is supported by studies conducted in Lao PDR (41), Tanzania (44), Motta town and Hulet Eji Enese district of the East Gojjam zone (27), the Arba Minch Zuria district of the Gamo zone (30), the West Gojjam zone (29), and Debre Birhan town (21), in which women who booked ANC within the first trimester had higher odds of completing the maternity continuum of care than those who booked late. The possible explanation might be that women who book the first ANC visit at an earlier gestational age have more opportunities for contact with skilled providers, which gives providers adequate time to communicate and enables the women to make adequate preparations and readiness to utilize the key maternity care components, such as skilled birth attendance and postpartum care.
The findings of this study also revealed that women who were accompanied by their partners while receiving maternity care services were 3.64 times more likely to complete the maternity continuum of care compared to mothers who were not accompanied by their partners. This finding is supported by studies conducted in Bangladesh (45), Afghanistan (46), and Kenya (47), in which women who were accompanied by their partners to receive maternal and neonatal health services from skilled providers had higher odds of utilizing the next key component of the maternity continuum of care compared to their counterparts. The possible reason might be the synergistic effect of maternal healthcare information and counseling provided to couples, which could result in better awareness of the key components of maternity continuum-of-care services.

Mothers who were informed before discharge when to return to health facilities or maternity care service centers were 3.57 times more likely to complete the maternity continuum of care compared to their counterparts. This is consistent with an analysis of EDHS 2016 data and a study conducted in the Dabat and Gondar Zuria districts of the Northern Gondar zone, in which women who were informed about pregnancy-related complications and received appropriate health education regarding the maternity care services to be utilized during pregnancy, childbirth, and the postpartum period had higher odds of completing the maternity continuum of care than their counterparts (26, 28). The possible reason might be that health information and education regarding maternity care services and postpartum danger signs, provided to women during pregnancy, childbirth, and the postpartum period, enhance their awareness and increase their demand for maternity care components compared to their counterparts.

Generally, this study assessed the prevalence of completion of the maternity continuum of care and identified associated factors among postpartum mothers. This evidence might help program managers and healthcare providers focus on the identified factors to increase the uptake of the maternity continuum of care and thus contribute to the achievement of global and national maternal mortality reduction plans.
This study has some limitations. First, it might be subject to recall bias, because the mothers may not recall all the services and advice they received. Second, since the study was cross-sectional, the temporal relationship between the factors and the outcome variable cannot be established. Finally, incompleteness of the birth records kept at the health posts by the health extension workers might have led the researchers to miss mothers during the preparation of the sampling frame.
Conclusion
The overall completion of the continuum of maternity care was low in the study area. Educational status of mothers, time spent walking on foot and means of transportation used to reach the nearest health facility, parity, having a planned pregnancy, attending a pregnant women's conference, time of antenatal care booking, partner accompaniment, and being informed before discharge when to return for postnatal care were the factors associated with completion of the maternity continuum of care. Therefore, appropriate strategic interventions that retain women in the continuum of maternity care by targeting these factors should be put in place by the different stakeholders. Moreover, health offices should monitor completion of the continuum of care as a performance indicator, as it is a priority agenda for quality service assurance, and should empower all healthcare providers, including health extension workers and women's development armies, to help women initiate antenatal care visits early and to conduct pregnant women's conferences to increase uptake of the continuum of maternity care.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Review Board of Ambo University, College of Medicine and Health Sciences (ethical clearance letter Ref. No. PGC/167/2021). The patients/participants provided their written informed consent to participate in this study.
Author contributions
TB, NW, and GG conceived the study protocol; participated in the study design, analysis, and report writing; and drafted the manuscript. FW, DD, and GM were involved in the analysis, report writing, and drafting of the manuscript. All authors have read and approved the final manuscript.
"year": 2022,
"sha1": "0f25005946732bffe47a13647a2ad04afd1e243f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "0f25005946732bffe47a13647a2ad04afd1e243f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Monodromy representations and surfaces with maximal Albanese dimension
We relate the existence of some surfaces of general type and maximal Albanese dimension to the existence of some monodromy representations of the braid group $\mathsf{B}_2(C_2)$ in the symmetric group $\mathsf{S}_n$. Furthermore, we compute the number of such representations up to $n=9$, and we analyze the cases $n \in \{2, \, 3, \, 4\}$. For $n=2, \, 3$ we recover some surfaces with $p_g=q=2$ recently studied (with different methods) by the author and his collaborators, whereas for $n=4$ we obtain some conjecturally new examples.
Introduction
The classification of surfaces $S$ of general type with $\chi(\mathcal{O}_S) = 1$, i.e. $p_g(S) = q(S)$, is currently an active area of research; see for instance the survey paper [BaCaPi06]. For these surfaces, [Deb82, Théorème 6.1] implies $p_g \leq 4$, and the cases $p_g = q = 4$ and $p_g = q = 3$ are nowadays completely described, see [CaCiML98], [HP02], [Pir02].
Regarding the case $p_g = q = 2$, a complete classification has recently been obtained when $K_S^2 = 4$, see [CMP14], [BPS16]. In fact, these are surfaces on the Severi line $K_S^2 = 4\chi(\mathcal{O}_S)$; we refer the reader to the aforementioned papers and the references contained therein for a historical account of the subject and more details.
The purpose of this note is to show how monodromy representations of braid groups can be concretely applied to the fine classification of surfaces with $p_g = q = 2$ and maximal Albanese dimension, allowing one to rediscover old examples and to find new ones.
The idea is to consider degree $n$, generic covers of $\mathrm{Sym}^2(C_2)$, the symmetric square of a smooth curve of genus 2, simply branched over the diagonal $\delta$. In fact, if such a cover exists, then it is a smooth surface $S$ with $\chi(\mathcal{O}_S) = 1$, $K_S^2 = 10 - n$, see Theorem 1. Furthermore, if $p_g(S) = q(S) = 2$ then the Albanese variety $\mathrm{Alb}(S)$ is isogenous to the Jacobian variety $J(C_2)$ (Proposition 7) and, for a general choice of $C_2$, the surface $S$ contains no irrational pencils (Proposition 8).
A group homomorphism $\mathsf{B}_2(C_2) \longrightarrow \mathsf{S}_n$ satisfying the requirements above will be called a generic monodromy representation of $\mathsf{B}_2(C_2)$, see Definition 3. By using the Computer Algebra System GAP4 (see [GAP4]) we computed the number of generic monodromy representations for $2 \leq n \leq 9$, see Theorem 2. In particular, this number is zero for $n \in \{5, 7, 9\}$, so there exist no generic covers in these cases, see Corollary 2. For the reader's convenience, we have included an Appendix containing the short script.
The previous discussion can now be summarized as follows.

Theorem. Let $f \colon S \longrightarrow \mathrm{Sym}^2(C_2)$ be a generic cover of degree $n$ whose branch locus is the diagonal $\delta$. Then $S$ is a surface of maximal Albanese dimension with $\chi(\mathcal{O}_S) = 1$ and $K_S^2 = 10 - n$. Moreover, if $2 \leq n \leq 9$ then $S$ is of general type. The isomorphism classes of generic covers of degree $n$ are in bijective correspondence with generic monodromy representations $\varphi \colon \mathsf{B}_2(C_2) \longrightarrow \mathsf{S}_n$, up to conjugacy in $\mathsf{S}_n$. For $2 \leq n \leq 9$, the corresponding number of representations is given in the table below:

| $n$ | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|
| Number of $\varphi$ | 16 | 3 · 80 | 6 · 480 | 0 | 15 · 2880 | 0 | 28 · 172800 | 0 |

Such results are still far from being conclusive, for at least two reasons:

(1) no attempt has been made here to compute the number of generic monodromy representations $\varphi \colon \mathsf{B}_2(C_2) \longrightarrow \mathsf{S}_n$ for all values of $n$. In principle, our GAP4 script could do this, but in practice it is not efficient enough when $n$ is big (the computation in the case $n = 9$ already took several hours). Furthermore, it would be desirable to extend our methods to non-generic representations, i.e. to non-generic covers of $\mathrm{Sym}^2(C_2)$;

(2) given a generic cover $f \colon S \longrightarrow \mathrm{Sym}^2(C_2)$, corresponding to a generic monodromy representation $\varphi \colon \mathsf{B}_2(C_2) \longrightarrow \mathsf{S}_n$, it is at the moment not clear how to explicitly compute $K_{\bar{S}}^2$, where $\bar{S}$ denotes the minimal model of $S$: in fact, we know of no general procedure to determine whether $S$ contains any $(-1)$-curves, see Proposition 6.
These are interesting problems that we hope to address in the future.
Let us now explain how this paper is organized. In Section 1 we collect some preliminary results that are needed in the sequel of the work, namely the Grauert-Remmert extension theorem, the GAGA principle and their corollaries, Bellingeri's presentation of $\mathsf{B}_2(C_2)$, and the classification of surfaces with $\chi(\mathcal{O}_S) = 1$ and maximal Albanese dimension. In Section 2 we prove our main results and make a more detailed analysis of our covers in the cases $n = 2, 3, 4$. It turns out that for $n = 2$ and $n = 3$ we rediscover some examples recently studied (using different methods) by the author and his collaborators, see [PiPol16], [PolRiRo17]; on the other hand, for $n = 4$ we conjecture that our construction provides new examples of minimal surfaces with $p_g = q = 2$, $K^2 = 6$ and maximal Albanese dimension, which we plan to investigate in a sequel of this work.
Notation and conventions. We work over the field $\mathbb{C}$ of complex numbers. By surface we mean a projective, non-singular surface $S$; for such a surface, $K_S$ denotes the canonical class, $p_g(S) = h^0(S, K_S)$ is the geometric genus, $q(S) = h^1(S, K_S)$ is the irregularity, and $\chi(\mathcal{O}_S) = 1 - q(S) + p_g(S)$ is the Euler-Poincaré characteristic.
We say that $S$ is of maximal Albanese dimension if its Albanese map $a_S \colon S \longrightarrow \mathrm{Alb}(S)$ is generically finite onto its image.

If $C$ is a smooth curve, we write $J(C)$ for the Jacobian variety of $C$.

The symbol $\mathsf{S}_n$ stands for the symmetric group on $n$ letters.

Proposition 1 (Grauert-Remmert extension theorem). Let $Y$ be a normal analytic space and $Z \subset Y$ a closed analytic subspace such that $U = Y - Z$ is dense in $Y$. Then any finite, unramified cover $f^{\bullet} \colon V \longrightarrow U$ can be extended to a normal, finite cover $f \colon X \longrightarrow Y$, and such an extension is unique up to isomorphism.
Proposition 2 (GAGA principle). Let $X, Y$ be projective varieties over $\mathbb{C}$, and $X^{an}, Y^{an}$ the underlying complex analytic spaces. Then (1) every analytic map $X^{an} \longrightarrow Y^{an}$ is algebraic; (2) every coherent analytic sheaf on $X^{an}$ is algebraic, and its algebraic cohomology coincides with its analytic one.
The two results above imply the following fact concerning extensions of covers of quasi-projective varieties. With a slight abuse of notation, we write $X$ instead of $X^{an}$.

Proposition 3. Let $Y$ be a smooth, projective variety over $\mathbb{C}$ and let $Z \subset Y$ be a smooth, irreducible divisor. Set $U = Y - Z$. Then any finite, unramified analytic cover $f^{\bullet} \colon V \longrightarrow U$ can be extended in a unique way to a finite cover $f \colon X \longrightarrow Y$, branched at most over $Z$. Moreover, there exists on $X$ a unique structure of smooth projective variety that makes $f$ an algebraic finite cover.
Proof. By Proposition 1, the cover $f^{\bullet} \colon V \longrightarrow U$ can be extended in a unique way to a finite analytic cover $f \colon X \longrightarrow Y$. Such a cover corresponds to a coherent analytic sheaf of algebras over the projective variety $Y$; now Proposition 2 implies that such a sheaf is algebraic, hence so are $X$ and $f$.

Since $X$ is normal and $Y$ is smooth, by the purity theorem ([SGA1, X.3.1]) the branch locus of $f$ is either empty or coincides with the smooth irreducible divisor $Z$. In both cases, a local computation shows that $X$ is a smooth scheme (see [EdJS10, Lemma 2.1]); then its underlying analytic space is a complex manifold, which is compact because it is a finite analytic cover of the compact manifold $Y$. Therefore $X$ is a smooth complete scheme endowed with a finite map $f \colon X \longrightarrow Y$ onto the projective scheme $Y$. If $L$ is an ample line bundle on $Y$, by [Laz04, Proposition 1.2.13] it follows that $f^*L$ is an ample line bundle on $X$, so $X$ is a smooth projective variety and we are done.
Corollary 1. Let $Y$ be a smooth projective variety over $\mathbb{C}$ and let $Z \subset Y$ be a smooth, irreducible divisor. Then isomorphism classes of connected covers of degree $n$, $f \colon X \longrightarrow Y$, branched at most over $Z$, are in bijection with group homomorphisms

$\varphi \colon \pi_1(Y - Z) \longrightarrow \mathsf{S}_n \quad (1)$

with transitive image, up to conjugacy in $\mathsf{S}_n$. Furthermore, $f$ is a Galois cover if and only if the subgroup $\mathrm{im}\,\varphi$ of $\mathsf{S}_n$ has order $n$, and in this case $\mathrm{im}\,\varphi$ is isomorphic to the Galois group of $f$.
Proof. By […, Chapter 8] we know that isomorphism classes of degree $n$, connected topological covers $f^{\bullet} \colon V \longrightarrow U$ are in bijection with conjugacy classes of group homomorphisms of type (1), and that Galois covers are precisely those for which $\mathrm{im}\,\varphi$ has order $n$. Since $U$ is a complex manifold, we can pull back its complex structure to $V$, in such a way that $f^{\bullet}$ becomes an analytic map. Then, by using Proposition 3, we can uniquely extend $f^{\bullet}$ to a degree $n$, algebraic finite cover $f \colon X \longrightarrow Y$, branched at most over $Z$ and such that $X$ is smooth and projective. This completes the proof.
The group homomorphism $\varphi$ is called the monodromy representation of the cover $f$, and its image $\mathrm{im}\,\varphi$ is called the monodromy group of $f$. By Corollary 1, if $f$ is a Galois cover then the monodromy group of $f$ is isomorphic to its Galois group.
Braid groups on Riemann surfaces
For more details on the results of this subsection, we refer the reader to [Bel04].
Definition 1. The braid group on $k$ strings on $C_g$ is the group $\mathsf{B}_k(C_g)$ whose elements are the braids based at a fixed set $P$ of $k$ distinct points of $C_g$ and whose operation is the usual product of paths, up to homotopies among braids.
It can be shown that $\mathsf{B}_k(C_g)$ does not depend on the choice of the set $P$. Moreover, there is a group isomorphism

$\mathsf{B}_k(C_g) \simeq \pi_1(\mathrm{Sym}^k(C_g) - \delta), \quad (2)$

where $\mathrm{Sym}^k(C_g)$ denotes the $k$-th symmetric product of $C_g$, namely the quotient of the product $(C_g)^k$ by the natural permutation action of the symmetric group $\mathsf{S}_k$, and $\delta$ stands for the big diagonal in $\mathrm{Sym}^k(C_g)$, namely the image of the set of $k$-tuples having at least two equal coordinates. We are primarily interested in the case $g = k = 2$.
Proposition 4. The braid group $\mathsf{B}_2(C_2)$ can be generated by five elements $a_1, a_2, b_1, b_2, \sigma$ subject to eleven relations.

Proof. See [Bel04, Theorem 1.2], which provides a finite presentation for the general case $\mathsf{B}_k(C_g)$.
Geometrically speaking, the generators of $\mathsf{B}_2(C_2)$ in the statement of Proposition 4 can be interpreted as follows. The $a_i$ and the $b_i$ are the braids that come from the representation of the topological surface associated with $C_2$ as a polygon with 8 sides and the standard identification of the edges, whereas $\sigma$ is the classical braid generator on the disk. In terms of the isomorphism (2), the generator $\sigma$ corresponds to the homotopy class in $\mathrm{Sym}^2(C_2) - \delta$ of a topological loop that "winds once around $\delta$".
Surfaces of general type with $\chi(\mathcal{O}_S) = 1$ and maximal Albanese dimension
Let us now describe surfaces of general type with $p_g(S) = q(S)$ and maximal Albanese dimension.
Proposition 5. Let $S$ be a minimal surface of general type with $\chi(\mathcal{O}_S) = 1$ and maximal Albanese dimension. Then we are in one of the following situations:

(1) $p_g(S) = q(S) = 4$, $K_S^2 = 8$ and $S = C_2 \times C_2'$, where $C_2$ and $C_2'$ are smooth curves of genus 2. In this case $\mathrm{Alb}(S) \simeq J(C_2) \times J(C_2')$ and $a_S \colon S \longrightarrow \mathrm{Alb}(S)$ is the product of the Abel-Jacobi maps of $C_2$ and $C_2'$, hence it is an immersion;

(2) $p_g(S) = q(S) = 3$, $K_S^2 = 6$ and $S = \mathrm{Sym}^2(C_3)$, where $C_3$ is a smooth curve of genus 3. In this case $a_S \colon S \longrightarrow \mathrm{Alb}(S)$ is birational and its image is a principal polarization. More precisely, if $C_3$ is not hyperelliptic then $a_S$ is an immersion (so its image is smooth), whereas if $C_3$ is hyperelliptic then $a_S$ contracts the unique $(-2)$-curve on $S$ corresponding to the $g^1_2$ on $C_3$ (so the image of $a_S$ has a rational double point of type $A_1$);

(3) $p_g(S) = q(S) = 3$, $K_S^2 = 8$ and $S = (C_2 \times C_3)/\mathbb{Z}_2$, where $C_2$ is a smooth curve of genus 2 with an elliptic involution $\tau_2$, $C_3$ is a smooth curve of genus 3 with a free involution $\tau_3$, and the cyclic group $\mathbb{Z}_2$ acts freely on the product $C_2 \times C_3$ via the involution $\tau_2 \times \tau_3$;

(4) $p_g(S) = q(S) = 2$ and $a_S \colon S \longrightarrow \mathrm{Alb}(S)$ is a generically finite, branched cover.
Monodromy representations of braid groups and surfaces with $p_g = q = 2$

Generic covers of $\mathrm{Sym}^2(C_2)$

Let $C_2$ be a smooth curve of genus 2 and let $\mathrm{Sym}^2(C_2)$ be its second symmetric product. The Abel-Jacobi map

$\pi \colon \mathrm{Sym}^2(C_2) \longrightarrow J(C_2)$

is birational; more precisely, it is the blow-down of the unique rational curve $E \subset \mathrm{Sym}^2(C_2)$, namely the $(-1)$-curve given by the unique $g^1_2$ on $C_2$. We have $\delta E = 6$, because the curve $E$ intersects the diagonal $\delta$ transversally at the six points corresponding to the six Weierstrass points of $C_2$. Writing $\Theta$ for the numerical class of a theta divisor in $J(C_2)$, it follows that the image $D := \pi_*\delta \subset J(C_2)$ is an irreducible curve with an ordinary sextuple point and no other singularities, whose numerical class is $4\Theta$ (see [PiPol16, Lemma 1.7]).
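As a sanity check on the last statement, the genus count for $D$ works out with elementary intersection arithmetic; the Python fragment below just spells out the numbers. The adjunction and delta-invariant formulas used in the comments are standard facts, not taken from the paper.

```python
# Numerical check of the claims about D = pi_*(delta) in J(C_2).
theta_sq = 2              # Theta^2 = 2 on a principally polarized abelian surface
D_sq = 4 * 4 * theta_sq   # D is numerically 4*Theta, so D^2 = 32
p_a = D_sq // 2 + 1       # adjunction with trivial canonical class: p_a(D) = 17
delta_inv = 6 * 5 // 2    # an ordinary point of multiplicity 6 drops the genus by 15
print(p_a - delta_inv)    # 2 = genus of the normalization, i.e. of delta ~ C_2
```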
Using the terminology of [MaPi02], we can now give the following definition.

Definition 2. Let $f \colon S \longrightarrow \mathrm{Sym}^2(C_2)$ be a connected cover of degree $n$ branched over the diagonal $\delta$, with ramification divisor $R \subset S$. Then $f$ is called generic if

$f^*\delta = 2R + R_0,$

where the restriction $f|_R \colon R \longrightarrow \delta$ is an isomorphism and $R_0$ is an effective divisor over which $f$ is not ramified.
Note that generic covers are never Galois unless $n = 2$ (in which case $f^*\delta = 2R$). Since $\delta$ is smooth, the genericity condition in Definition 2 is equivalent to requiring that the fibre of $f$ over any point of $\delta$ has cardinality $n - 1$; thus the restriction morphism $f|_{R_0} \colon R_0 \longrightarrow \delta$ is a cover of degree $n - 2$. Setting $\Gamma := f^*\delta$ and $Z := f^*E$, we infer

$\Gamma^2 = n\delta^2 = -4n, \quad Z^2 = nE^2 = -n, \quad \Gamma Z = n(\delta E) = 6n. \quad (3)$

If we now set $\alpha := \pi \circ f \colon S \longrightarrow J(C_2)$, then $\alpha$ is a generically finite cover of degree $n$, simply branched over the smooth locus of $D$ and contracting the curve $Z$ to the unique singular point of $D$. The case where $Z$ is irreducible is illustrated in Figure 1 below.

Theorem 1. Let $f \colon S \longrightarrow \mathrm{Sym}^2(C_2)$ be a generic cover of degree $n$ whose branch locus is the diagonal $\delta$. Then $S$ is a surface of maximal Albanese dimension with

$\chi(\mathcal{O}_S) = 1, \quad K_S^2 = 10 - n.$

Moreover, if $2 \leq n \leq 9$ then $S$ is of general type.
Proof. Since $\pi$ is the blow-down of $E$ and $K_{J(C_2)}$ is trivial, we have $K_{\mathrm{Sym}^2(C_2)} = E$, so the canonical class of $S$ is given by

$K_S = f^*K_{\mathrm{Sym}^2(C_2)} + R = Z + R. \quad (4)$

The curve $R$ is smooth of genus 2, so the genus formula and (4) yield

$2 = 2g(R) - 2 = R(R + K_S) = 2R^2 + RZ.$

On the other hand, by using the projection formula we can write

$RZ = R \cdot f^*E = f_*R \cdot E = \delta E = 6,$

hence $R^2 = -2$. Thus we can find $K_S^2$; in fact,

$K_S^2 = (Z + R)^2 = Z^2 + 2RZ + R^2 = -n + 12 - 2 = 10 - n. \quad (5)$

Now we have to compute $\chi(\mathcal{O}_S)$. Squaring both sides of $2R + R_0 = \Gamma$ yields

$4R^2 + 4RR_0 + R_0^2 = \Gamma^2 = -4n. \quad (6)$

Moreover, again by the projection formula we infer

$R_0\Gamma = 2RR_0 + R_0^2 = f_*R_0 \cdot \delta = (n - 2)\delta^2 = -4(n - 2). \quad (7)$

Combining (6) with (7), we get $RR_0 = 0$. Therefore the effective curves $R$ and $R_0$ are disjoint. This in turn allows us to compute $c_2(S)$; in fact, writing $\chi_{top}$ for the topological Euler number and recalling that $f|_{R_0} \colon R_0 \longrightarrow \delta$ is étale, by additivity we obtain

$\chi_{top}(S) = n\,\chi_{top}(\mathrm{Sym}^2(C_2) - \delta) + \chi_{top}(R) + \chi_{top}(R_0) = 3n - 2 - 2(n - 2) = n + 2. \quad (8)$

By using (5) and (8) together with the Noether formula $12\chi(\mathcal{O}_S) = K_S^2 + c_2(S)$, we get $\chi(\mathcal{O}_S) = 1$. The surface $S$ is of maximal Albanese dimension because, by the universal property of the Albanese map, the surjective morphism $\alpha \colon S \longrightarrow J(C_2)$ factors through $a_S \colon S \longrightarrow \mathrm{Alb}(S)$, so the image of $a_S$ has dimension 2. In particular we have $q(S) \geq 2$. If $2 \leq n \leq 9$ then $S$ is irregular with $K_S^2 > 0$, hence of general type by [Be82, Proposition X.1].
The values of $n \in \{2, \ldots, 9\}$ for which generic covers do exist will be given in Theorem 2. We have at the moment no general method to determine whether the surface $S$ described in Theorem 1 is minimal or not. A partial result about the location of its exceptional curves is the following.

Proposition 6. The curve $Z = f^*E$ is reducible for $n > 4$. Moreover, all $(-1)$-curves of $S$, if any, are components of $Z$.
Proof. By computing the arithmetic genus of $Z$, we obtain

$p_a(Z) = 1 + \tfrac{1}{2}Z(Z + K_S) = 1 + \tfrac{1}{2}(2Z^2 + RZ) = 1 + \tfrac{1}{2}(-2n + 6) = 4 - n.$

For $n > 4$ this quantity is negative, hence $Z$ is reducible and the first claim follows. The second claim is an immediate consequence of the fact that the only rational curve in $\mathrm{Sym}^2(C_2)$ is $E$.
Remark 1. In the cases n = 2 and n = 3 the curve Z is actually irreducible and S is minimal, see Subsections 2.2, 2.3. The irreducibility of Z for n = 4 is still an open problem, see Subsection 2.4.
Let us now consider the case $q(S) = 2$.
Proposition 7. Let $S$ be as in Theorem 1, and assume in addition that $q(S) = 2$. Then $\mathrm{Alb}(S)$ is isogenous to $J(C_2)$; more precisely, there exists an isogeny $\beta \colon \mathrm{Alb}(S) \longrightarrow J(C_2)$ such that $\alpha = \beta \circ a_S$. In particular, if $n$ is prime then $a_S$ coincides with $\alpha$, up to automorphisms of $J(C_2)$.
Proof. Since $q(S) = 2$, the Albanese variety $\mathrm{Alb}(S)$ is an abelian surface, so $a_S \colon S \longrightarrow \mathrm{Alb}(S)$ is generically finite and, by the universal property, there is an isogeny $\beta \colon \mathrm{Alb}(S) \longrightarrow J(C_2)$ such that $\alpha = \beta \circ a_S$. In particular, $\deg \beta$ divides $n$. If $n$ is prime, since $S$ is not birational to an abelian surface (recall that $\chi(\mathcal{O}_S) = 1$) we get $\deg \beta = 1$; this means that $\beta$ is a birational morphism between abelian surfaces, hence an isomorphism.
Recall that an irrational pencil (or irrational fibration) on a smooth, projective surface is a surjective morphism with connected fibres over a curve of positive genus.
Proposition 8. Let $S$ be as in Theorem 1 and assume that $q(S) = 2$. If $\phi \colon S \longrightarrow W$ is an irrational pencil on $S$, then $g(W) = 1$. Moreover, the general surface $S$ contains no irrational pencils at all.

Proof. We borrow the following argument from [PiPol16, Proposition 1.9]. Since $q(S) = 2$, we have either $g(W) = 1$ or $g(W) = 2$. The latter case must be excluded: otherwise, using the embedding $W \hookrightarrow J(W)$ and the universal property, we would obtain a morphism of abelian surfaces $\mathrm{Alb}(S) \longrightarrow J(W)$ with image isomorphic to the genus 2 curve $W$, a contradiction. Then $g(W) = 1$ and $\mathrm{Alb}(S)$ must be a non-simple abelian surface. On the other hand, by Proposition 7 we know that $\mathrm{Alb}(S)$ is isogenous to $J(C_2)$, and the latter surface is simple for a general choice of the curve $C_2$, see [Ko76, Theorem 3.1]. So, for a general choice of $S$, the Albanese variety $\mathrm{Alb}(S)$ is also simple and there are no irrational pencils on $S$.
We are now ready to apply the theory developed in Subsection 1.1 in order to produce generic covers $f \colon S \longrightarrow \mathrm{Sym}^2(C_2)$.
Definition 3. A generic monodromy representation of the braid group $\mathsf{B}_2(C_2)$ is a group homomorphism $\varphi \colon \mathsf{B}_2(C_2) \longrightarrow \mathsf{S}_n$ with transitive image and such that $\varphi(\sigma)$ is a transposition.
Generic covers and generic monodromy representations are related by the following result.

Theorem 2. Isomorphism classes of generic covers $f \colon S \longrightarrow \mathrm{Sym}^2(C_2)$ of degree $n$, branched over $\delta$, are in bijective correspondence with generic monodromy representations $\varphi \colon \mathsf{B}_2(C_2) \longrightarrow \mathsf{S}_n$, up to conjugacy in $\mathsf{S}_n$. For $2 \leq n \leq 9$, the number of such representations is given in the table in the Introduction.

Proof. The first part of the statement is an immediate consequence of Corollary 1 and the isomorphism (2). The computation of the number of monodromy representations with $\varphi(\sigma) = (1\ 2)$ was done by using a short GAP4 script, which the reader can find in the Appendix. The total number of representations is obtained by multiplying this number by the number of transpositions in $\mathsf{S}_n$, which is $n(n-1)/2$.
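The GAP4 script itself lives in the Appendix and is not reproduced in this note. The fragment below is only a Python sketch of the same brute-force idea: fix $\varphi(\sigma) = (1\ 2)$, run over all possible images of $a_1, a_2, b_1, b_2$, and keep the tuples that satisfy the defining relations and generate a transitive subgroup. Bellingeri's eleven relations must be supplied by hand (they are not listed in this extract), so `RELATIONS` is left as a placeholder; with the relations filled in, the count for $n = 3$ should come out as 80, per the table above. The search space has size $(n!)^4$, so this is feasible only for small $n$.

```python
from itertools import permutations, product

n = 3
IDENT = tuple(range(n))
SIGMA = (1, 0) + tuple(range(2, n))  # the transposition (1 2) acting on {0, ..., n-1}

def mul(p, q):
    # Composition p o q of permutations written as tuples.
    return tuple(p[i] for i in q)

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def transitive(gens):
    reached, frontier = {0}, [0]
    while frontier:
        x = frontier.pop()
        for g in gens:
            if g[x] not in reached:
                reached.add(g[x])
                frontier.append(g[x])
    return len(reached) == n

# Each relation is a word in the generators, built from mul and inv, that must
# evaluate to IDENT.  Placeholder: supply the eleven words from [Bel04].
# With RELATIONS empty, the loop below merely overcounts transitive tuples.
RELATIONS = []

count = 0
for a1, a2, b1, b2 in product(permutations(range(n)), repeat=4):
    if all(w(a1, a2, b1, b2, SIGMA) == IDENT for w in RELATIONS) \
            and transitive([a1, a2, b1, b2, SIGMA]):
        count += 1
print(count)
```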
As an immediate consequence of Theorem 2, we can now state the following non-existence result.
Corollary 2. Let $n \in \{5, 7, 9\}$. Then there exist no surfaces with $p_g = q = 2$ whose Albanese map is a generically finite, degree $n$ cover of $J(C_2)$ simply branched over the smooth locus of the curve $D$.

Finally, let us describe the situation in more detail when $n \in \{2, 3, 4\}$.
The case n = 2
In this case we are looking for generic monodromy representations $\varphi \colon \mathsf{B}_2(C_2) \longrightarrow \mathsf{S}_2$. Since $\mathsf{B}_2(C_2)$ is generated by five elements $a_1, a_2, b_1, b_2, \sigma$ and necessarily $\varphi(\sigma) = (1\ 2)$, we immediately see that there are $2^4 = 16$ possibilities for $\varphi$. The group $\mathsf{S}_2$ is abelian, so there is no conjugacy relation to consider, and we get sixteen isomorphism classes of double covers $f \colon S \longrightarrow \mathrm{Sym}^2(C_2)$, branched over $\delta$ and with $K_S^2 = 8$. These covers correspond to the sixteen square roots of $\delta$ in the Picard group of $\mathrm{Sym}^2(C_2)$; all of them give minimal surfaces by Proposition 6, since $Z$ is a smooth, irreducible curve of genus 2. One cover coincides with the natural projection $f \colon C_2 \times C_2 \longrightarrow \mathrm{Sym}^2(C_2)$; in fact, in this case $S = C_2 \times C_2$ has $p_g(S) = q(S) = 4$.
We claim that the remaining fifteen covers are surfaces with $p_g(S) = q(S) = 2$, $K_S^2 = 8$. Indeed, otherwise $p_g(S) = q(S) = 3$ and $S$ would belong to case (3) of Proposition 5; in particular, it would admit an irrational pencil $\phi \colon S \longrightarrow W$ with $g(W) = 2$, contradicting Proposition 8. By Proposition 7, the Albanese map of $S$ is generically finite of degree 2 onto $J(C_2)$. These surfaces are studied in [PolRiRo17, Section 2].
The case n = 3
In this case we are looking for generic monodromy representations $\varphi \colon \mathsf{B}_2(C_2) \longrightarrow \mathsf{S}_3$, up to conjugacy in $\mathsf{S}_3$. The output of our GAP4 script shows that if $\varphi(\sigma) = (1\ 2)$ there are 80 different choices for $\varphi$, so the total number of monodromy representations is $3 \cdot 80 = 240$. For every such representation we have $\mathrm{im}\,\varphi = \mathsf{S}_3$.

The GAP4 script also shows that each orbit of the conjugacy action of $\mathsf{S}_3$ on the set of monodromy representations consists of six elements; consequently the orbit set has cardinality $240/6 = 40$.

By Theorem 2, this implies that there are 40 isomorphism classes of generic covers $f \colon S \longrightarrow \mathrm{Sym}^2(C_2)$ of degree 3 branched over $\delta$. For all of them, the surface $S$ satisfies $p_g(S) = q(S) = 2$, $K_S^2 = 7$ and, by Proposition 7, its Albanese map is a generically finite cover of degree 3 onto $J(C_2)$. These surfaces were studied in [PiPol16], where it is proved, with different methods, that they are all minimal (it turns out that $Z$ is a smooth, irreducible curve of genus 1) and that they lie in the same deformation class. In fact, their moduli space is a connected, quasi-finite cover of degree 40 of $\mathcal{M}_2$, the coarse moduli space of curves of genus 2.
The case n = 4
In this case we are looking for generic monodromy representations ϕ : B 2 (C 2 ) −→ S 4 , up to conjugacy in S 4 .
The output of our GAP4 script shows that if ϕ(σ) = (1 2) there are 480 different choices for ϕ, so the total number of monodromy representations is 6 · 480 = 2880. For every such a representation we have im ϕ D 8 , the dihedral group of order 8.
The GAP4 script also shows that each orbit for the conjugacy action of $S_4$ on the set of monodromy representations consists of 12 elements, and consequently the orbit set has cardinality $2880/12 = 240$.
By Theorem 2, this implies that there are 240 isomorphism classes of generic covers $f : S \to \mathrm{Sym}^2(C_2)$ of degree 4 branched over $\delta$. For all of them, the surface $S$ satisfies $\chi(\mathcal{O}_S) = 1$, $K^2_S = 6$. We do not know whether the curve $Z$ is irreducible or not. However, we conjecture that, at least for some of these covers, $S$ is a minimal model with $p_g(S) = q(S) = 2$, and this would provide new examples of surfaces with these invariants and maximal Albanese dimension. We will not try to develop this point here, planning to come back to the problem in a sequel of this paper. | 2017-07-12T08:45:12.000Z | 2017-06-02T00:00:00.000 | {
"year": 2017,
"sha1": "f6a0257d8fc697167943cff4b4e642254526e72e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1706.00817",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fbdc47de6a9241cfdcd651cb18624ac154b9ee00",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
236262691 | pes2o/s2orc | v3-fos-license | RELATIONSHIP OF AGE AND NUTRITION STATUS WITH WORK FATIGUE IN INPATIENT NURSES AT DR M YUNUS HOSPITAL, BENGKULU
DOI: https://doi.org/10.36720/nhjk.v10i1.224 Abstract Background: Nursing is a profession that cannot be separated from the problem of work fatigue. Fatigue due to work has a negative impact on work, such as decreased accuracy, skill, and even work productivity. If this happens to nurses, it can put patients at risk, decreasing the quality of service. Work fatigue is caused by several factors, such as worker characteristics, namely age, sex, education and nutritional status, and work environment factors (Tarwaka, 2013). Nurses have a fairly heavy duty in assisting emergency services and nursing services continuously for 24 hours, so nurses sometimes eat irregularly, which can disturb their nutritional condition (Astuti, Ekawati, & Wahyuni, 2017). Objectives: The purpose of this study was to determine the relationship between age, nutritional status and subjective work fatigue in inpatient nurses at dr. M Yunus Bengkulu. Methods: The research design was an analytic survey with a cross-sectional approach. The population of this study was 128 nurses in the inpatient room, selected with a total sampling technique. The data collection instruments were a work fatigue feeling questionnaire consisting of 17 questions adopted from Setyawati (2010), whose validity and reliability were tested in her book entitled At a Glance About Work Fatigue; a nutritional status questionnaire based on physical measurements of body weight and height to determine BMI; and an age questionnaire in units of years. Data analysis used the chi-square test at a 95% confidence level, where a p-value ≤ 0.05 indicates a significant relationship and > 0.05 a non-significant one. Results: This study shows that there is no relationship between age and subjective work fatigue in nurses, with a p-value = 0.107 > α = 0.05, and that there is a relationship between nutritional status and subjective work fatigue in nurses, with a p-value = 0.000 < α = 0.05. Conclusion: Work fatigue in nurses should be an important concern; nurses should maintain a healthy body by consuming balanced nutritious food, exercising diligently and getting adequate rest, so as to reduce the risk of fatigue due to work.
INTRODUCTION
One of the occupational health and safety (K3) problems that workers often experience is work fatigue. Work fatigue can lead to work accidents. Fatigue is a condition in which the body is weakened in its activities (Budiono et al., 2003). Working for too long can cause the body to experience fatigue, until the body ultimately loses its energy reserves. Routine work can impose psychological stress despite experience (Nurmianto, 2008).
According to the World Health Organization (WHO), in 2013 the number of health workers worldwide was 43 million; this included 9.8 million doctors, 20.7 million nurses/midwives, and around 13 million other health workers. According to the Indonesian Ministry of Health, the number of nursing personnel in Indonesia in 2018 was 354,218, making nurses the largest group of health workers (Kemenkes RI, 2018).
According to Cucit, cited in Uli, Modjo, & Turdinanto (2018), nursing is a profession whose duty is to prioritize the interests of patients over one's own. The population increases in line with the increasing number of patients, so nurses are required to work harder and longer to complete all allocated tasks.
According to the World Health Organization (WHO), after heart disease, the second leading killer is a feeling of heavy fatigue (Gaol, Camelia, & Rahmiwati, 2018). Research by Dimkatni, Sumampouw Jufri, & Manampiring (2020) conducted on 126 nurses who worked in the emergency department (IGD), ICU, and inpatient wards at Bitung Hospital and Budi Mulia Bitung Hospital showed that 66.9% experienced moderate fatigue and 1.7% experienced high fatigue. Research conducted on 153 nurses in the inpatient room of Bandar Lampung Hospital showed that most of them, 75.8%, experienced fatigue (Saftarina, Mayasari, & Vilia, 2016). Fatigue is a common complaint that affects nurse performance. About 20% of nurses have work fatigue symptoms (Wiyarso, 2018).
According to Robbins, internal factors include gender, age, nutritional status, family dependents, education, and working time and length of work, while the work environment is an external factor (Setyawati, 2010). Age is one of the factors that can cause work fatigue. As age increases, the condition of the body declines, with decreased function of vision, hearing, memory, movement and even decision-making. Therefore, a person's age must also be considered when assigning a job (Tarwaka, 2013). According to Astuti et al., there is a relationship between age and work fatigue among nurses (Astuti et al., 2017).
Workers require calories and nutrients to do their work and maintain their health so that they can be productive at work (Budiono et al., 2003). According to Lestari et al., nurses who have abnormal nutritional status tend to experience fatigue, and there is a relationship between nutritional status and work fatigue (Rahmawati & Afandi, 2019).
Dr. M. Yunus Bengkulu is a regional public hospital belonging to the provincial government. The conditions of patients who are hospitalized, both in terms of the number and types of diseases suffered are very diverse and demand high accuracy and thoroughness from the nurses. Based on this background, the researchers are interested in conducting this research.
Study Design
This research is a type of analytic survey research with a cross sectional approach.
Setting
This research was conducted at dr. M Yunus Hospital, Bengkulu, in August 2020. Because the research took place during the COVID-19 pandemic, the researchers were required to apply the COVID-19 prevention protocol.
Research Subject
The subjects of this research were all 128 nurses in the inpatient rooms of dr. M. Yunus Hospital, Bengkulu, selected by total sampling.
Instruments
The data collection instruments were a work fatigue feeling questionnaire consisting of 17 questions adopted from Setyawati (2010), whose validity and reliability were tested in her book entitled At a Glance About Work Fatigue; a nutritional status questionnaire based on physical measurements of body weight and height to determine BMI; and an age questionnaire in units of years.
Data Analysis
Data analysis consisted of univariate analysis to determine the distribution and frequency of the independent variables (age and nutritional status) and the dependent variable (subjective work fatigue), and bivariate analysis to determine the relationship between age, nutritional status and subjective work fatigue. This study used the chi-square test at a 95% confidence level, where a p-value ≤ 0.05 indicates a significant relationship and > 0.05 a non-significant one. Analyses were performed using STATA version 11.
Ethical Consideration
This study received approval from the Health Research Ethics Committee of dr. M. Yunus Bengkulu with letter number No.21/KEPK-RSMY/VIII/2020.

RESULTS

According to the results of this study in Table 1, of the 128 nurses, 65 (50.7%) experienced subjective work fatigue in the moderate category and 63 (49.2%) in the severe category. There were 39 (30.5%) nurses aged ≤ 35 years and 89 (69.5%) nurses aged > 35 years. Ninety-seven (75.8%) nurses had normal nutritional status and 31 (24.2%) had abnormal nutritional status.
Analysis of the Relationship between Age and Nutrition Status with Work Fatigue
Based on the data in Table 2, of the 39 nurses aged ≤ 35 years, 24 (18.75%) experienced moderate subjective work fatigue and 15 (11.72%) experienced severe subjective work fatigue. Meanwhile, of the 89 nurses aged > 35 years, 41 (32.03%) experienced moderate subjective work fatigue and 48 (37.50%) experienced severe subjective work fatigue. Likewise, of the 97 nurses with normal nutrition, 58 (59.8%) experienced moderate subjective work fatigue and 39 (40.2%) experienced severe subjective work fatigue, whereas of the 31 nurses with abnormal nutrition, 7 (22.6%) experienced moderate subjective work fatigue and 24 (77.4%) experienced severe subjective work fatigue.
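The bivariate results can be checked directly from these cross-tabulated counts. The sketch below re-runs the test in Python rather than STATA 11; `correction=False` (no Yates continuity correction) is an assumption on our part, chosen because it reproduces the reported p-values.

```python
from scipy.stats import chi2_contingency

# Rows: age <= 35 vs. > 35 years; columns: moderate vs. severe fatigue
age_table = [[24, 15],
             [41, 48]]
# Rows: normal vs. abnormal nutritional status; same columns
nutrition_table = [[58, 39],
                   [7, 24]]

for name, table in [("age", age_table), ("nutrition", nutrition_table)]:
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"{name}: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
# age: p = 0.107 (> 0.05, not significant)
# nutrition: p < 0.001 (significant), matching the reported results
```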
DISCUSSION
Based on the results of research conducted on 128 nurses in the inpatient room of dr. M Yunus Bengkulu Hospital, the proportion of nurses aged > 35 years was 69.53%. Increasing age is followed by a decrease in VO2 max, visual acuity, hearing acuity, speed of distinguishing things, decision-making, and short-term memory abilities. Thus, the influence of age must always be taken into consideration when giving someone a job (Tarwaka, 2013).
Based on the research results, of the 39 nurses aged ≤ 35 years, 24 (18.75%) experienced moderate subjective work fatigue and 15 (11.72%) experienced severe subjective work fatigue. Nurses aged ≤ 35 years may nevertheless experience severe work fatigue because they are assigned night shifts, during which they must fight drowsiness, and because several of them have excess nutritional status. According to Dimkatni et al. (2020), work shifts can give nurses low sleep quality, which in turn has an impact on work fatigue. According to Oksandi & Karbito (2020), nurses with poor nutritional status were 3.16 times more likely to experience fatigue than nurses with good nutritional status.
Meanwhile, of the 89 nurses aged > 35 years, 41 (32.03%) experienced moderate subjective work fatigue and 48 (37.50%) experienced severe subjective work fatigue. The 41 nurses aged > 35 years who experienced only moderate subjective work fatigue likely did so because their workload was light and they had good relationships with co-workers, making their work feel easier.
Based on the results of the analysis using the chi-square test, there was no relationship between age and work fatigue of nurses in the inpatient room of dr. M Yunus Hospital, Bengkulu. The results of this study are in line with research conducted on construction department employees, which found no relationship between age and employee work fatigue (Gaol et al., 2018). According to Oksandi & Karbito (2020), workload can be a measure of how long workers can work without experiencing fatigue or work-related disruption; the higher the workload, the higher the risk of experiencing work-related fatigue. According to Setyawati (2010), mental coaching that takes place periodically and specifically can change the tendency for fatigue problems to arise in workers.
Most of the nurses had normal nutritional status, namely 97 (75.78%). Occupational nutrition is needed to maintain and improve health status and to keep the workforce at its optimum. Health and work ability are closely related to a person's nutritional level (Suma'mur, 2014).
Based on the research results, of the 97 nurses with normal nutrition, 58 (45.31%) experienced moderate subjective work fatigue and 39 (30.47%) experienced severe subjective work fatigue. Even though these nurses were in good nutritional condition, they could experience severe subjective work fatigue due to the high work activity in the midst of the COVID-19 pandemic and also because some of them were over 35 years old.
Of the 31 nurses with abnormal nutrition, 7 (5.47%) experienced moderate subjective work fatigue and 24 (18.75%) experienced severe subjective work fatigue. Nurses with abnormal nutritional conditions can still experience only moderate subjective work fatigue because most of them work morning shifts, with shorter working hours compared to day and night shifts. This is in line with research on nurses at Herna Hospital Medan, where nurses on the morning shift experienced less work fatigue because after finishing work they could rest and sleep at night (Nuraini, 2019). According to the results of research from Siregar & Wenehenubun (2019), nurses, the majority of whom are women, can experience work fatigue if they get night shifts of long duration. According to Nurmianto (2008), working night shifts can pose a big health risk to workers, including sleep disorders, fatigue, heart disease, high blood pressure, and gastrointestinal disorders.
Based on the results of the analysis using the chi-square test, there is a relationship between nutritional status and work fatigue of nurses in the inpatient room of dr. M. Yunus Hospital, Bengkulu. Consuming balanced nutrition is important so that the body remains healthy and work fatigue is avoided, especially for nurses who are busy dealing with patients with various disease backgrounds. Nurses with abnormal nutritional conditions will experience a decrease in the ability and endurance of the body in doing work. According to research from Rahmawati & Afandi (2019), workers with inadequate nutritional intake can feel tired compared to workers with adequate nutritional intake.
This is in line with research on nurses and midwives at Puskesmas Mlati II, where nutritional status had a significant relationship with work fatigue (Ardiyanti, Wahyuni, & Jayanti, 2017). According to research from Lestari & Isnaeni (2020), the occurrence of work fatigue is related to the nutritional status of workers: as many as 38 people with abnormal nutritional status experienced more work fatigue than those with normal nutritional status.
CONCLUSION
Most of the nurses in the inpatient room were > 35 years old, and their nutritional status was mostly normal. The results showed that there was no relationship between age and subjective work fatigue in nurses, and there was a relationship between nutritional status and subjective work fatigue in nurses.
SUGGESTIONS
It is expected that the inpatient nurses of dr. M Yunus Bengkulu will be able to manage their own working time by making the best use of the rest time available and stretching their muscles in between busy working hours. Nurses are also expected to maintain a healthy body by consuming a balanced nutritious diet and exercising diligently.
ACKNOWLEDGMENT
Our thanks go to the Directorate of Research and Community Service, Directorate General of Research and Development Strengthening Ministry of Research, Technology / National Research and Innovation Agency (KEMENRISTEK-BRIN RI) who have provided grants through the novice lecturer research program. Apart from that, we would also like to thank dr. M. Yunus Hospital, Bengkulu and other parties who have helped for the cooperation that has been given, so that this research can be carried out.
DECLARATION OF CONFLICTING INTEREST
This study has no conflict of interest.
AUTHOR CONTRIBUTION
Rina Aprianti: Data collection in the field, data analysis, data interpretation, preparation of research reports, compile the manuscript.
Susilo Wulan: Data collection in the field, data analysis, data interpretation, preparation of research reports, compile the manuscript.
Elza Wulandari: Data collection in the field, data analysis, data interpretation, preparation of research reports, compile the manuscript. | 2021-07-26T00:06:01.873Z | 2021-06-09T00:00:00.000 | {
"year": 2021,
"sha1": "938546175ad66de79fc57332f1b7ada9ba202f16",
"oa_license": "CCBYNC",
"oa_url": "https://ejournal-kertacendekia.id/index.php/nhjk/article/download/224/197",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ba43f8b929df0d8004074ba5db3e06d62047c4a2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
238039622 | pes2o/s2orc | v3-fos-license | The logit model as a way of integral assessment of oxidative stress induced by mechanical eye injury
The eyes are exposed to aggressive environmental influences. The blood-ophthalmic barrier is one of the resistance mechanisms serving to protect the body. Mechanical eye trauma violates the integrity of the hemato-ophthalmic barrier and induces oxidative stress against the background of a general inflammatory process with disturbance of cellular structures. The aim of our study was to investigate the peculiarities of free radical processes and antioxidant protection of the organism under oxidative stress induced by mechanical eye injury. The experiment was conducted on 150 six-month-old male rats. In blood, liver, brain, heart and skeletal muscle tissues we studied the state of enzyme indicators of oxidative stress over time in intact animals and in rats with violation of the blood-ophthalmic barrier by mechanical eye injury under different types of therapy. Based on the data obtained, we performed an integral assessment of oxidative homeostasis in injured animals using a logistic model. The resulting logistic regression equations allow modifying the redox processes in the body by applying biologically active compounds as additional therapy, but they are of more fundamental than practical interest, as they illustrate the interrelationships of the lipid peroxidation-antioxidant system enzymes in the tissues under study.
Introduction
The visual analyzer, due to its structure, is constantly exposed to aggressive environmental influences, primarily related to the spectral characteristics of sunlight, which induce increased formation of singlet oxygen species and, in turn, the destruction of lipids, proteins and DNA of cells [1][2][3]. The natural load on the visual analyzer is reinforced by insufficient nutrition, increased work intensity associated with increased exploitation of the visual organs, and an increasing proportion of chronic metabolic diseases associated with ophthalmic pathologies [4,5].
The blood-ophthalmic barrier is one of the resistance mechanisms, serving to protect the body and prevent homeostasis disruption when the body is exposed to factors that can disrupt this balance. It is responsible for regulating the inflow into and out of the eye of various substances characteristic of normal and pathological metabolism, and also has an immune function, preventing the entry of microorganisms, antibodies and leukocytes [6,7,8]. Endothelial cells of the eye microcirculatory channel are the main element of the hemato-ophthalmic barrier and penetration of substances from blood into tissues and cells of the eye and back occurs through the dense cell membranes of the endothelium [9].
Mechanical trauma of the eye violates the integrity of the hemato-ophthalmic barrier and induces oxidative stress against the background of a general inflammatory process with disruption of cellular structures [10]. An additional aggravating factor is oxygen, which is necessary for cellular respiration. Oxidative processes induced by free radicals always occur in the body and are necessary for metabolism, respiration and immune reactions, but they are balanced by repair processes thanks to endogenous and exogenous antioxidants [11,12]. With inflammation, the number of free radicals increases and oxidation processes exceed reduction reactions, which leads to increased destruction not only of injured cellular structures but of intact ones as well, and this disrupts the normal activity of the entire body.
To determine the intensity of oxidative stress, we studied the activities of catalase, superoxide dismutase, glutathione peroxidase and glutathione reductase, as well as the concentrations of malonic dialdehyde and diene conjugates, in blood serum and in liver, brain, heart and skeletal muscle tissues.
Catalase is the first link of intracellular protection against reactive oxygen species. Its main functions are the neutralization of the superoxide anion radical O2−, the hydroxyl radical and radicals of unsaturated fatty acids (lipid peroxide radicals), and the splitting of hydrogen peroxide formed during cellular respiration into molecular oxygen and water.
Superoxide dismutase is an endogenous free oxygen radical acceptor, it removes superoxide radicals and prevents the formation of other more dangerous free radicals: hydroxyl radical and singlet oxygen.
Glutathione peroxidase catalyzes the peroxide detoxification reaction without free radical formation, using reduced glutathione, γ-glutamylcysteinylglycine (GSH), as a hydrogen donor.
Glutathione reductase together with glutathione peroxidase form a closed antiperoxidase complex in which peroxidase neutralizes peroxides to hydrogen and water, while glutathione is oxidized and glutathione reductase reduces oxidized glutathione, turning it into a substrate for glutathione peroxidase activity.
Malonic dialdehyde is one of the end products of lipid peroxidation, whose transformations result in the formation of insoluble lipid-protein complexes (lipofuscin).
Diene conjugates are the primary products of oxidation in the body, and their concentration can be used to judge the intensification of free-radical processes in the body. They are toxic metabolites that have a damaging effect on lipoproteins, proteins, enzymes and nucleic acids [10,11].
Thus, oxidative stress underlies the pathogenesis of many diseases, which emphasizes the importance of its assessment and finding ways to stop it [13,14].
The aim of our study was to investigate the peculiarities of free radical processes and antioxidant protection of the organism under oxidative stress induced by mechanical eye injury.
The main objectives of our work were to study the state of oxidative stress indicators (catalase (CAT), superoxide dismutase (SOD), glutathione peroxidase (GP), glutathione reductase (GR), malonic dialdehyde (MDA) and diene conjugates (DC)) over time in intact animals and in rats with a violation of the blood-ophthalmic barrier by mechanical eye injury, and to conduct an integral assessment of oxidative homeostasis in injured animals using a logistic model.
Materials and methods
The experiment was carried out on male mongrel rats of six months of age, weighing 220-240 g and numbering 150 animals. All animals were divided equally into 5 groups of thirty rats in each group. Group 1 rats were intact animals. Groups 2, 3, 4, and 5 were experimental, where all animals received penetrating wounds to both eyes. Group 2 animals were not treated for mechanical eye injury. Group 3 rats received standard therapy for eye injury, Group 4 animals received standard therapy with the addition of quercetin injections intraperitoneally, and Group 5 animals received only quercetin injections. Detailed methodology of the experiment is presented in our previously published work. The animals during the experiment were kept with free access to water and food on a standard vivarium diet [15].
The activities of catalase, SOD, GP and GR, as well as the concentrations of MDA and DC, were studied in blood and in brain, heart, liver and skeletal muscle tissues before the experiment, as well as on days 1, 3, 5, 7 and 14 of the experiment. Catalase activity was determined by the method of M.A. Korolyuk. SOD activity was determined by the method of V.S. Gurevich. GP activity was determined by the method of V.M. Moin. GR activity was determined by oxidized glutathione accumulation. MDA concentration was determined according to the method of V.V. Rogozhin. DC concentration was determined by a spectrometric method [15].
In accordance with ethical standards, rats were decapitated under ether anesthesia, blood was collected and brain, heart, liver, skeletal muscle tissue were extracted and homogenates were prepared from them [15].
For an integral assessment of homeostasis in rats we used the following coefficients of oxidative stress: the coefficient expressing the ratio of catalase activity to SOD activity; the antioxidant-oxidant index (AOI), expressing the ratio of catalase activity to MDA concentration; the ratio of MDA concentration to DC concentration; and the local antioxidant index (LAI), representing the ratio of the product of catalase and SOD activities to MDA concentration. A helper implementing these definitions is sketched below.
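The four coefficients are simple ratios of the measured quantities, so they can be computed directly; the function below is a straightforward transcription of the definitions above (units follow whatever assay units the activities and concentrations are reported in).

```python
def oxidative_stress_indices(cat, sod, mda, dc):
    """Integral coefficients of oxidative stress as defined above.

    cat, sod -- catalase and SOD activities; mda, dc -- MDA and DC
    concentrations, all in their respective assay units.
    """
    return {
        "CAT/SOD": cat / sod,        # catalase to SOD activity ratio
        "AOI": cat / mda,            # antioxidant-oxidant index
        "MDA/DC": mda / dc,          # end-product to primary-product ratio
        "LAI": (cat * sod) / mda,    # local antioxidant index
    }
```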
Results of the study
During the processing of the experimental data we found that the most significant changes in the activity and concentration of the oxidative stress marker enzymes in the tissues under study occurred on the 5th, 7th and 14th days of the experiment, and we decided to write down the logistic regression equations for each tissue for the above time periods.
Logistic regression equations were accordingly written for each tissue for the fifth, seventh and fourteenth days. The logit models obtained for each tissue under study allow predicting the shift of redox equilibrium in the rat organism by identifying and evaluating the most significant parameters of the organism's antioxidant status.
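As an illustration of the modeling approach (not the study's actual equations, which rely on the measured tissue data), the sketch below fits a binary logit model on synthetic enzyme measurements shaped like those described in the text: injured animals have depressed CAT, SOD, GP and GR and elevated MDA and DC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 60
y = np.repeat([0, 1], n // 2)           # 0 = intact, 1 = injured (synthetic labels)
X = rng.normal(1.0, 0.1, size=(n, 6))   # columns: CAT, SOD, GP, GR, MDA, DC
X[y == 1, :4] *= 0.7                    # antioxidant enzyme activities fall after injury
X[y == 1, 4:] *= 1.5                    # MDA and DC concentrations rise after injury

model = LogisticRegression(max_iter=1000).fit(X, y)
# The fitted coefficients play the role of the logit-equation coefficients:
# logit(p) = b0 + b1*CAT + b2*SOD + b3*GP + b4*GR + b5*MDA + b6*DC
print(dict(zip(["CAT", "SOD", "GP", "GR", "MDA", "DC"], model.coef_[0].round(2))))
```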
We also analyzed the dynamics of the number of coefficients in the logistic regression equations (Fig. 1). Mechanical trauma of the eye is a stimulant of oxidative processes in the body, as toxic products of autolysis of damaged tissues enter the blood and intensify oxidation and the production of free radicals. This disturbs the redox equilibrium in the body, which is reflected in the state of the lipid peroxidation-antioxidant system enzymes. At the same time, the activities of catalase, SOD, GP and GR decrease in tissues, while the concentrations of MDA and DC increase. The tissues of the brain, heart, liver and skeletal muscle, as well as blood, are no exception and react accordingly to mechanical trauma of the eye.
Application of the different types of therapy modulates the oxidative homeostasis of the tissues under study, and calculation of the indices reveals even small fluctuations of the disturbed redox equilibrium.
In general, the activities of lipid peroxidation-antioxidant enzymes in the studied rat tissues under oxidative stress caused by mechanical impact on the blood-ophthalmic barrier are more effectively stabilized by standard therapy of mechanical eye injury with the addition of quercetin as an injection.
Therapy with quercetin alone for mechanical eye injury also affects enzyme dynamics in the tissues studied, promoting a return to the physiological norm, but not as effectively as standard anti-inflammatory therapy alone or comprehensive therapy with quercetin supplementation. There is an opinion that the use of synthetic antioxidants to suppress oxidative stress can disrupt the signaling role of free radicals and thereby impair the body's adaptive capabilities. From these considerations, it is clear that a systematic study of natural protectors with antioxidant properties, such as α-tocopherol, ascorbic acid, carotenoids, flavonoids, ubiquinol and neuropeptides, is an urgent task.
Analysis of the effects of these compounds on the body under conditions of oxidative stress, the mechanism of their effect and specificity of their protective effect can form the basis for improving treatment protocols for many human pathologies.
Natural antioxidants can play an essential role in the prevention of diseases associated with oxidative damage, and the study of the mechanisms of the antioxidant system creates an opportunity to develop a new strategy for the prevention and treatment of these diseases.
Conclusions
The resulting logistic regression equations for identifying the most important factors of oxidative stress in mechanical eye trauma over time allow modifying the redox processes in the body by applying biologically active compounds as adjunctive therapy, but they are of more fundamental than practical interest, as they illustrate the interrelation of the lipid peroxidation-antioxidant system enzymes in the tissues under study. | 2021-08-27T17:01:31.686Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "9ac89309e5e12c80fd7d99640a45e84ac66eadac",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/49/e3sconf_interagromash2021_12029.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "55a42ff7f3ca4c3e60d97c1b7566d10f172a077c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231841105 | pes2o/s2orc | v3-fos-license | An Optimized K-Edge Signal Extraction Method for K-Edge Decomposition Imaging Using a Photon Counting Detector
In K-edge decomposition imaging for multienergy systems with photon counting detectors (PCDs), the energy bins significantly affect the intensity of the extracted K-edge signal. Optimized energy bins can provide a better K-edge signal to improve the quality of the decomposition images and have the potential to reduce the amount of contrast agents. In this article, we present the Gaussian spectrum selection method (GSSM) for multienergy K-edge decomposition imaging, which can extract an optimized K-edge signal by optimizing the energy bins compared with the conventional theoretical attenuation selection method (TASM). GSSM decides the width and locations of the energy bins using a simple but effective model of the imaging system, which takes the degraded energy resolution of the detector and the continuous x-ray spectrum into consideration. Besides, we establish the objective function, the difference of attenuation to relative standard deviation ratio (DAR), to determine the optimal energy bins which maximize the K-edge signal. The results show that GSSM gets a better K-edge signal than TASM, especially at lower concentration levels of contrast agents. The new method has the potential to improve the contrast and reduce the amount of contrast agents.
INTRODUCTION
K-edge decomposition imaging has remarkable potential in some clinical applications, such as x-ray oncology imaging of the breast and the abdomen [1,2]. At present, K-edge decomposition imaging is commonly realized using traditional dual-energy devices [3,4]. There are four types of devices for dual-energy imaging: the sequence scan device, the dual-source device [5], the dual-layer detector device [6], and the fast kVp switching device [7]. The sequence scan device requires double exposures, which increases motion artifacts [8]. The dual-source device has a much more complex system, and the images obtained have different phases. The dual-layer detector device can obtain images at the same phase in one exposure, but its energy resolution is relatively weak, which degrades the quality of the resulting images. The fast kVp switching device places higher requirements on the imaging system and still has the problem of phase matching. Besides, the energy mixing of the photons weakens the K-edge signal [9].
In recent years, the development of photon counting detectors (PCDs) has attracted much attention for multienergy imaging [10]. Multienergy imaging based on PCDs can obtain images within different energy bins in one exposure and can thereby solve the existing problems of dual-energy imaging to a certain extent [11,12]. Also, because of the energy-resolving ability and adjustable energy thresholds of PCDs, it can obtain more precise spectral information to improve the K-edge signal [13]. Therefore, multienergy imaging based on PCDs is one of the current research focuses.
The energy bins used for the K-edge decomposition imaging based on the PCDs significantly affect the contrast of the processed images [14,15]. The energy bins decided by the conventional theoretical attenuation selection method (TASM) are widely used for K-edge decomposition imaging as shown in Figure 1. In this figure, the two energy bins are symmetrical on both sides of the theoretical K-edge position [16,17], but the limited energy resolution of the PCDs distorts the attenuation curve [18] which will lead to the deviation of the energy bins in TASM. Several physical effects of the PCD are responsible for the degradation of energy resolution, including Compton scattering [19], charge sharing [20], pulse pileup [21], and fluorescence emission [22]. Moreover, the continuous x-ray spectrum also flattens the K-edge signal [23]. The weakened K-edge signal can further influence the quality of the decomposition image and increase the amount of the contrast agents.
Some authors optimized the K-edge decomposition algorithm in previous studies [24][25][26] to guarantee the results of the decomposed images. Mang Feng et al. [27] and Ding et al. [28] optimized the x-ray spectrum, which has the potential to achieve a better image contrast.
Another effective way to improve the quality of K-edge decomposition imaging is optimizing the energy bins. Previous work mostly focused on optimizing the width of the energy bins used for K-edge imaging: He et al. decided the energy bin width by the signal difference to noise ratio (SDNR) next to the K-edge position [29]; Bo Meng et al. used the redescribed signal to noise ratio (SNR) to obtain the energy bin width next to the K-edge position [30]; and Seung-Wan Lee et al. carried out simulation work on optimizing the energy bins [31]. However, few studies considered the effects of the degraded energy resolution and the continuous x-ray spectrum on energy bin optimization, which distort the attenuation curve and lead to a deviation of the energy bins. To take these negative factors into consideration, Silvia Pani et al. selected the energy bins by mapping the spectrum passing through the contrast agents [25], but the disadvantage of this approach is that it is too tedious for practical application.
In this work, we propose the Gaussian spectrum selection method (GSSM) for multienergy imaging to increase the intensity of K-edge signal. It takes the degraded energy resolution and the continuous x-ray spectrum into consideration by modeling the imaging system and decides the optimal energy bins by the objective function, difference of attenuation to relative standard deviation ratio (DAR), proposed in this research. GSSM can obtain both the width and the locations of the optimized energy bins without the spectrum mapping process. The experimental results in Experimental Materials and Designs show that the decomposition image obtained by the GSSM has a higher quality than that obtained by the TASM.
METHOD
The method of this study for optimizing the energy bins is based on modeling the multienergy imaging system. We estimate the influence of the energy resolution (R_E) by applying Gaussian convolution to the theoretical mass attenuation curve, and we estimate the influence of the continuous x-ray spectrum distribution by calculating the equivalent mass attenuation coefficients of the energy bins. Finally, we establish the objective function DAR to determine the optimal energy bins which maximize the K-edge signal.
Imaging System Modeling
The theoretical mass attenuation curve of a material with a K-edge changes to the shape shown in Figure 2 under the influence of $R_E$; this can be estimated using Gaussian convolution [18,32]. The theoretical mass attenuation coefficient data are acquired from the National Institute of Standards and Technology (NIST) [33] and recorded as $Att_0(E)$. The spectrum of the x-ray tube, $P_0(E)$, is estimated with the simulation software SpekCalc [34,35]. The process of Gaussian convolution is described in Equation (1),

$$Att_G(E) = Att_0(E) \otimes C(E), \qquad P_G(E) = P_0(E) \otimes C(E), \tag{1}$$

where $Att_G(E)$ is the mass attenuation curve after Gaussian convolution, $P_G(E)$ is the spectrum after Gaussian convolution, and $C(E)$ is the Gaussian convolution kernel at the corresponding energy, determined by $R_E$. $C(E)$ has a complicated dependence on energy over the whole mass attenuation curve, but at the K-edge position ($E_K$) it can be calculated as in Equation (2), where $R_0$ is the energy resolution at the known energy position $E_0$ and $E_K$ is the K-edge energy position of the material. The influence of the spectrum is expressed by the equivalent mass attenuation coefficient $Att_{eq}(B)$ of the energy bin, as in Equation (3) [36,37], where $B$ represents the corresponding energy bin.
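A numerical sketch of this model is given below. Since Equations (2) and (3) are not reproduced above, two choices here are explicit assumptions: the detector resolution is extrapolated from $R_0$ at $E_0$ to $E_K$ assuming $R(E) \propto 1/\sqrt{E}$, and $Att_{eq}$ is taken as the spectrum-weighted average of the broadened attenuation over the bin.

```python
import numpy as np

def gaussian_kernel(energies, center, sigma):
    k = np.exp(-0.5 * ((energies - center) / sigma) ** 2)
    return k / k.sum()

def broaden(att0, energies, sigma):
    """Blur a tabulated attenuation curve (e.g., NIST data) with a Gaussian
    kernel; a single sigma is used for simplicity, whereas the paper's
    kernel C(E) is energy dependent."""
    return np.array([np.sum(att0 * gaussian_kernel(energies, e, sigma))
                     for e in energies])

def att_eq(energies, att, spectrum, lo, hi):
    """Spectrum-weighted equivalent attenuation of the bin [lo, hi) --
    an assumed stand-in for Eq. (3)."""
    m = (energies >= lo) & (energies < hi)
    return np.sum(spectrum[m] * att[m]) / np.sum(spectrum[m])

# Kernel width at the iodine K-edge (about 33.2 keV), extrapolated from
# R0 = 22.1% at E0 = 59.6 keV assuming R(E) ~ 1/sqrt(E) (an assumption):
E_K = 33.2
R_K = 0.221 * np.sqrt(59.6 / E_K)
sigma_K = R_K * E_K / 2.355   # FWHM -> Gaussian sigma
```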
Objective Function DAR
The quality of the decomposition image is determined not only by the contrast but also by the noise level. This research proposes the objective function, the difference of attenuation to relative standard deviation ratio (DAR), to maximize the K-edge signal; its meaning is shown in Equation (4),

$$\mathrm{DAR} = \frac{\Delta Att_{eq}}{N}, \tag{4}$$

where $\Delta Att_{eq}$ represents the difference of the equivalent mass attenuation coefficients between the left and the right energy bins of the K-edge and $N$ represents the total noise level of these two images. They are mutually restrictive: a wide bin can reduce the noise level while weakening $\Delta Att_{eq}$ (shown in Figure 3); conversely, a narrow bin can maintain $\Delta Att_{eq}$ while increasing the noise level of the image. The detailed derivation of DAR is provided in Appendix 1. The final DAR expression is shown in Equation (5),

$$\mathrm{DAR}(B_L, B_R) = \frac{\big\{\big[Att_{eq}(B_R) - Att_{eq}(B_L)\big](\rho d)\big\}^{2}}{\sqrt{\operatorname{Var}(A_L) + \operatorname{Var}(A_R)}}, \tag{5}$$

where $(\rho d)$ is the mass thickness of the sample, $B_L$ and $B_R$ are the left and the right energy bins of the K-edge, $Att_{eq}(B_L)$ and $Att_{eq}(B_R)$ are the equivalent mass attenuation coefficients of $B_L$ and $B_R$, and $A_L$ and $A_R$ denote the log-attenuation images of the two bins (see Appendix 1).
The optimal solution $B_L$ and $B_R$ is expressed in Equation (6). Since the difference of equivalent mass attenuation coefficients should be large enough in the contrast agent while being small enough in the background between the two energy bins, the solution space of the optimal energy bins is limited to a very small area, where $E_K$ is the energy at the K-edge position, $n$ is an empirical constant, and $\bar{B}_L$ and $\bar{B}_R$ represent the mid-values of $B_L$ and $B_R$. In general, the optimal solution of (6) should be determined by analyzing the stationary points and the Hessian matrix of DAR. However, the analytic expressions of the derivatives of the DAR used in this research are difficult to obtain. Since the solution space is limited to a small neighborhood of the K-edge position, all energy bins in the solution space are evaluated to determine the optimal solution, which corresponds to the maximum DAR. Take iodine as an example: the left-bin and right-bin images $S_G^L$ and $S_G^R$ acquired by GSSM can be expressed as in Equation (7), and the left-bin and right-bin images $S_T^L$ and $S_T^R$ acquired by TASM as in Equation (8), where $\omega$ represents the energy bin width. The different energy bins in TASM and GSSM lead to different final decomposition image qualities.
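Because the solution space is confined to a few keV around $E_K$, the optimum can be found by exhaustive search, as sketched below (this reuses `att_eq` and the `numpy` import from the previous sketch). Two assumptions are made here: the DAR normalization follows the reconstruction in Appendix 1, and Poisson counting statistics are assumed for the variance of the log-attenuation images; since DAR is maximized, any monotone variant of the normalization selects the same bins.

```python
def dar_score(att_L, att_R, var_L, var_R, rho_d):
    """DAR as reconstructed in Appendix 1: squared signal difference over
    the combined noise of the two log-attenuation images."""
    return ((att_R - att_L) * rho_d) ** 2 / np.sqrt(var_L + var_R)

def counts_in(energies, spectrum, lo, hi):
    m = (energies >= lo) & (energies < hi)
    return spectrum[m].sum()

def best_bins(energies, att, spectrum, E_K, width, rho_d, n_keV=5.0, step=0.5):
    """Exhaustive search of the bin mid-values within n_keV of the K-edge."""
    best, best_pair = -np.inf, None
    for cL in np.arange(E_K - n_keV, E_K, step):
        for cR in np.arange(E_K, E_K + n_keV, step):
            bin_L = (cL - width / 2, cL + width / 2)
            bin_R = (cR - width / 2, cR + width / 2)
            aL = att_eq(energies, att, spectrum, *bin_L)
            aR = att_eq(energies, att, spectrum, *bin_R)
            # Poisson counting model: Var(ln(S0/S)) ~ 1/S, S = transmitted counts
            vL = 1.0 / (counts_in(energies, spectrum, *bin_L) * np.exp(-aL * rho_d))
            vR = 1.0 / (counts_in(energies, spectrum, *bin_R) * np.exp(-aR * rho_d))
            score = dar_score(aL, aR, vL, vR, rho_d)
            if score > best:
                best, best_pair = score, (bin_L, bin_R)
    return best_pair, best
```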
Dual Energy Decomposition Algorithm
We again take the iodine contrast agent as an example. Considering the superposition of background and iodine, the images for the left and the right energy bins next to the K-edge can be expressed as in Equation (9) [25]:

$$S_L = S_0^L \exp\!\left[-\left(\tfrac{\mu}{\rho}\right)_I^L (\rho d)_I - \left(\tfrac{\mu}{\rho}\right)_{bg}^L (\rho d)_{bg}\right], \qquad S_R = S_0^R \exp\!\left[-\left(\tfrac{\mu}{\rho}\right)_I^R (\rho d)_I - \left(\tfrac{\mu}{\rho}\right)_{bg}^R (\rho d)_{bg}\right], \tag{9}$$

where $(\mu/\rho)_I^L$, $(\mu/\rho)_I^R$ and $(\mu/\rho)_{bg}^L$, $(\mu/\rho)_{bg}^R$ are the mass attenuation coefficients of iodine and background for the left and the right energy bins, $(\rho d)_I$ and $(\rho d)_{bg}$ are the mass thicknesses of iodine and background, and $S_0^L$ and $S_0^R$ are the incident photon numbers, respectively. The attenuation characteristics $A$ of these two energy bins can then be expressed as

$$A_L = \ln\frac{S_0^L}{S_L} = \left(\tfrac{\mu}{\rho}\right)_I^L (\rho d)_I + \left(\tfrac{\mu}{\rho}\right)_{bg}^L (\rho d)_{bg}, \qquad A_R = \ln\frac{S_0^R}{S_R} = \left(\tfrac{\mu}{\rho}\right)_I^R (\rho d)_I + \left(\tfrac{\mu}{\rho}\right)_{bg}^R (\rho d)_{bg}. \tag{10}$$

The iodine information is calculated to get the iodine-equivalent image with

$$(\rho d)_I = \frac{A_L \left(\tfrac{\mu}{\rho}\right)_{bg}^R - A_R \left(\tfrac{\mu}{\rho}\right)_{bg}^L}{\left(\tfrac{\mu}{\rho}\right)_I^L \left(\tfrac{\mu}{\rho}\right)_{bg}^R - \left(\tfrac{\mu}{\rho}\right)_I^R \left(\tfrac{\mu}{\rho}\right)_{bg}^L}. \tag{11}$$
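Equations (10) and (11) amount to solving a 2 × 2 linear system pixel by pixel; a direct implementation of this solve (a sketch following the reconstructed equations above) is:

```python
import numpy as np

def decompose_iodine(A_L, A_R, mu_I, mu_bg):
    """Solve the linear system of Eq. (10) pixel-by-pixel, yielding Eq. (11).

    A_L, A_R -- log-attenuation images ln(S0/S) of the two bins (ndarrays)
    mu_I     -- (left, right) mass attenuation coefficients of iodine
    mu_bg    -- (left, right) mass attenuation coefficients of the background
    Returns the iodine-equivalent and background-equivalent images.
    """
    (mIL, mIR), (mbL, mbR) = mu_I, mu_bg
    det = mIL * mbR - mIR * mbL
    rho_d_I = (A_L * mbR - A_R * mbL) / det
    rho_d_bg = (A_R * mIL - A_L * mIR) / det
    return rho_d_I, rho_d_bg
```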
EXPERIMENTAL MATERIALS AND DESIGNS
Imaging System

The imaging system used in this research was a spectral microcomputed tomography (CT) prototype for small animals, which was independently developed by the Institute of High Energy Physics, Chinese Academy of Sciences. It can perform multienergy digital radiography (DR) and CT scanning using the photon counting detector produced by XCounter. The sensor material of the detector is CdTe with a thickness of 0.75 mm. The effective area is 153.6 × 25.6 mm². The detector has 1536 × 256 pixels with a single-pixel area of 100 × 100 μm². It has two energy thresholds and can detect photons in the energy range from 10 keV to 160 keV. The two energy thresholds determine the lower and upper boundaries of one energy bin per scan; to acquire images with different energy bins, we used the multiple scanning procedure of the CT system. In our previous work, the energy resolution of the detector was studied with several isotopes [38]; the values used in this article are $R_0$ = 22.1% at $E_0$ = 59.6 keV.
Equivalent Attenuation Coefficient Curves
In this research, the iodine mass attenuation coefficient curve was mapped to verify the availability of the model we proposed. Besides, the mass attenuation coefficient curve of gadolinium was mapped to test the accuracy of the model. The solutions with the concentration of 100 mg I/ml and 100 mg Gd/ml were loaded in PE centrifuge tubes with a volume of 1.5 ml. Considering the K-edge positions of iodine and gadolinium, the experimental conditions were set to 80 kVp, 70 μA for iodine and 90 kVp, 80 μA for gadolinium.
Comparison of GSSM and TASM in the Same Energy Bin Width
The phantom used in the experiment was made of polymethyl methacrylate (PMMA) as shown in Figure 4 with 6 holes filled with iodine contrast agents with different concentrations. The inner diameter of each hole is 1 mm. Concentrations of the iodine contrast agents are 0 mg I/ml, 20 mg I/ml, 25 mg I/ml, 50 mg I/ml, 75 mg I/ml, and 100 mg I/ml from right to left, respectively.
Comparison of Different Energy Bin Widths
The B L and B R for different energy bin widths were used for decomposition imaging to verify the optimal width of the energy bin selected by DAR. The phantom used here was the same as that in 3.3. The energy bin widths were set to 2 keV, 5 keV, 8 keV, and 13 keV for decomposition imaging.
Comparative Experiment
To illustrate the effectiveness and the superiority of GSSM compared with TASM, a comparative experiment was designed by adding a complex background to the phantom, which is made of nylon as shown in Figure 5. The nylon strips, with a thickness of 1 mm, are used as a distraction of the iodine contrast agents in the phantom (the concentrations from right to left are 10 mg I/ml, 15 mg I/ml, 25 mg I/ml, 50 mg I/ml, 75 mg I/ml, and 100 mg I/ml, respectively). The parameters of the two experiments are the same except for the energy bins.
Equivalent Attenuation Coefficient Curves
The measured attenuation characteristics curves of iodine and gadolinium were mapped by threshold scanning of the detector and converted to the equivalent attenuation coefficient. The comparisons with the results of GSSM are shown in Figure 6. The energy of the K-edge extreme points of iodine and gadolinium is shown in Table 1.
The results of GSSM are consistent in energy with the measured results; the errors of the extreme points are less than 3%. There is a difference in Att_eq, and this condition is more serious at lower energies. Possible reasons for this are Compton scattering and charge sharing: additional photons at lower energies are recorded, which can result in an overestimation of the photon number at low energy levels.
Comparison of GSSM and TASM in the Same Energy Bin Width
The energy bin width used in this experiment was 10 keV. Corresponding DAR of B L and B R are shown in Figure 7.
The energy bins in TASM and GSSM are listed in Table 2. The decomposition images of these two methods are shown in Figure 8. We compute the contrast-to-noise ratio (CNR) to evaluate the image quality,

$$\mathrm{CNR} = \frac{|m_1 - m_2|}{\sqrt{\sigma_1^2 + \sigma_2^2}},$$

where $m_1$ and $m_2$ are the mean values of a specific contrast agent area and the background area, and $\sigma_1$ and $\sigma_2$ in the denominator are the standard deviations of the contrast agent area and the background, respectively. A larger CNR indicates better image quality. In Figure 8, both decomposition images highlight the iodine areas, but GSSM performs better than TASM at lower iodine concentrations. The CNR values of the decomposition images are shown in Table 3 and Figure 9.
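A direct implementation of this figure of merit, for two ndarray regions of interest:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio between a contrast-agent ROI and the background."""
    return abs(roi.mean() - background.mean()) / np.hypot(roi.std(), background.std())
```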
Comparison of Different Energy Bin Widths
DAR results of B L and B R for different energy bin widths are shown in Figure 10 and the optimal energy bin width is determined as 5 keV. The corresponding experimental results are shown in Figure 11. The experimental results are in good agreement with DAR results. Small energy bin width, such as 2 keV, provides good contrast of the iodine areas but strong noise in the decomposition image ( Figure 11A), while large energy bin width has a reverse effect ( Figure 11D). The result shows that the 5 keV energy bin width provides the best decomposition image quality which indicates the objective function DAR is effective.
Comparative Experiment
The energy bins for TASM and GSSM are listed in Table 4. The images are shown in Figures 12A-C, and the CNR quantitative comparison of the iodine areas in the decomposition images is shown in Figure 12D.
The normal DR image ( Figure 12A) cannot distinguish the iodine areas while the images acquired by TASM and GSSM ( Figures 12B,C) can exclude the interference of the background very well. Figure 12D shows that the decomposition image acquired by GSSM extracts the K-edge signal of the iodine areas better than TASM. Quantitative analysis of CNR for different concentrations of iodine solution is shown in Table 5. The CNR results are improved in GSSM compared with the TASM image; this improvement is more obvious in low concentration level because the K-edge signal drops in this condition, causing a greater deviation of energy bins and quality degradation of decomposition image. The CNR of 25 mg I/ml in GSSM is equivalent to the CNR of 50 mg I/ml in TASM.
DISCUSSION
This research proposes the GSSM method to optimize the extracted K-edge signal for multienergy imaging; it takes the degraded energy resolution of PCDs and the continuous x-ray spectrum into consideration. GSSM has several advantages over the traditional method. Firstly, the K-edge decomposition imaging can highlight the region of the contrast agents to get a better image quality than the ordinary DR. Secondly, GSSM is more effective than TASM in extracting the signal of K-edge, improving the quality of the decomposition images. Furthermore, GSSM can get optimized width and locations of the energy bins from the theoretical attenuation curve without the spectrum mapping process, which is more convenient for practical use. At last, due to the better performance at lower contrast agent concentration, GSSM has the potential to reduce the amount of contrast agents compared to TASM. Thus it can further improve the safety and reduce the side effect of contrast agents.
Because of the energy-threshold limitation of the PCD used in this study, we performed multiple scanning procedures to simulate the multienergy imaging process, which could be done in a single exposure with a multithreshold PCD such as Medipix-3, which provides up to 8 energy thresholds. Besides, since the count rate of the PCD is much lower than that of a conventional detector, the PCD cannot be widely used at present, but it can produce exciting results in some low-count-rate scenarios such as breast imaging and small animal imaging.
It should be noted that, for the material decomposition method, there are some comprehensive models considering various effects of the imaging system [39,40]. However, as to the determination of the energy bins in K-edge decomposition imaging, a Gaussian model can get better results than the current methods; a more accurate result can be expected if the model takes other effects of the imaging system into account. Besides, a difference between the heights of the equivalent attenuation coefficient curves was noticed in the results above, and its possible explanations are factors like Compton scattering and charge sharing. Therefore, specially designed correction methods could be helpful to achieve better performance. Moreover, the energy bin widths on both sides of the K-edge are equal in this preliminary result. Future studies can put attention on imaging with nonequal energy bin widths.
CONCLUSION
The image contrast can be improved by using the K-edge imaging technique. This research proposes a more convenient and efficient energy bins determination method, GSSM, for extracting the K-edge signal in multienergy imaging based on the PCDs. The decomposition image acquired by GSSM has better quality than that acquired by TASM in terms of CNR. Because the improvement is especially considerable under low contrast agent concentration situation, our method has a potential to reduce the requirement of contrast agent in multienergy imaging, which could be valuable in the clinical applications.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
ZZ was responsible for the method and experimental design as well as the writing of the article, JH and XZ assisted in the experiment and data processing, QX and ML provided us with software and algorithm help, and LW, CW, and ZW gave full guidance and help in the design and completion of the research.
FUNDING
APPENDIX 1

The DAR is defined as

$$\mathrm{DAR} = \frac{\Delta Att_{eq}}{N},$$

where $\Delta Att_{eq}$ represents the difference of equivalent attenuation coefficient between the left and the right energy bins beside the K-edge and $N$ represents the total noise level of these two images. The quality of the decomposition image is best when the DAR is maximized.

The attenuation characteristic $A$ of the acquired image can be expressed as

$$A = Att_{eq}(B)\,(\rho d) + N_R,$$

where $N_R$ is the random noise component and $Att_{eq}(B)$ is the equivalent attenuation coefficient. The mean value of $A$ is

$$E(A) = Att_{eq}(B)\,(\rho d),$$

and the relative standard deviation is used as the representation of the image's noise level. For the left bin of the K-edge, $A_L = Att_{eq}(B_L)(\rho d) + N_R^L$ with $E(A_L) = Att_{eq}(B_L)(\rho d)$; for the right bin of the K-edge, $A_R = Att_{eq}(B_R)(\rho d) + N_R^R$ with $E(A_R) = Att_{eq}(B_R)(\rho d)$.

The $\Delta Att_{eq}$ and $N$ in the DAR can then be expressed as

$$\Delta Att_{eq} = E(A_R) - E(A_L) = \big[Att_{eq}(B_R) - Att_{eq}(B_L)\big](\rho d), \qquad N = \frac{\sqrt{\operatorname{Var}(A_L) + \operatorname{Var}(A_R)}}{\big[Att_{eq}(B_R) - Att_{eq}(B_L)\big](\rho d)}.$$

The final DAR is

$$\mathrm{DAR}(B_L, B_R) = \frac{\big\{\big[Att_{eq}(B_R) - Att_{eq}(B_L)\big](\rho d)\big\}^{2}}{\sqrt{\operatorname{Var}(A_L) + \operatorname{Var}(A_R)}},$$

where $(\rho d)$ is the mass thickness of the sample, $B_L$ and $B_R$ are the left and the right energy bins beside the K-edge, and $Att_{eq}(B_L)$ and $Att_{eq}(B_R)$ are the equivalent attenuation coefficients of $B_L$ and $B_R$. | 2021-02-08T14:11:11.258Z | 2021-02-08T00:00:00.000 | {
"year": 2021,
"sha1": "80eaf443c30688ea143f9e1906278028475cf794",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2020.601623/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "80eaf443c30688ea143f9e1906278028475cf794",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
229463951 | pes2o/s2orc | v3-fos-license | Ventricular Repolarization as a Tool for Monitoring Cardiac Electrical Activity
Mailing Address: Carlos Alberto Pastore • Av. Dr. Enéas de Carvalho Aguiar, 44, AB. Postal Code 05403-000, Cerqueira César, São Paulo, SP – Brazil E-mail: ecg_pastore@incor.usp.br
The centennial ECG is still an excellent tool to assess the electrical activity of the heart. The ECG has been renovated over the last decades to keep pace with the evolution of other areas of knowledge, such as genetics, molecular biology and electrophysiology.
Our experience has demonstrated that, among the large diagnostic arsenal available to investigate heart diseases, the electrocardiogram, this simple, practical, remote and quick tool, is capable of accurately monitoring the extent and severity of cardiac involvement in various scenarios.
The QT interval and its variations, many decades after being first reported, still hold relevant parameters to indicate whether a patient is at risk for severe and sometimes fatal cardiological events.
In 1856, the first patient with long QT syndrome was reported by Meissner. Although its genetic origin was established in 1901, it was only in 1991 that Keating first demonstrated the association between patients with long QT syndrome and a mutation on the short arm of chromosome 11. Bazett, in 1920, reported his formula for heart rate correction of the QT interval. 1 The emergence of the COVID-19 pandemic in March 2020 showed a disease initially with respiratory symptoms, but with the possible involvement of several other organs due to its very aggressive inflammatory response.
Taking advantage of their experience with treating the COVID-19 cardiac repercussions, experts analyzed electrocardiographic findings during the period of infection.
In the study by Koc et al. 2 published in this edition of Arquivos Brasileiros de Cardiologia, the authors examined the alterations of the QT, QTc and Tpe (Tpeak-Tend) intervals, and the Tpe/QT and Tpe/QTc ratios, all of which are parameters of ventricular repolarization.
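For concreteness, these parameters reduce to a few arithmetic steps once QT, Tpe and the RR interval are measured. The sketch below uses Bazett's correction (QTc = QT/√RR, with RR in seconds) and illustrative, hypothetical values; it is not the study's measurement pipeline.

```python
import math

def qtc_bazett(qt_ms, rr_s):
    """Bazett's heart-rate correction: QTc = QT / sqrt(RR), with RR in seconds."""
    return qt_ms / math.sqrt(rr_s)

def repolarization_indices(qt_ms, tpe_ms, rr_s):
    qtc = qtc_bazett(qt_ms, rr_s)
    return {"QTc_ms": qtc, "Tpe/QT": tpe_ms / qt_ms, "Tpe/QTc": tpe_ms / qtc}

# Hypothetical example: QT = 400 ms, Tpe = 80 ms, heart rate 75 bpm (RR = 0.8 s)
print(repolarization_indices(400, 80, 60 / 75))
```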
The study group of 120 subjects (90 patients infected with COVID-19 and 30 age- and sex-matched healthy controls) was divided into four groups: group I, healthy controls; and COVID-19 patients without pneumonia (group II), with mild pneumonia (group III), and with severe pneumonia (group IV). Results showed that one out of five patients with COVID-19 had myocardial damage.
The study showed that in cases with severe pneumonia there are clear ventricular repolarization alterations.In spite of practically normal QT values, analysis of the parameters studied demonstrated increased dispersion of transmural repolarization, which is the usual etiology of severe arrhythmias.
The most frequent causes of cardiac mortality in patients with COVID-19 were arrhythmic events. The types of arrhythmia were diverse, with many relevant aspects. The mechanism of the arrhythmias could not be characterized, but the literature reports the presence of arrhythmic phenomena in 27.8%, and of ventricular tachycardia/ventricular fibrillation (VT/VF) in 5.9%, of the 187 patients studied by Guo et al. 3 The most important mechanism of ventricular arrhythmias reported in patients with COVID-19 is similar to that of arrhythmias found in patients with acute myocarditis. The analysis of the repercussions of acute myocarditis in other studies showed increased QT, QTc and Tpe intervals, and increased Tpe/QT and Tpe/QTc ratios.
In the study discussed here, all these measures clearly increased with disease severity, as seen in the COVID-19 patients with severe pneumonia.
Confirming reports of a higher frequency of arrhythmias in patients with increased troponin levels, the increase in high-sensitivity troponin I levels showed a positive and effective relationship with the measures of the QT parameters.
In a recent report, 4 the authors mention a study 5 that categorized the cardiac complications of COVID-19 into five types: (1) cardiac damage (ischemia or myocarditis); (3) new-onset or worsening of preexisting heart failure; (4) thromboembolic disease; and (5) cardiac abnormalities induced by medical treatment. The authors state that "cardiac involvement in COVID-19 patients is reflected in ECG alterations as ST-T alterations, QT prolongation, conduction disorders and ventricular arrhythmias." Thus, "patients with cardiac symptoms and ECG abnormalities must be carefully assessed in order to diagnose COVID-19-related cardiac complications, such as myocarditis, myocardial ischemia or severe arrhythmias." 4 In this pandemic, we must maintain these clinical suspicions even for patients who present with discrete symptoms or signs. There is no doubt that the presence of cardiovascular disease worsens the prognosis of the process. The virus cannot be
considered as the cause of all cardiovascular complications, but it can worsen or reveal precarious underlying conditions.
In the article discussed here, 2 the alterations of repolarization observed, although not specific, call for further investigation to exclude disease-related complications.
The presence of general (16.7%) and malignant (11.5%) arrhythmias also raised greater concern in conditions with a more severe myocardial involvement than with mild involvement.
Comparison of the study by Haseeb et al. 4 and the one in this edition of Arquivos Brasileiros de Cardiologia 2 leads us to conclude that electrical alterations detected in the ECG can be relevant to make a decision about diagnosis and management.
The presence of ischemic alterations, QT prolongation, electrical conduction disorders, and arrhythmias in the ECG can be an important warning sign to guide management in cases of cardiac involvement.
This is an open-access article distributed under the terms of the Creative Commons Attribution License | 2020-12-03T09:07:39.353Z | 2020-11-01T00:00:00.000 | {
"year": 2020,
"sha1": "2b195687e6228cb3d816393a746b1b355258b8ed",
"oa_license": "CCBY",
"oa_url": "https://abccardiol.org/wp-content/uploads/articles_xml/1678-4170-abc-115-05-0914/1678-4170-abc-115-05-0914-en.x44344.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f89f7af35a22044e5411c3137ae128c574372a92",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
268567925 | pes2o/s2orc | v3-fos-license | New Die-Compaction Equations for Powders as a Result of Known Equations Correction: Part 1–Review and Analysis of Various Die-Compaction Equations
The well-known equations for the powder compaction process (PCP) in a rigid die, published from the beginning of the last century until today, are considered in this review. Most of the considered equations are converted into dependences of the densification pressure on the powder's relative density. The equations were analyzed, and their ability to describe PCP was assessed by determining the coefficient of determination when approximating experimental data on the compaction of various powders. It was shown that most of the equations contain two constants, the values of which are determined by fitting the mathematical dependence to the experimental curve. Such equations are able to describe PCP with high accuracy for the compaction of powders up to a relative density of 0.9-0.95. It was also shown that different equations can describe PCP in the density range from the initial density to 0.9 with the same high accuracy, but when the process of compaction is extrapolated to higher values of density, the curves diverge. This indicates the importance of equations that can unambiguously describe PCP to a relative density equal to or close to 1.0. For an adequate description of PCP at relative densities greater than 0.95, equations containing three or four constants have proven useful.
Introduction
The powder compaction process (PCP) in a rigid die is one of the main processes in the fabrication of final products from various powders. Thus, the comprehensive study and description of PCP has attracted the attention of many specialists in the field of powder metallurgy. There are many equations that describe PCP, and various approaches have been used to derive them. Initially, simple mathematical functions were used to describe PCP, e.g., exponential [1-3], power [4-6], and logarithmic [7-11]. Later, the proposed equations took into account various physical phenomena occurring in PCP, such as the contact interaction of particles, local and general deformation, hardening, shear between and within particles, friction, etc. [12-17]. To derive equations with physical constants, some researchers considered a powder body as a combination of a large number of individual particles contacting each other [14,18,19]. An approach based on the evolution of the contact interaction between individual particles under the effect of pressure can be designated as a discrete PCP theory. There is also an approach that considers a powder sample as a quasi-continuous two-phase body that can decrease in volume under the effect of pressure. In this case, the compaction of the quasi-continuous body in a rigid die makes it possible to obtain the corresponding equations for describing PCP [20-25]. This approach can be considered a continuous one.
However, the equations for PCP in a rigid die are not able to describe the entire compaction process with high accuracy, since the process itself is complex and multi-stage. The known equations are able to adequately describe only one or two of the three available stages [18]. In addition, there are fundamental discrepancies between the equations, associated with the description of PCP upon approaching a pore-free state, where some equations give a finite value of the compaction pressure while others give infinite pressure. It is therefore difficult to choose an equation that allows one to determine the compaction pressure accurately enough to achieve a relative density close to 1.0.
Certain difficulties are associated with the determination of the true plastic deformation of particles during PCP in a rigid die because of the particle rearrangement that can take place, which depends on the shape of the particles and on their yield strength. Many equations for PCP do not explicitly reflect the effect of particle rearrangement on the increase in sample density. Some authors [17,26] have attempted to take into account the degree of density change due to particle rearrangement. These authors believe that rearrangement takes place throughout almost the entire compaction process. However, according to other researchers [27], rearrangement ends at the initial stage of the compaction process. This implies the existence of a porosity threshold beyond which no rearrangement occurs. The presence of a threshold or of critical phenomena during PCP in a rigid die can be described on the basis of percolation theory [28-30], but the modification of the powder compaction equations using this theory has not yet led to adequate equations.
Another issue occurs when describing the compaction of "high" samples, i.e., samples with a height-to-diameter ratio exceeding 1.0. It refers to the non-uniform distribution of density along both the height and the diameter of the sample. To solve the problem of determining the density of a sample at its different points, the finite element method known from metal forming is used. This method allows one to determine the density at any point of a sample with known parameters of powder particle strength, using the hardening law and the friction forces between the particles and the die wall [31-33]. However, the task of determining the true strength of the particles and the law of hardening during their deformation is not easy.
To determine the properties of a specific powder, experiments are required on triaxial powder compression and on the determination of the yield surface depending on the sample density. Such experiments are relatively complex and must be carried out in special laboratories [34-36], which makes it difficult to obtain quick results for different powders. In addition, information about the properties of a powder can also be obtained from experiments on the compaction in a rigid die of "low" samples, in which, after compaction, the height becomes half the diameter. In these samples, the effect of friction forces can be neglected. It should be noted that most experiments are performed with low samples, and that compaction equations are tested mainly on such samples. Therefore, before solving the problems of determining the density distribution in high samples (height/diameter ≥ 1) and samples of complex (stepped) shape, it is necessary to have an equation that allows one to describe the compaction process of low samples with the highest accuracy, with the density changing from its initial value to a value close to 1.0.
The adequacy of an equation for PCP in a rigid die is determined by assessing the accuracy of fitting the equation to the experimental data, that is, by finding the coefficient of determination R². However, in many cases the adequacy of the equations is unknown. To confirm this thesis, two review papers on the equations of PCP in a rigid die, published in 2007 and 2017, can be noted. The author of one of them is T. Çomoğlu (2007) [37], who considered the equations proposed by Walker, Bal'shin, Heckel, Kawakita and Lüdde, Cooper and Eaton, Leuenberger, Shapiro, and Sonnergaard. In that review, it was noted that PCP in a rigid die typically consists of three stages and that it is difficult or impossible to describe it by a single equation. The author analyzes known equations with various constant parameters and tries to relate these parameters to a particular stage of compaction and to the mechanical properties of materials. He also contrasts or compares some equations and shows their similarities and differences. An attempt was made to evaluate the equations for their ability to describe the compaction process of plastic and ceramic powders, and the possibility of describing the compaction of powder mixtures consisting of plastic and brittle powders was also considered. However, that review contains no graphical representation of the experimental and theoretical curves of the compaction process, nor any quantitative assessment of the accuracy of the powder compaction process description by the different equations.
In the second review, by Popescu and Vidu (2017) [38], the equations of Shapiro-Kolthoff and Konopicky, Bal'shin, Heckel, Cooper and Eaton, Kawakita and Lüdde, Ge Rong-de, Panelli and Filho, Parilak and Dudrova, Castagnet and Leal Neto, and Gerdemann and Jablonski were considered. That review compares different equations describing the compaction process of powder mixtures consisting of plastic and non-plastic powders, and also establishes, in some cases, the influence of the shape and size of brittle particles on the constants of the equations. However, in that review, as in the previous one, there are no graphical examples (with the exception of one figure) showing the correspondence of the theoretical curves to the experimental data, and there is also no quantitative assessment of the degree of agreement. This makes it impossible to understand how a wider range of density changes affects the accuracy of the PCP description, and also does not allow one to evaluate the compaction process at the stage of powder compression in the region of relative density close to unity.
Therefore, the purpose of this communication is to review the known equations for PCP in a die. Attention will be focused on assessing the accuracy of the PCP description by the various equations, as well as on a graphical representation (visualization) of the correspondence of the theoretical curves to the experimental data. In this case, it seems appropriate to convert the known equations that express the relative or absolute density as a function of the compacting pressure into dependences of pressure on relative density, since such a dependence can have a simpler form and makes it easy to determine the specific compaction energy of the powder w (in J/m³, i.e., in MPa) by integrating the experimental or theoretical dependence p(ρ) of the external pressure (on the punch) on the relative density, from the initial relative density ρ0 of the sample to its current relative density ρ. The assessment of the accuracy of the PCP description by the various equations was performed by computer approximation of the experimental data on the compaction of different powders with these equations. For the approximation, the Russified program "Wolfram Mathematica 10.4" was used.
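Once a p(ρ) curve has been fitted, this energy can be evaluated numerically. A minimal Python sketch, assuming for illustration the simplest reading w = ∫ p(ρ) dρ over [ρ0, ρ] and a hypothetical fitted power-law curve (all numbers are illustrative, not values from this review):

```python
import numpy as np

def p_of_rho(rho):
    # Hypothetical fitted pressure-density curve, p = a * rho**b, in MPa.
    return 1500.0 * rho**6.0

rho0, rho_final = 0.45, 0.90             # initial and final relative densities
rho = np.linspace(rho0, rho_final, 1001)
p = p_of_rho(rho)

# Trapezoidal rule for w = integral of p(rho) d(rho) from rho0 to rho_final.
w = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(rho))
print(f"specific compaction energy w ~ {w:.1f} MPa (= MJ/m^3)")
```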
Accuracy Assessment of PCP Description Using Equations Obtained by Selecting Mathematical Functions
Equations in the form of simple mathematical functions were among the first formulated, and they appeared at the beginning of the 20th century. Most researchers point out that the first powder compaction equation was proposed by E. Walker (1923) [1]. Then, almost the same equation was proposed by M. Balshin (1948) [2] and H. Lipson (1950) [3]. If we take the compaction pressure as a function and the relative density as an argument, then the compaction equation of these authors is an exponential function:

p = a1·Exp(−k2·β), (2)

where p is the compaction pressure, β = 1/ρ is the relative volume of the powder sample, ρ is the relative density, k1 and k2 are constants, and a1 = Exp(k1).
Other researchers, e.g., W. Rutkowski and H. Rutkowska (1949) [4], C. Agte and M. Petrdlik (1951) [5], and G. Meyerson (1962) [6], used a power function to describe PCP in the form of the following relationship between compaction pressure and relative density:

p = a2·ρ^b, (3)

where a2 and b are constant parameters determined by approximating experimental data on powder compaction. The presented mathematical functions, exponential (2) and power (3), are quite simple, but there is no clear understanding of what range of density change can be accurately described by them. It is only clear that the smaller this range, the more accurately it is described by these equations. The accuracy of approximation of real experimental data on various PCPs by these exponential and power dependences is shown by approximating experimental data on the compaction of iron, copper, and nickel powders from publications [39-41], respectively. The experimental data on the compaction of these powders are given in Table 1. This makes it possible to use them to check the approximation carried out here and to approximate them with the equations of other researchers. The results of the approximation of the experimental data by the exponential (2) and power (3) equations are shown in Figure 1, and the values of the constants of these equations and the coefficient of determination R² are presented in Table 2. As seen, the exponential (2) and power (3) dependences describe PCP quite accurately from the initial relative density to a density of 0.95, and the power dependence (3) describes PCP with higher accuracy, as was also noted in other research [6].
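To illustrate how two equations with similar accuracy inside the fitted range can diverge outside it, the following sketch fits the exponential form (2) and the power form (3), as reconstructed above, to the same data with SciPy and compares the extrapolated pressures (the data and fitted constants are hypothetical, not those of Tables 1 and 2):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative compaction data: relative density vs. pressure (MPa).
rho = np.array([0.45, 0.55, 0.65, 0.75, 0.85, 0.92])
p   = np.array([20.0, 60.0, 140.0, 300.0, 620.0, 1100.0])

def eq2(rho, a1, k2):
    # Equation (2): p = a1 * exp(-k2 * beta), with beta = 1/rho.
    return a1 * np.exp(-k2 / rho)

def eq3(rho, a2, b):
    # Equation (3): p = a2 * rho**b.
    return a2 * rho**b

(a1, k2), _ = curve_fit(eq2, rho, p, p0=(5e4, 3.5))
(a2, b), _  = curve_fit(eq3, rho, p, p0=(2e3, 5.5))

# Similar quality inside the fitted range, diverging predictions outside it.
for r in (0.90, 0.95, 0.99):
    print(f"rho = {r}: eq2 -> {eq2(r, a1, k2):.0f} MPa, eq3 -> {eq3(r, a2, b):.0f} MPa")
```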
In the above equations, PCP is considered to be a monotonous process, without any indication that, in reality, it proceeds in several stages with their own mechanisms of powder densification. Most experts agree on the existence of three stages of PCP, although according to some authors there are four stages [42,43]. The three stages are characterized as follows. The first stage is the rearrangement of particles, when elastic and slight plastic deformation can occur. The second stage is compaction due to the local plastic deformation of the particles at their contacts. The third stage is interpreted ambiguously: it is either the localization of plastic deformation near the pores or the general plastic deformation of the particles. These stages are clearly manifested when a graphical dependence of the experimental relative density on the densification pressure is constructed in logarithmic coordinates [44]. Moreover, each stage can be described, as the authors of [44] suggested, by a power equation with its own constants:

ρ = ρ*·(p/p*)^m, (4)

where m is a constant different for each stage, and ρ* and p* are the minimum relative density and pressure, respectively, for a particular compaction stage. We note that the PCP staging was also established when considering one of the most common equations for PCP in a rigid die. This equation was proposed by different authors, e.g., L.F. Athy (1930) [7], I. Shapiro and I. M. Kolthoff (1947) [8], K. Konopicky (1948) [9], T. N. Znatokova and V. I. Likhtman (1954) [10], and R.W.
Heckel (1961) [11], and in the original record it has the form:

ln(1/(1 − ρ)) = k·p + a, (5)

where k and a are constants. This equation is often referred to as Konopicky's equation or as Heckel's equation. If the experimental data on the compaction of powders are presented in the coordinates "ln(1/(1 − ρ))" and "p", then a broken line is obtained, indicating different stages of compaction with their own values of k and a. Equation (5) is often met in the literature [45,46], and after transformation, where the pressure is a function and the relative density is an argument, it takes the following form:

p = (1/k)·[ln(1/(1 − ρ)) − a], (6)

It is important to note that Equation (6) allows one to describe only the intermediate, or second, stage of powder compaction. Therefore, in most cases, when the experimental data on powder compaction are limited to low density (ρ ≤ 0.9), Equation (6) describes the compaction process without its initial stage. It should also be noted that Equation (6) can be written otherwise if the value of the constant a is determined by substituting pressure equal to zero (p = 0) into this equation. In this case, Equation (6) will look as:

p = (1/k)·ln[(1 − ρ′0)/(1 − ρ)], (7)

Here, the parameter ρ′0 means a conditional initial density that is greater than the actual initial density of the powder ρ0. This indicates that Equations (6) and (7) cannot take into account the initial process of powder compaction.
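Since Equation (5) is linear in the coordinates ln(1/(1 − ρ)) versus p, its constants k and a can be extracted with an ordinary linear fit, and the conditional initial density of Equation (7) then follows from the intercept; a brief Python sketch with illustrative data:

```python
import numpy as np

# Illustrative compaction data.
p   = np.array([100.0, 200.0, 400.0, 600.0, 800.0])
rho = np.array([0.62, 0.71, 0.82, 0.88, 0.92])

y = np.log(1.0 / (1.0 - rho))        # Heckel ordinate of Equation (5)
k, a = np.polyfit(p, y, 1)           # slope k, intercept a

# Conditional initial density of Equation (7), from a = ln(1/(1 - rho'0)).
rho0_cond = 1.0 - np.exp(-a)
print(f"k = {k:.4e} 1/MPa, a = {a:.3f}, conditional rho'0 = {rho0_cond:.3f}")
```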
Furthermore, Equation (5) has been modernized several times. In one case, a modernization was proposed by M. Kuntz and H. Leuenberger [30]. They transformed the Heckel equation, i.e., Equation (6), to the form:

p = (1/C)·[ρc − ρ − ln((1 − ρ)/(1 − ρc))], (8)

where C is a constant and ρc is the conditional initial density of the powder, i.e., the second constant.
The use of Equation (8) for the approximation of the experimental compaction process of various powders showed that it is more adequate than Equation (6). In particular, the experimental data on the compaction of iron, copper, and nickel powders presented in Table 1 were used for approximation by Equations (6) and (8). The results of the approximation are shown in Figure 2, and the values of the equations' constants and the coefficient of determination R² are presented in Table 3. In addition, Table 3 shows the value of the conditional initial density ρ′0, which is used in Equation (7) and is also determined by fitting the curve to the experimental data.
Figure 2 and Table 3 show that the new Equation (8) describes PCP with higher accuracy than the original Heckel equation (6). In this case, the value of the conditional initial density ρc in Equation (8), obtained as a result of the approximation, turns out to be lower than the actual initial density ρ0, which makes it possible to accurately describe the initial stage of the compaction process.
Table 3. Constants and coefficient of determination R² obtained via approximation of experimental data on PCP for iron, copper, and nickel using Equations (6) and (8).
A higher accuracy of approximation of the experimental data on PCP, compared to the equation of K. Konopicky or R. W. Heckel, can be reached by another modernization of the equation, which leads to the so-called double logarithmic form of the equation for powder compaction in a rigid die. Three different teams of authors proposed such an equation independently. First, the work of Ge Rong-de (1991) [47] should be noted, in which he proposed a new differential equation (9) for PCP in a rigid die, containing constants K, n, and m. At the same time, Ge Rong-de claimed that at n = 0, the integration of Equation (9) leads to an equation that describes PCP at low and high pressures with high accuracy. This equation in the original record is as follows:

log[ln((1 − ρ0)/(1 − ρ))] = A + B·log p, (10)

where A and B are constants.
In another case, the same equation was proposed by other authors, A. B. Yu and Z. H. Gu (1993) [48], who used their own differential equation (11) with constants k, a, and b. Upon integrating Equation (11), these authors obtained an equation for PCP in the form:

ln[ln((1 − ρ0)/(1 − ρ))] = ln K + n·ln p, (12)

where K = k/(1 + b) and n = 1 + b are new constants. In the third case, L. Parilak and E. Dudrova (1994) [49] proposed an equation that in its original form was written as

θ = θ0·Exp(−K·p^n), (13)

where θ and θ0 are the current and initial porosity, respectively, of the powder in the die, p is the compaction pressure, and K and n are constants. Equation (13), after taking a double logarithm and replacing porosity by relative density, takes the form of Equation (12), into which Equation (10) can also be easily converted. Consequently, the analysis of PCP by at least three groups of authors led to the derivation of the same equation, which contains the powder initial density ρ0, i.e., the powder density after filling the die, and two constants that can be determined by fitting the curve of Equation (12) to the experimental data on PCP. If in Equation (12) we interchange the function and the argument, i.e., we take the pressure as a function and the relative density as an argument, then we obtain a simpler equation:

p = B·[ln((1 − ρ0)/(1 − ρ))]^m, (14)

where m = 1/n and B = (1/K)^m are new constants.
It is of interest to check the accuracy of the PCP description for different powders by Equation (12), or rather by Equation (14). Prior to this, we need to recall one more equation which also contains the powder initial density and two constants, and which was proposed much earlier than Equation (12). We are talking about the equation of K. Kawakita and K.H. Lüdde (1970) [50]. That equation is rather often referred to in the literature, and in its original form it is:

C = (V0 − V)/V0 = a·b·P/(1 + b·P), (15)

where C is the degree of volume reduction, V0 is the initial volume of the powder, V is the volume of the powder under pressure P, and a and b are constants characterizing the powder.
If we transform it in such a way that the compacting pressure p is a function and the relative density ρ is an argument, then it takes the form:

p = b′·(ρ − ρ0)/[a·ρ − (ρ − ρ0)], (16)

where ρ0 is the initial relative density of the powder filled into the die, and a and b′ = 1/b are constants.
To check the accuracy of approximation of the experimental data on PCP by the equation of K. Kawakita and K.H. Lüdde (16) and the equation of Ge Rong-de, A. B. Yu and Z. H. Gu, and L. Parilak and E. Dudrova (14), we used the experimental data on the compaction of powders of iron, copper, and nickel presented in Table 1. The results of the approximation are presented in the form of graphs in Figure 3, and the values of the constants of Equations (16) and (14) and the coefficient of determination R² are shown in Table 4.
Table 4. Constants and coefficient of determination R² obtained via approximation of experimental data on the compaction of powders of iron, copper, and nickel using Equations (14) and (16).
Assessment of the accuracy of the approximation of the experimental data on the compaction of iron, copper, and nickel powders by Equations (14) and (16) shows that these equations describe PCP very accurately and practically in the same way, since the coefficients of determination differ only slightly. In this case, the coefficient of determination by Equation (14) for copper and nickel is slightly higher than that by Equation (16), and for iron it is slightly lower (Table 4). In addition, these equations take into account the initial density of the powder and accurately describe the initial stage of powder compaction. But the fundamental difference between them consists in the fact that at a relative density ρ = 1, the compaction pressure by Equation (16) has a finite value, while by Equation (14) it is equal to infinity. It is still difficult to say which equation is more adequate, but the higher accuracy of Equation (14) in two cases out of three may indicate that the equation in which the pressure tends to infinity as the relative density increases up to 1.0 is the more valid one.
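The different limiting behavior of the two forms is easy to verify numerically; a small sketch evaluating Equations (16) and (14) near full density, with hypothetical constants chosen so that a > 1 − ρ0 in Equation (16):

```python
import numpy as np

rho0 = 0.45
a, b_prime = 0.58, 40.0   # hypothetical Kawakita constants of Equation (16)
B, m = 900.0, 2.0         # hypothetical constants of Equation (14)

def p_eq16(rho):
    # p = b' * (rho - rho0) / (a*rho - (rho - rho0)): finite at rho = 1.
    return b_prime * (rho - rho0) / (a * rho - (rho - rho0))

def p_eq14(rho):
    # p = B * [ln((1 - rho0)/(1 - rho))]**m: diverges as rho -> 1.
    return B * np.log((1.0 - rho0) / (1.0 - rho)) ** m

for rho in (0.90, 0.99, 0.999):
    print(f"rho = {rho}: eq16 -> {p_eq16(rho):.0f} MPa, eq14 -> {p_eq14(rho):.0f} MPa")
```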
The above comparison of the two equations shows how important it is to accurately describe the entire PCP, including the process of compaction to a relative density close to 1.0. Along with this, it should be noted that there are very few experimental data in the literature on the compaction of metal powders to a final density exceeding 0.95. Such experimental data, in particular on the compaction of various iron powders, are available in the book by R. Kieffer and W. Hotop (1948) [51]. They also showed that it is difficult to achieve a relative density ρ = 1.0, even at a pressure of 3000 MPa. In this regard, of great interest are the equations which are able to describe PCP with high accuracy when the pressure tends to infinity at a relative density ρ < 1.0. In practice, this occurs when metal powders with low plasticity or very hard powders, such as ceramics, are compacted. To describe such a case, the equation of J. Secondi (2002) [52], which contains a relative-density parameter for infinite pressure, is used. This parameter is denoted as ρ∞, and it means that infinite pressure can occur at a relative density ρ < 1.0. In the original record, Secondi's equation [52] had the form:

ln[(ρ∞ − ρ0)/(ρ∞ − ρ)] = K·p^n, (17)

where ρ∞ is the relative density at which the compaction pressure tends to infinity, ρ0 is the initial relative density of the powder, p is the compaction pressure, and K and n are constant parameters which control the hardening and plasticity of the powder material. Taking the compacting pressure p as a function and the relative density ρ as an argument, Secondi's Equation (17) is converted to the form:

p = K′·{ln[(ρ∞ − ρ0)/(ρ∞ − ρ)]}^m, (18)

where K′ = 1/K^m and m = 1/n are new constants. One should pay attention to the fact that Equation (17) turns into Equation (14) if the limiting relative density ρ∞ is set equal to 1.0. Therefore, Secondi's equation can describe a wider class of powder materials, including both hard or brittle and plastic powders. As an example of the approximation by this equation of experimental data on the compaction of hard-to-deform plastic and almost non-deformable brittle powders, we used the experimental data on the compaction of titanium carbide powder [53] and three iron powders [51] that were compacted to an extremely high density. The values of relative density at different compaction pressures for titanium carbide and the three iron powders are given in Table 5.
The results of the approximation of these powders by Secondi's Equation (18) are shown in Figure 4, and the values of the constant parameters and the coefficient of determination are listed in Table 6. In addition, Figure 4 and Table 6 also show, for comparison, the results of the approximation of these powders by Equation (14). As seen, Equation (18) describes PCP for brittle and ductile powders with higher accuracy than Equation (14) does.
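A sketch of how the three-constant form (18) could be fitted with ρ∞ treated as a free parameter, bounded above the largest experimental density (the data are hypothetical, for a hard powder whose density saturates below 1.0):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical compaction data for a hard powder.
p   = np.array([100.0, 300.0, 600.0, 1000.0, 2000.0, 3000.0])
rho = np.array([0.55, 0.66, 0.74, 0.80, 0.87, 0.90])
rho0 = 0.45

def secondi(rho, K_prime, m, rho_inf):
    # Equation (18): p = K' * {ln[(rho_inf - rho0)/(rho_inf - rho)]}**m.
    return K_prime * np.log((rho_inf - rho0) / (rho_inf - rho)) ** m

popt, _ = curve_fit(secondi, rho, p, p0=(800.0, 1.5, 0.95),
                    bounds=([0.0, 0.1, 0.905], [1e6, 5.0, 1.0]))
K_prime, m, rho_inf = popt
print(f"K' = {K_prime:.0f}, m = {m:.2f}, fitted limiting density rho_inf = {rho_inf:.3f}")
```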
Table 5. Experimental data on the compaction of titanium carbide powders [53] and some iron powders [51]. * The iron powders correspond to the following methods of preparation: FeKH3 - vibration grinding, FeKH6 - carbonyl, and FeKH9 - steel.
One of the factors that causes the increased accuracy of Secondi's Equation (18) when describing the compaction process to an extremely high density is probably the presence of three constants. Another important advantage of Secondi's equation is the fact that it provides for a real powder compaction process, in which the compaction pressure tends to infinity at a relative density significantly below 1.0. The equation of the authors Ge Rong-de, A. B. Yu and Z. H. Gu, and L. Parilak and E. Dudrova (14) does not provide for such a possibility. It follows that the equations for PCP with two empirical constants cannot describe this process up to the relative density level of 0.96-0.99 with high accuracy.
Table 6. Constants and coefficient of determination R² obtained via approximation of experimental data on the compaction of titanium carbide [53] and three iron powders from [51] using Equations (18) and (14).
It was important to analyze another modified Konopicky-Heckel equation describing PCP to a high density. We mean the equation of R. Panelli and F. Ambrosio Filho (1998) [54], which in the original record had the form:

ln(1/(1 − ρ)) = A·√p + B, (19)

where A and B are constants, and which can also be written in a shorter form:

ln[(1 − ρ′0)/(1 − ρ)] = A·√p, (20)

where ρ′0 = 1 − Exp(−B) is also a constant. If pressure is taken as a function, then Equation (19) takes the form:

p = A′·{ln[(1 − ρ′0)/(1 − ρ)]}², (21)

where A′ = 1/A². To evaluate the accuracy of the PCP description by this equation, we approximated the experimental data on the compaction of two groups of powders. In one group, the experimental data show a final relative density of less than 0.95, and in the other group, of more than 0.95. That is, for the first group we use the data on the iron, copper, and nickel powders from Table 1, and for the latter, the iron powders from Table 5. The results of the approximation of PCP for these powders by Equation (21) are shown in Figure 5, and the values of the constant parameters and the coefficient of determination R² in Table 7. As seen from the results of this approximation, Equation (21) describes PCP quite accurately when the powders are compacted to a density of less than 0.95. If the powders are compacted to a density close to 1.0, then Equation (21) describes PCP with low accuracy, distorting the beginning and middle of the process. In this case, it is impossible to correctly assess the features of the powder and its mechanical and deformation properties. It follows that in order to determine the real properties of a powder during its compaction in a die, it is necessary to have, firstly, experimental data on the compaction of the particular powder to a density close to 1.0 and, secondly, an equation that allows one to describe such a process with high accuracy. In this regard, the equations for PCP that contain some physical characteristics of the materials used for powder production are of interest.
Equations in Which the Physical Characteristics of Compact Materials Are Used
The above equations for PCP description have basically two constants.In some cases, attempts were made to establish the physical meaning of these constants.e.g., two constants, K and n, in Equations ( 13) and ( 17) characterize, according to the authors, the powder material plasticity and the hardening work during powder deformation.These constants are similar to the coefficients characterizing the plasticity and strain hardening of compact materials.In this regard, some researchers have tried to obtain an equation for As seen from the results of this approximation, Equation ( 21) describes PCP quite accurately when the powders are compacted to a density of less than 0.95.If the powders are compacted to a density close to 1.0, then the Equation ( 21) describes the PCP with low accuracy, at which the beginning and middle of PCP are distorted.In this case, it is impossible to correctly assess the features of the powder and its mechanical and deformation properties.It follows that in order to determine the real properties of a powder during its compaction in a die, it is necessary to have, firstly, experimental data on the compaction of a particular powder to a density close to 1.0 and, secondly, an equation that allows to describe such a process with high accuracy.In this regard, the equations for PCP that contain some physical characteristics of the materials used for powder production are of interest.
Equations in Which the Physical Characteristics of Compact Materials Are Used
The above equations for the PCP description have basically two constants. In some cases, attempts were made to establish the physical meaning of these constants; e.g., the two constants K and n in Equations (13) and (17) characterize, according to the authors, the plasticity of the powder material and the hardening work during powder deformation. These constants are similar to the coefficients characterizing the plasticity and strain hardening of compact materials. In this regard, some researchers have tried to obtain an equation for PCP taking the strength and plasticity characteristics of the material used for powder production from reference books.
An attempt to relate one of the equation constants to the yield strength of the powder material was made by S. Torre (1948) [12] and A. N. Nikolaev (1962) [13]. They proposed the following equations:

p = σS·ln(1/(1 − ρ)), (22)

p = C·σS·ln(ρ/(1 − ρ)), (23)

where σS is the yield strength of the powder material and C = 2.5 ÷ 3 is a constant.
Equation (23) was modified by G. M. Zhdanovich (1999) [14]. The resulting Equation (24) does not have the disadvantage inherent in Equation (23), where the pressure becomes negative when the relative density is less than 0.5, but the parameter C is not known in advance and must be determined, as G. M. Zhdanovich points out, from experiment. The above equations are difficult to use for the description of PCP, since the yield strength of the particles can differ significantly from the yield strength of an absolutely dense material.
Of interest is an equation that takes into account the features of both ductile and brittle powders, as well as the effect of powder friction against the die wall during PCP. Such an equation was proposed in the work of Li S., Khosrovabadi P.B., and Kolster B.H. (1994) [16]. Moreover, these authors presented experimental data on the compaction not only of plastic Ni powder but also of a mixture of plastic (Ni) and ceramic (Al2O3) powders, as well as of nickel-coated ceramic (Al2O3, SiC) powders. They took into account the forces of powder friction against the walls of the die and, as a result, proposed Equation (25) (in its original form), where P is the external pressure, D is the absolute density of the powder blank at pressure P, D0 is the initial density of the powder, Dm is the theoretical density of the powder material, M0 is the compaction modulus for a dense sample, K is a dimensionless parameter associated with the friction coefficient of the powder (against the die wall) and the workpiece geometry, and m and n are empirical constants. In this case, the parameters M0 and K are also constant.
If the above equation is potentiated and the absolute density is converted into relative density, then Equation (26) is obtained, which relates the pressure to the relative density through three constants, A = M/K, m, and n, that can be determined by fitting the equation to the experimental curve. The authors of this equation showed that the compaction of nickel powder, of an equivolume mixture of nickel and aluminum oxide powders, and of nickel-coated aluminum oxide and silicon carbide powders can be described by Equation (25) with high accuracy [16]. However, it should be kept in mind that the experimental data obtained by the authors apply to materials with a final relative density of less than 0.8. For approximation by Equation (26), the experimental data for higher final density, e.g., the data on the compaction of iron and copper powders from Table 1, as well as unusual data on the compaction of coarse and fine iron and copper powders from the book by F. V. Lenel [55] (p. 96, Figure 3-25), were of interest. The experimental data on the compaction of the coarse and fine Fe and Cu powders presented in [55] were obtained by digitizing the experimental points on the corresponding curves, and these data are given in Table 8. Note that in [55] only the bulk or apparent density of the coarse and fine iron and copper powders is indicated, without information on the sizes of the powders.
The results of the approximation by Equation (26) of the experimental data on the compaction of the various iron and copper powders are shown in Figure 6, and the values of the constant parameters of Equation (26) and the coefficient of determination R² are listed in Table 9.
Table 9. Constant parameters of Equation (26) obtained via approximation of experimental data on the compaction of iron and copper powders from Table 1 and iron and copper powders from Table 8.
The presented results of the approximation of the experimental data on the compaction of various iron and copper powders by Equation (26) show that the equation proposed by the previously mentioned authors [16] allows one to describe PCP quite accurately in cases where the powder is compressed to a low density. And this happens despite the fact that the proposed equation has three constants that implicitly take into account the resistance of the powder material to deformation, the powder friction against the die walls, and the geometry of the compacted powders. In addition, a characteristic feature of Equation (26) is the fact that it gives a finite pressure when the relative density of the powder reaches 1.0. Moreover, the value of this pressure (see pmax in Table 9) is unexpectedly low, which confirms the inability of this equation to accurately describe PCP at the final stage.
Noteworthy is another equation for PCP presented in one of the works by G. Aryanpour and M. Farzaneh (2015) [17]. When deriving the equation, the authors analyzed various mechanisms of PCP, in particular particle rearrangement and plastic deformation. With that, they made the assumption that these two mechanisms work together up to a sufficiently high relative density of 0.95, at which, as they believe, there are no more open pores and the particle rearrangement mechanism no longer operates. To describe compaction due to the particle rearrangement mechanism, the authors suggested Equation (27), with constants b and a determined from experiment and ρ0 being the initial density of the powder in the die. After a simple transformation and taking the logarithm, Equation (27) takes the form of Equation (28), where c = 1/a is a constant.
To describe the powder compaction due to plastic deformation, the researchers [17] used Heckel's equation [11] written in the form:

ln[(1 − ρ′0)/(1 − ρ)] = k·p, (29)

where k is a constant, but ρ′0 is the conditional initial density of the powder. According to G. Aryanpour and M. Farzaneh, these two mechanisms are summed during the compaction of the powder, and as a result, Equation (30) was proposed (in this case, the conditional initial density ρ′0 in Equation (29) is replaced by the actual initial density ρ0). Equation (30), as noted by its authors, can be used to describe PCP in the density range from ρ0 to ρ = 0.95. It takes into account the action of two mechanisms, particle rearrangement and particle deformation. In this case, at the beginning of the compaction process the rearrangement mechanism prevails, and at the end of the compaction the deformation mechanism does. Unfortunately, this equation cannot be transformed in such a way that it is solved with regard to the pressure p, so that the corresponding approximation could be made. However, to estimate the accuracy of approximation of various experimental data by Equation (30), it is necessary to transform it so that the function is the relative density ρ; in this case it acquires the form of Equation (31). But Equation (31) hides the compaction mechanisms, making it less useful. At the same time, this equation makes it possible to approximate experimental data on powder compaction and determine the values of the three constants b, c, and k. Taking into consideration that this equation has three constants, its adequacy in describing PCP not only up to a relative density of 0.95 but also up to a higher density is of interest. Therefore, to approximate the experimental data on PCP by this equation, we chose the experimental results previously used by us on the compaction of iron, copper, and nickel powders, presented in Table 1, as well as on the iron powders compacted to high density presented in Table 5. The results of the approximation of the experimental data on the compaction of these powders are shown in Figure 7. The values of the constants of Equation (31) and the coefficient of determination R² are given in Table 10.
Figure 7. Approximation by Equation (31) of the experimental data on the compaction of (a) iron, (b) copper, and (c) nickel powders from Table 1, as well as iron powders (d) FeKH3, (e) FeKH6, and (f) FeKH9 from Table 5.
Table 10. Constant parameters in Equation (31) resulting from the approximation of experimental data on the compaction of iron, copper, and nickel powders from Table 1 and iron powders FeKH3, FeKH6, and FeKH9 from Table 5.
As seen from the presented results, the accuracy of the approximation of the experimental data on the compaction of various powders by Equation (31) is high for cases of powder compaction up to a density of 0.95. This may indicate the adequacy of the hypothesis that allows the simultaneous existence of the mechanisms of rearrangement and deformation of particles during the compaction process up to a relative density of 0.95. Along with this, Equation (31) can, in some cases, describe with high accuracy the PCP up to a relative density exceeding 0.95 (Table 10, FeKH9 powder). It is also important that this equation allows one to estimate the degree of density change during powder compaction due to both particle rearrangement (∆ρr) and plastic deformation (∆ρd).
In order to evaluate the change in density by the two mechanisms, it was suggested to determine the derivatives of the relative density with respect to pressure separately for the rearrangement and plastic deformation mechanisms [17]. If we fully expand these derivatives, we obtain expression (32) for particle rearrangement and expression (33) for plastic deformation. The areas under the curves (32) and (33), within the pressure range from zero to a value corresponding to a given density, show the degree of change in density for the corresponding compaction mechanism. Numerical integration of dependences (32) and (33) for the examples shown in Figure 7, within the pressure range from zero to the pressure corresponding to a density of 0.95, led to the results presented in Table 11. In addition, Table 11 presents the true value of the plastic deformation of the particles, which, for the case of reaching the relative density of 0.95, was calculated by formula (34).
Table 11. Degree of change in density due to rearrangement and plastic deformation of particles during compaction of the powder from the initial density to a density of 0.95, as well as the true degree of plastic deformation of the particles and the degree of change in the relative density from initial to 0.95.
By the data in Table 11, the degree of change in powder density due to rearrangement during compaction in a rigid die can exceed the degree of change in density through plastic deformation of the particles. In relatively hard powders, such as FeKH3, FeKH6, and FeKH9, densification occurs practically entirely due to particle rearrangement. It cannot be excluded that the description of the rearrangement of hard powder particles by Equation (27) is very approximate. In this regard, I would also like to draw attention to the fact that Equation (27) exactly corresponds to the equation of M. Yu. Bal'shin (1972) [18] (p. 163, Eq. V.47), which he proposed to describe the third stage of the densification process, i.e., the stage where there is practically no rearrangement of the particles.
It should also be noted that the process of particle rearrangement during the compaction of powders in a rigid die is explicitly reflected in the equation proposed as far back as the middle of the last century (1962) by A. R. Cooper and L. E. Eaton [26]. In the original record, this equation is as follows:

V* = (V0 − V)/(V0 − V∞) = a1·Exp(−k1/P) + a2·Exp(−k2/P), (35)

where V* is the degree of powder sample volume change under pressure; V0, V, and V∞ are the initial volume at zero pressure, the current volume at pressure P, and the volume at infinite pressure, respectively; P is the compaction pressure; a1 and k1 are constants characterizing the rearrangement process; and a2 and k2 are constants characterizing the process of plastic deformation.
When compacting the powder, it is advisable to operate with the relative density ρ = Vcom/V (Vcom is the volume of a pore-free sample); then Equation (35) can be converted into the dependence between relative density (ρ) and pressure (p):

(1/ρ0 − 1/ρ)/(1/ρ0 − 1/ρ∞) = a1·Exp(−k1/p) + a2·Exp(−k2/p), (36)

where ρ0 is the initial density of the powder and ρ∞ ≤ 1 is the density at infinite pressure. This equation was proposed in order to describe the compaction process of ceramic powders. Therefore, it can be used to approximate the compaction process of any hard powders. In this regard, the applicability of Equation (36) was tested for describing the compaction of the titanium carbide powders and iron powders presented in Table 5. The results are shown in Figure 8, and the values of the constant parameters of Equation (36) and the coefficient of determination R² are listed in Table 12. The approximation was performed at a given value of the parameter ρ∞ for each powder.
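A sketch of fitting the four-constant form (36) with ρ∞ fixed in advance, as was done here for each powder (the data and starting values are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical compaction data for a hard powder.
p   = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])
rho = np.array([0.52, 0.58, 0.64, 0.70, 0.76, 0.81])
rho0, rho_inf = 0.45, 0.90                  # rho_inf predetermined

# Left-hand side of Equation (36), expressed through relative densities.
v_star = (1.0/rho0 - 1.0/rho) / (1.0/rho0 - 1.0/rho_inf)

def cooper_eaton(p, a1, k1, a2, k2):
    # a1, k1 describe rearrangement; a2, k2 describe plastic deformation.
    return a1 * np.exp(-k1 / p) + a2 * np.exp(-k2 / p)

popt, _ = curve_fit(cooper_eaton, p, v_star, p0=(0.7, 50.0, 0.3, 1000.0))
print(dict(zip(("a1", "k1", "a2", "k2"), np.round(popt, 3))))
```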
Table 12. Constant parameters in Equation (36) resulting from the approximation of experimental data on the compaction of titanium carbide and iron powders FeKH3, FeKH6, and FeKH9 from Table 5.
It follows from the presented approximation results that Equation (36), which contains four constant parameters (the fifth parameter ρ∞ is predetermined), is capable of describing the compaction process of hard metal and ceramic powders with high accuracy. The most important result of approximating the compaction of hard powders by this equation is the fact that the process of particle rearrangement affects the change in porosity to a much greater extent than the process of powder deformation. This is evidenced by the value of the constant a1 in Table 12, which reaches 80% of the entire compaction process. A similar result follows when describing the compaction of hard iron powders by Equation (31), Table 11.

Modern Works

A recent article (2017) [56] by the authors of Equation (13) is worthy of attention. As said above, these authors previously proposed an equation with two constants, which in many cases allows one to describe PCP quite accurately. In the new article, they proposed a novel equation based on their former one (13), which was transformed into an equation with one constant due to a relation established between the constants K and n in the form ln(K) = 1.2952 − 7.3349n. With this, Equation (13), after taking a double logarithm, acquires the form:

ln[ln(P0/P)] = 1.2952 + n·(ln p − 7.3349), (37)

where P0 and P are the initial and current porosities of the powder sample.
After potentiation, Equation (37) is transformed into the following expression:

ln(P0/P) = Exp(1.2952 − 7.3349·n)·p^n, (38)

from which one can get an equation where the compaction pressure is a function and porosity, or relative density (ρ = 1 − P), is an argument:

p = Exp(7.3349)·[Exp(−1.2952)·ln((1 − ρ0)/(1 − ρ))]^m, (39)

where m = 1/n is a new constant.
There is an ambiguous attitude toward this new equation for PCP. On the one hand, with an equation containing one constant parameter it is easier to describe the compaction of different powders; on the other hand, such an equation is not capable of taking into account the great diversity in the morphology and properties of powders. All differences in the behavior of powders during compaction are averaged into one parameter. This may reduce the accuracy of the description of the densification process of certain powders. Indeed, the approximation by Equation (39) of the experimental data on the compaction of iron, copper, and nickel powders (Table 1) showed a lower accuracy in two cases out of three (Table 13) as compared with the approximation by the two-parameter Equation (14) (Table 4). With a reduced accuracy in describing PCP, we will know with reduced accuracy the degree of plastic deformation of the particles, the work of plastic deformation, the magnitude of the resistance of the particles to deformation, as well as other characteristics of the powder. Therefore, a one-parameter equation can be used for an approximate assessment of the compaction process and powder properties.
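With the correlation ln(K) = 1.2952 − 7.3349n, the pressure-density curve follows from a single fitted constant; a small sketch evaluating the one-parameter form through the original Equation (13) (the value of n is hypothetical):

```python
import numpy as np

rho0 = 0.45
n = 0.5                           # the single fitted constant (hypothetical)
m = 1.0 / n
K = np.exp(1.2952 - 7.3349 * n)   # the correlated second constant

def p_one_param(rho):
    # From Equation (13) with correlated K: p = [ln((1-rho0)/(1-rho)) / K]**m.
    return (np.log((1.0 - rho0) / (1.0 - rho)) / K) ** m

for rho in (0.6, 0.7, 0.8, 0.9):
    print(f"rho = {rho}: p ~ {p_one_param(rho):.0f} MPa")
```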
Another recent work on the compressibility of powders during compaction in a die deserves attention, namely the article of J.M. Montes, F.G. Cuevas, J. Cintas et al. (2018) [57]. When developing their equation, the authors aimed at minimizing the number of experimental constants in the equation and at using physical parameters which are characteristic of the powder material and known in advance from reference books. Unfortunately, in many cases such a desire cannot lead to an adequate compaction equation, because of the significant differences between the strength and plastic properties of powders and those of the bulk material they are made of. The researchers of [57] proposed Equation (40) to describe PCP, where PN is the external pressure during compaction of the powder, Θ and ΘM are the current and initial porosity of the powder sample, ξ is a coefficient that takes into account the friction of the powder against the die wall, and k and n are the Hollomon equation parameters which characterize the law of hardening during deformation of a dense sample obtained from the compacted powder.
According to the authors of [57], the parameters k and n are very close to the reference values for the powder material. Therefore, in Equation (40), the parameters k and n are known in advance, and the parameters ΘM and ξ are determined by fitting the curve of Equation (40) to the experimental compaction curve of a particular powder. In the authors' opinion, the advantage of Equation (40) is the use of previously known parameters k and n with a clear physical meaning. However, the proposed equation does not take into account the phenomenon of particle rearrangement during powder compaction, nor the possible discrepancy between the strain hardening of the particles themselves and the strain hardening of a compact sample made from these particles.
In reality, such factors as the rearrangement of particles and the specific nature of particle strain hardening, especially in the compaction of fine-grained particles, are of fundamental importance in describing the compaction process and for the adequacy of the corresponding equation. Therefore, the free status of the parameters k and n in Equation (40) can take into account both the particle rearrangement factor and the specific nature of the strain hardening of these particles. In this regard, an attempt was made to use Equation (40) to approximate the experimental curves for the compaction of various powders, provided that the parameters k and n are free and the initial porosity is known. To perform the approximation, we transformed Equation (40) into the dependence (41) of the compaction pressure p on the relative density ρ, where K ≈ k (since (√3/2)·ξ ≈ 1) and n are constant parameters obtained from the experiment, and ρ0 is the initial relative density of the powder in the die.
To carry out the approximation, the experimental data for iron, copper and nickel powders from Table 1, as well as for the hard-to-compact iron powders from Table 5, were used. The results of the approximation are shown in Figure 9, and the values of the constants are listed in Table 14.
The results of the approximation of various powders by Equation (41) show that in some cases this equation can describe the PCP very accurately, despite the change in some of the principles underlying the derivation of Equation (40). However, hard powders compacted to a relative density greater than 0.95 cannot be adequately described by this equation.
A. Molinari et al. (2018) [27] also proposed a new equation for powder compaction in a rigid die. The authors studied the compaction of low-alloyed iron powders (alloyed with CuMo, Mo, CrMo, or CuMoNi) in a mixture with graphite and lubricant, and suggested describing PCP by the dependence of the absolute density on the average compaction pressure in the form of a power law, which can be written from their definitions as

ρ̄ = ρ̄0 + A × Pm^B, (42)

where ρ̄0 and ρ̄ are the absolute initial and current density of the powder in the die, respectively (the bar above the symbol distinguishes the absolute density from the relative density, which we denote as ρ), Pm is the average compaction pressure, and A and B are constant parameters obtained from the experiment. It should be noted that a feature of the experimental PCP in article [27] is a relatively narrow range of compaction pressures, up to 540 MPa, as well as the use of the absolute density of the material instead of the relative density. If we transform Equation (42) into the dependence of pressure on relative density, then we obtain the power law

p = A′ × (ρ − ρ0)^B′, (43)

where ρ0 and ρ are the relative initial and current densities, respectively, and A′ = 1/A^(1/B) and B′ = 1/B are constant parameters obtained from the experiment. Note that using a power law to describe PCP is not a new solution (see Equation (3)), but Equation (42) takes into account the real fact that at a density equal to the initial density ρ0 the compaction pressure is zero. However, in the long-known Equation (3) it is easy to take this fact into account if, instead of the current relative density ρ, we substitute the expression (ρ − ρ0)/(1 − ρ0), which allows taking into account not only the initial density but also the entire range of density changes from ρ0 to ρ = 1. Then Equation (3) acquires the form

p = c × [(ρ − ρ0)/(1 − ρ0)]^d, (44)

where c and d are constant parameters obtained from the experiment.
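The effect of the substitution that turns Equation (3) into Equation (44) can be checked numerically. The sketch below assumes the reconstructed forms p = c·ρ^d for Equation (3) and p = c·[(ρ − ρ0)/(1 − ρ0)]^d for Equation (44) and fits both to the same hypothetical compaction curve; the shifted form passes through p = 0 at ρ = ρ0 by construction, which is what improves the description of the initial stage of compaction.

```python
# Fit the plain power law (3), p = c*rho**d, and its shifted form (44),
# p = c*((rho - rho0)/(1 - rho0))**d, to one compaction curve and compare
# R^2. Forms are reconstructed from the text; the data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

RHO0 = 0.45  # assumed initial relative density of the powder in the die

def eq3(rho, c, d):
    return c * rho ** d

def eq44(rho, c, d):
    return c * ((rho - RHO0) / (1.0 - RHO0)) ** d

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

rho = np.array([0.50, 0.58, 0.66, 0.74, 0.82, 0.90])      # relative density
p = np.array([20.0, 80.0, 180.0, 330.0, 560.0, 900.0])    # pressure, MPa

popt3, _ = curve_fit(eq3, rho, p, p0=[1000.0, 3.0])
popt44, _ = curve_fit(eq44, rho, p, p0=[1000.0, 2.0])

print("Eq. (3)  R^2:", r_squared(p, eq3(rho, *popt3)))
print("Eq. (44) R^2:", r_squared(p, eq44(rho, *popt44)))
# Eq. (44) gives p = 0 at rho = RHO0, so it can track the initial
# stage of compaction more closely than the unshifted power law.
```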
Comparison of the adequacy of Equations (43) and (44) showed that, in describing the compaction of the iron, copper, and nickel powders that we used, Equation (44) was more accurate. This follows from the approximation results presented in Figure 10 and in Table 15. As seen from Figure 10 and Table 15, a power-law dependence of pressure on relative density does not provide high accuracy in describing PCP, especially in the initial stage of compaction, which Equation (43) describes less accurately. Obviously, for this reason the authors of [27] divided PCP into two stages, in one of which particles rearrange and in the other plastic deformation occurs. In this case, each stage of compaction was approximated by a power equation with its own constant parameters. However, such a method of PCP description was proposed by researchers [44] as early as 1975.
The ability to describe almost the entire compaction process with a single equation raises the question of how one equation can describe the different stages of PCP. In this case, two possibilities may exist: either the compaction process occurs according to one as-yet-unknown mechanism, or the equation must contain a number of constants, which take into account, to varying degrees, the manifestation of different mechanisms. In the next two parts of this article (Part 2 and Part 3), new equations for powder compaction in a rigid die are proposed, which contain three and four constants. In one case (Part 2), such equations are obtained by correcting various equations of M. Yu. Bal'shin, and in the other (Part 3), by transforming the plasticity equations of a porous body proposed by Kuhn and Downey or by Skorokhod, Martynova and Shtern.
From the analysis of the equations presented in the form of simple mathematical functions or of semi-empirical equations that take into account some physical constants of the powder material, the following conclusions can be formulated.
1. The considered equations for the compaction of powders in a rigid die in most cases contain two constants, which depend on the powder material and the type of equation and are determined from experiment. At the same time, the accuracy with which equations with two constants describe the process of powder compaction differs. In addition, some equations lead to a finite value of the compaction pressure upon reaching the pore-free state of the sample, while others lead to an infinite pressure.
2. High accuracy in describing the process of powder compaction using different equations is observed when a shortened range of density changes is used, i.e., up to 0.8-0.9, but such a description of the compaction process is not complete, since fundamental differences in the behavior of powders are observed when a relative density close to unity is reached. When compacting powders to a density greater than 0.95, the highest accuracy in describing the compaction process can be provided by equations with three constants, such as J. Secondi's equation.
3. Increasing the number of constants in the equation to three or four allows compaction mechanisms such as particle rearrangement and plastic deformation to be identified. This was shown in the discussion of the equations of A. R. Cooper and L. E. Eaton and of G. Aryanpour and M. Farzaneh, which consider that the rearrangement and plastic deformation mechanisms operate in parallel throughout the compaction of powders up to a relative density of 0.95.
4. In a number of the proposed equations, there was an attempt to take into account both the strength and plasticity characteristics of the powder compact material and the shape and size of the particles, as well as the internal friction forces between particles and the external friction forces between the particles and the die wall. But this came down to setting appropriate coefficients that are constant throughout the entire compaction process. Moreover, in most cases the values of these coefficients must be determined experimentally, and this neutralizes the researchers' desire to work without empirical constants.
Figure 1. Approximation dependences of experimental data on PCP for (a) iron, (b) copper, and (c) nickel using exponential Equation (2) (dashed line) and power Equation (3) (solid line). Here and below, light circles are the experimental data.
Figure 2. Comparison showing that the new Equation (8) describes PCP with higher accuracy than the original Heckel's Equation (6); the conditional initial density ρc in Equation (8), obtained as a result of the approximation, is lower than the actual initial density ρ0, which makes it possible to describe the initial stage of the compaction process accurately.
Figure 9. Approximation dependences according to Equation (41) as a result of processing experimental data on the compaction of (a) iron, (b) copper, and (c) nickel, as well as hard-to-compress iron powders (d) FeKH3, (e) FeKH6, and (f) FeKH9.
Figure 10. Approximation curves according to Equation (43) (dotted line) and Equation (44) (solid line) as a result of processing experimental data on the compaction of (a) iron, (b) copper, and (c) nickel powders.
Table 2. Constants and coefficient of determination R² obtained via approximation of experimental data on PCP for iron, copper, and nickel using exponential (2) and power (3) equations.
Table 6. Constants and coefficient of determination R² obtained via approximation of experimental data on the compaction of titanium carbide powder.
Table 7. Constants and coefficient of determination R² obtained via approximation of experimental data on the compaction of powders of iron, copper, nickel, and FeKH3, FeKH6 and FeKH9 using Equation (21).
Table 8. Experimental data on the compaction of coarse and fine iron and copper powders [55].
It follows from the presented approximation results that Equation (36), which contains four constant parameters (the fifth parameter ρ∞ is predetermined), is capable of describing the compaction process of hard metal and ceramic powders with high accuracy.
Table 12. Constant parameters in Equation (36) resulting from the approximation of experimental data on the compaction of titanium carbide and of the iron powders FeKH3, FeKH6, and FeKH9 from Table 5.
Table 13. The value of the constant m in Equation (39) and the coefficient of determination R² obtained after the approximation of experimental data on the compaction of iron, copper, and nickel powders from Table 1.
Table 14. Constant parameters in Equation (41) obtained via approximation of experimental data on the compaction of iron [39], copper [40], and nickel [41] powders, as well as FeKH3, FeKH6, and FeKH9 iron powders from the book [51].
Table 15. Constants in Equations (43) and (44) after approximation of experimental data on the compaction of iron, copper, and nickel powders from Table 1. | 2024-03-22T16:04:26.739Z | 2024-03-18T00:00:00.000 | {
"year": 2024,
"sha1": "226af68519a841d2e3ddae316d4b4c5d876b6a0c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2674-0516/3/1/8/pdf?version=1710752552",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8cb5a900791e180b08c3fd26e3448d393357d32b",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
1074299 | pes2o/s2orc | v3-fos-license | Kikuchi-Fujimoto disease
Kikuchi-Fujimoto disease (KFD) is a benign and self-limited disorder, characterized by regional cervical lymphadenopathy with tenderness, usually accompanied by mild fever and night sweats. Less frequent symptoms include weight loss, nausea, vomiting, and sore throat. Kikuchi-Fujimoto disease is an extremely rare disease known to have a worldwide distribution with higher prevalence among Japanese and other Asiatic individuals. The clinical, histopathological and immunohistochemical features appear to point to a viral etiology, a hypothesis that still has not been proven. KFD is generally diagnosed on the basis of an excisional biopsy of affected lymph nodes. Its recognition is crucial especially because this disease can be mistaken for systemic lupus erythematosus, malignant lymphoma or even, though rarely, for adenocarcinoma. Clinicians' and pathologists' awareness of this disorder may help prevent misdiagnosis and inappropriate treatment. The diagnosis of KFD merits active consideration in any nodal biopsy showing fragmentation, necrosis and karyorrhexis, especially in young individuals presenting with posterior cervical lymphadenopathy. Treatment is symptomatic (analgesics-antipyretics, non-steroidal anti-inflammatory drugs and, rarely, corticosteroids). Spontaneous recovery occurs in 1 to 4 months. Patients with Kikuchi-Fujimoto disease should be followed up for several years to survey the possibility of the development of systemic lupus erythematosus.
Initially described in Japan, KFD was first reported in 1972 almost simultaneously by Kikuchi [1] and by Fujimoto et al. [2] as a lymphadenitis with focal proliferation of reticular cells accompanied by numerous histiocytes and extensive nuclear debris [3].
Epidemiology
Kikuchi-Fujimoto disease is an extremely rare disease known to have a worldwide distribution with a higher prevalence among Japanese and other Asiatic individuals [4]. Only isolated cases are reported in Europe. Affected patients are most often young adults under the age of 30 years; the disease is seldom reported in children. A female preponderance of cases has been underlined in the literature (female to male ratio 4:1). Recent reports seem to indicate that the female preponderance was overemphasized in the past and that the actual ratio is closer to 1:1 [4,5].
Etiology and pathogenesis
There is much speculation about the etiology of KFD. A viral or autoimmune cause has been suggested. The role of Epstein-Barr virus, as well as of other viruses (HHV6, HHV8, parvovirus B19), in the pathogenesis of KFD remains controversial and not convincingly demonstrated [4]. A viral infection is, nonetheless, plausible by virtue of the clinical manifestations described by Unger et al. [6], which include an upper respiratory prodrome, atypical lymphocytosis and lack of response to antibiotic therapy, as well as certain histopathologic features (i.e., T-cells as revealed by immunological marker studies). KFD has also been recorded in HIV- and HTLV-1-positive patients [7].
On the other hand, electron microscopic studies have identified tubular reticular structures in the cytoplasm of stimulated lymphocytes and histiocytes in patients with KFD [3]. Since these structures have also been noted within endothelial cells and lymphocytes of patients with systemic lupus erythematosus (SLE) and other autoimmune disorders, some authors hypothesized that KFD may reflect a self-limited autoimmune condition induced by virus-infected transformed lymphocytes [8]. It is possible that KFD may represent an exuberant T-cell mediated immune response in a genetically susceptible individual to a variety of non-specific stimuli [4].
Although the mechanism of cell death involved in KFD has not been extensively studied, Ohshima et al. have shown that apoptotic cell death may play a role in the pathogenesis of the disease [9]. According to these authors, proliferating CD8 positive T-cells may act as "killers" and "victims" in the apoptotic process via Fas-and perforine-pathways.
Clinical manifestations
The onset of KFD is acute or subacute, evolving over a period of two to three weeks. Cervical lymphadenopathy is almost always present, consisting of tender lymph nodes that involve mainly the posterior cervical triangle. Lymph node size has been found to range from 0.5 to 4 cm, but it may reach 5 to 6 cm and is rarely larger than 6 cm. Generalized lymphadenopathy can occur [5,10] but is very rare. In addition to lymphadenopathy, 30 to 50% of patients with KFD may have fever, usually low-grade, associated with upper respiratory symptoms. Less frequent symptoms include weight loss, nausea, vomiting, sore throat and night sweats [11,12]. Leukopenia can be found in up to 50% of cases. Atypical lymphocytes in the peripheral blood have also been observed. Involvement of extranodal sites in KFD is uncommon, but skin, eye and bone marrow involvement, as well as liver dysfunction, have been reported [4]. KFD has also been reported as a cause of prolonged fever of unknown origin [13]. Although the disease has been linked to SLE, as well as to other autoimmune conditions [4,14], the real strength of such associations remains to be clarified. There have been anecdotal reports of unusual features of KFD including carcinoma [15], diffuse large B-cell lymphoma [16] and hemophagocytic syndrome [17]. There are occasional reports describing cases of extranodal skin involvement or, even more rarely, of fatal multicentric disease.
Diagnosis
Kikuchi-Fujimoto disease is generally diagnosed on the basis of an excisional biopsy of affected lymph nodes. No specific diagnostic laboratory tests are available. The results of a wide range of laboratory studies are usually normal. Nevertheless, some patients have anemia, slight elevation of the erythrocyte sedimentation rate and even leukopenia. Of note, one third of patients present atypical peripheral blood lymphocytes [5]. Characteristic histopathologic findings of KFD include irregular paracortical areas of coagulative necrosis with abundant karyorrhectic debris, which can distort the nodal architecture, and a large number of different types of histiocytes at the margin of the necrotic areas. The karyorrhectic foci are formed by different cellular types, predominantly histiocytes and plasmacytoid monocytes but also immunoblasts and small and large lymphocytes. Neutrophils are characteristically absent and plasma cells are either absent or scarce. Importantly, atypia in the reactive immunoblastic component is not uncommon and can be mistaken for lymphoma [18]. The immunophenotype of KFD typically consists of a predominance of T-cells, with very few B-cells. There is an abundance of CD8+ T-cells over CD4+ T-cells. The histiocytes express histiocyte-associated antigens such as lysozyme, myeloperoxidase (MPO) and CD68.
Finally, striking plasmacytoid monocytes are also positive for CD68 but not for MPO [4].
Differential diagnosis
Although KFD is considered very uncommon, this disorder must be included in the differential diagnosis of lymph node enlargement, since its course and treatment differ dramatically from those of lymphoma, tuberculosis and SLE. The histological differential diagnosis of KFD mainly includes reactive lesions such as lymphadenitis associated with SLE or herpes simplex, non-Hodgkin's lymphoma, plasmacytoid T-cell leukemia, Kawasaki's disease, myeloid tumor and even metastatic adenocarcinoma [4].
The differentiation of KFD from SLE can sometimes be problematic because both can show similar clinical and histological features. Furthermore, KFD has been reported in association with SLE. In this case, before making the diagnosis of KFD, laboratory tests including C3, C4, anti-Sm, and LE cells are needed to rule out SLE.
The diagnosis of KFD is generally not difficult, although early lesions lacking overt necrosis can be misdiagnosed as malignant lymphoma due to the presence of abundant immunoblasts [7]. Features of KFD that may help prevent its misdiagnosis as malignant lymphoma include incomplete architectural effacement with patent sinuses, the presence of numerous reactive histiocytes, relatively low mitotic rates, and the absence of Reed-Sternberg cells.
The recognition of KFD is important because it can spare patients laborious investigations for infectious and lymphoproliferative diseases.
Clinical course and management
Kikuchi-Fujimoto disease is typically self-limited within one to four months. A low but possible recurrence rate of 3 to 4% has been reported [3]. In some few patients, SLE may occur some years later. No risk to other family members is felt to be associated with KFD [7]. Symptomatic measures aimed to relief the distressing local and systemic complains should be employed. Analgesics-antipyretics and nonsteroidal anti-inflammatory drugs may be used to alleviate lymph node tenderness and fever. The use of corticosteroids has been recommended in severe extranodal or generalized KFD but is of uncertain efficacy. Surgical consultation may be indicated for a diagnostic excisional lymph node biopsy. Patients with KFD require a systematic survey and regular follow-up for several years to rule out the development of SLE. The cervical lymphadenopathy runs a benign course and appears to resolve spontaneously 1 to 6 months after definite diagnosis. | 2018-04-03T01:01:21.477Z | 2006-01-01T00:00:00.000 | {
"year": 2006,
"sha1": "8e6165bae932255904060bbc29ecd44b3792cb4c",
"oa_license": "CCBY",
"oa_url": "https://ojrd.biomedcentral.com/track/pdf/10.1186/1750-1172-1-18",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "87a824c92c4dcc9ecb53fa194de57634b246808d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204909419 | pes2o/s2orc | v3-fos-license | Quercetin Directly Targets JAK2 and PKCδ and Prevents UV-Induced Photoaging in Human Skin
Quercetin is a naturally occurring polyphenol present in various fruits and vegetables. The bioactive properties of quercetin include anti-oxidative, anti-cancer, anti-inflammatory, and anti-diabetic effects. However, the effect of quercetin on skin aging and the direct molecular targets responsible have remained largely unknown. Herein, we investigated the protective effect of quercetin against UV-mediated skin aging and the molecular mechanisms responsible. Treatment with quercetin suppressed UV-induced matrix metalloproteinase-1 (MMP-1) and cyclooxygenase-2 (COX-2) expression and prevented UV-mediated collagen degradation in human skin tissues. Quercetin exerted potent inhibitory effects towards UV-induced activator protein-1 (AP-1) and nuclear factor-kappa B (NF-κB) activity. Further examination of the upstream signaling pathways revealed that quercetin can attenuate UV-mediated phosphorylation of extracellular signal-regulated kinase (ERK), c-Jun N terminal kinases (JNK), protein kinase B (Akt), and signal transducer and activator of transcription 3 (STAT3). Kinase assays using purified protein demonstrated that quercetin can directly inhibit protein kinase C delta (PKCδ) and Janus kinase 2 (JAK2) kinase activity. Quercetin was observed to bind to PKCδ and JAK2 in pull-down assays. These findings suggest that quercetin can directly target PKCδ and JAK2 in the skin to elicit protective effects against UV-mediated skin aging and inflammation. Our results highlight the potential use of quercetin as a natural agent for anti-skin aging applications.
Introduction
The process of skin aging involves both endogenous and exogenous aging and is generally associated with deep wrinkles, pigmentation, sagging, and laxity [1,2]. Chronic or excessive exposure to UV radiation can promote epidermal inflammation, wrinkle formation, and tumorigenesis [3]. UV exposure to the skin causes the induction of matrix metalloproteinases (MMPs), particularly MMP-1, which mediates the degradation of collagen leading to skin aging [3,4]. Cyclooxygenase 2 (COX-2) is an inducible enzyme functioning as a pro-inflammatory mediator which has been reported to participate in skin photoaging and skin cancer development [5]. UV-driven upregulation of enzymes such as MMP-1 and COX-2 has been known as a major factor promoting skin aging and thus has been recognized as a therapeutic target.
UV radiation activates a variety of signaling molecules involved in the regulation of skin aging [6]. Exposure to UV induces the activation of protein kinase C (PKC) family members, including PKC-delta (PKCδ), which in turn interact with the mitogen-activated protein kinase (MAPK) pathways [7,8]. Activation of MAPKs, including extracellular signal-regulated kinases (ERKs), c-Jun N-terminal kinases (JNKs), and p38, is closely associated with the UV-mediated skin aging process [6,9]. Specifically, MAPKs can stimulate transcription factors such as nuclear factor-kappa B (NF-κB) and activator protein-1 (AP-1), which promote the degradation of the extracellular matrix via upregulation of MMP-1 and COX-2 expression and repress collagen production in the skin [10,11]. In addition, the Janus kinase 2/signal transducer and activator of transcription 3 (JAK2/STAT3) signaling pathway plays a critical role in skin inflammation [12,13]. These studies provide evidence that PKCδ and JAK2 act as key intermediates in UV-mediated signaling and have the potential to be therapeutic targets in preventing skin aging.
Quercetin (3,3′,4′,5,7-pentahydroxyflavone), a well-known member of the flavonoid family, has been reported to elicit a variety of beneficial effects, including anti-oxidant, anti-inflammatory, and anti-carcinogenic activities [14,15]. However, the anti-skin aging efficacy of quercetin and its direct molecular targets are not fully understood. In the present study, we sought to investigate the mechanisms of action responsible for the effects of quercetin against UV-induced skin aging in skin cells and human skin tissue.
Quercetin Prevents UV-Induced Skin Aging in Human Skin Tissue
To investigate the protective effects of quercetin against UV-induced skin aging, human skin tissues were treated with quercetin and irradiated with UV once a day for 10 successive days under ex vivo conditions (Figure 1A). As MMP-1 functions as a key enzyme in skin wrinkle formation [3], we first examined the inhibitory effect of quercetin on UV-induced MMP-1 levels. UV irradiation increased MMP-1 expression, which was suppressed by treatment with quercetin in human skin tissues (Figure 1B). Because UV also mediates skin aging through promoting inflammatory responses, we evaluated the effects of quercetin on COX-2 expression, a major inducer of inflammation [5]. Quercetin attenuated UV-induced COX-2 expression in human skin tissues (Figure 1B). Analysis of collagen content demonstrated that quercetin can prevent UV-induced collagen degradation in human skin tissues (Figure 1C). These results indicate that quercetin has anti-wrinkle and anti-inflammatory effects in human skin tissue.
Quercetin Inhibits UV-Induced AP-1 and NF-κB Activation
To determine whether the inhibitory effects of quercetin on UV-induced skin aging are mediated by transcriptional regulation, we examined UV-induced cox-2 promoter activity with a luciferase reporter assay in JB6 P+ epidermal cells. Quercetin significantly reduced UV-induced cox-2 promoter activity in a dose-dependent manner (Figure 2A). Previous studies have shown that UV irradiation activates several transcription factors including AP-1 and NF-κB, which subsequently induce MMP-1 and COX-2 expression [10,11]. We next investigated whether quercetin affects AP-1 or NF-κB activation using cells stably transfected with an NF-κB or AP-1 luciferase reporter plasmid. Quercetin significantly inhibited UV-induced activation of both targets (Figure 2B,C). These results demonstrate that quercetin inhibits UV-induced skin aging by suppressing AP-1 and NF-κB.
Figure 2. Inhibitory effect of quercetin against UV-induced COX-2 promoter activity and AP-1 and NF-κB activation. Cells stably expressing COX-2 promoter reporter or AP-1 or NF-κB reporter plasmids were used. Quercetin was applied at the indicated concentrations for 1 h prior to UV irradiation. Luciferase activity was measured for (A) COX-2, (B) AP-1, and (C) NF-κB. ## indicates significant (p < 0.001) induction by UV compared to the untreated control. * and ** indicate significant (p < 0.05 and p < 0.001, respectively) inhibition of activity by quercetin compared to the UV-only group.
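For orientation, the sketch below shows how reporter readouts of this kind are typically reduced to fold induction and percent inhibition; the luminescence values are invented for illustration, as the paper reports only normalized activities.

```python
# Sketch of how luciferase reporter readouts are typically reduced to
# fold induction and percent inhibition. Values are made up; the paper
# does not give raw luminescence numbers.
import numpy as np

rlu_untreated = np.array([1000, 950, 1050])      # relative light units
rlu_uv = np.array([5200, 4800, 5100])            # UV-only group
rlu_uv_quercetin = np.array([2100, 1900, 2000])  # UV + quercetin

fold_uv = rlu_uv.mean() / rlu_untreated.mean()
fold_q = rlu_uv_quercetin.mean() / rlu_untreated.mean()
# inhibition relative to the UV-induced increase above baseline
inhibition = 100 * (rlu_uv.mean() - rlu_uv_quercetin.mean()) / (
    rlu_uv.mean() - rlu_untreated.mean()
)
print(f"UV induction: {fold_uv:.1f}-fold over control")
print(f"UV + quercetin: {fold_q:.1f}-fold over control")
print(f"Inhibition of UV-induced activity: {inhibition:.0f}%")
```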
Expression of Dominant Negative PKCδ Suppresses UV-Induced MAPK and Akt Activation
Previous reports suggest that PKC family members, particularly PKCδ, may act as upstream regulators of MAPK and Akt [18]. To elucidate the role of PKCδ in regulating UV-induced signaling pathways, we examined MAPK (ERK, JNK, and p38) and Akt phosphorylation after UV irradiation in cells expressing dominant negative PKCδ (PKCδ-DN). UV irradiation increased the phosphorylation levels of ERK, JNK, p38, and Akt, with peak induction at 30 min (Figure 4 and Figure S1A). In contrast, inhibition of PKCδ activity by expressing PKCδ-DN suppressed UV-stimulated phosphorylation of ERK, JNK, and Akt, but not the phosphorylation of p38 (Figure 4). Cells expressing PKCδ-DN also showed reduced MMP-13 and COX-2 expression levels (Figure S1B). These results show that PKCδ functions as an upstream regulator of ERK, JNK, and Akt in the UV signaling pathway and reflect the changes seen after quercetin treatment.
Quercetin Directly Binds to and Attenuates the Activity of PKCδ and JAK2 Kinase
As suppressing PKCδ produced signaling patterns similar to those of quercetin treatment, we questioned whether quercetin affects PKCδ activity. To examine the effect of quercetin on PKCδ kinase activity, we performed an in vitro kinase assay using purified PKCδ. Quercetin inhibited PKCδ activity in a dose-dependent manner (Figure 5A). In addition, as STAT3 is a direct substrate of JAK2, we also examined whether quercetin targets JAK2. Treatment with quercetin reduced the activity of JAK2 kinase (Figure 5B). A pull-down assay using quercetin-conjugated Sepharose 4B beads showed that quercetin directly interacts with PKCδ isolated from both skin cells and tissue extracts (Figure 5C). Also, quercetin-Sepharose 4B beads, but not Sepharose 4B beads alone, bound JAK2 from skin cells and human skin tissue extracts (Figure 5D). These results indicate that quercetin can bind to JAK2 and PKCδ in the skin and inhibit their kinase activities.
Docking Model of Quercetin with PKCδ and JAK2
To further investigate how quercetin binds to PKCδ and JAK2, we modeled the structure of quercetin in complex with PKCδ and JAK2. Quercetin can be placed in the hydrophobic region of their kinase domains, with a number of hydrogen bonds that can hold the molecule in position (Figure 6). Quercetin was able to dock to PKCδ in the same hydrophobic area (Figure 6A). Its binding modes to the two JAK2 kinase domains were almost identical, suggesting that quercetin may bind to and inhibit the function of both domains (Figure 6B). However, this binding mode was different from that to PKCδ.
Taken together, our findings provide evidence that quercetin attenuates UV-stimulated MMP-1 and COX-2 expression via direct inhibition of JAK2 and PKCδ kinase activities, thereby preventing UV-induced skin aging.
Discussion
Quercetin, found in red wine, fruits, and vegetables, exhibits beneficial effects against various diseases such as diabetes, cancer, and inflammatory disorders [14,15]. However, little is known about its anti-skin aging effect and the target molecules of quercetin in UV-mediated signaling. Many studies have reported that UV-induced inflammatory responses and degradation of extracellular matrices are the primary mechanisms responsible for the development of skin aging [4,19]. We discovered that quercetin could block UV-induced COX-2 and MMP-1 expression and collagen degradation in human skin. Examination of the molecular mechanism revealed that quercetin exerts protective effects against UV-mediated skin aging via directly targeting PKCδ and JAK2.
UV exposure induces MMP-1 expression, collagen degradation, and inflammation by activation of the transcription factors such as AP-1 and NF-κB [10,11,20]. AP-1 binds to specific regions of the promoter of cox-2 and mmp-1 [10]. Similarly, NF-κB is also known to regulate inflammatory related genes and mmp-1 [11,21]. In our study, quercetin suppressed cox-2 promoter activity, as well as AP-1 and NF-κB activation. The potent inhibitory effect towards AP-1 and NF-κB could be a key reason for the observed anti-skin aging phenotype driven by quercetin.
Because quercetin inhibited MAPK, Akt, and STAT3 signaling pathways, we hypothesized that the molecular targets of quercetin could be upstream kinases of MAPK, Akt, and STAT3. JAK2 kinase is a well-known upstream regulator of STAT3, and the JAK-STAT pathway is heavily involved in inflammation and growth regulation [12,13]. In addition, PKCδ is an upstream regulator of the MAPK and Akt signaling pathways [7,22]. To confirm the role of PKCδ in UV-induced MAPK signal pathway in skin cells, a PKCδ dominant negative protein was used in our study. Our results showed that PKCδ was involved in controlling UV-stimulated MAPK and Akt signaling.
PKCδ regulates various cellular functions including growth, proliferation, and inflammatory processes [23]. In addition, a previous study has shown that PKCδ modulates the expression of collagen genes in skin cells [24]. Novel PKC inhibitors have been shown to mediate UV-induced MMP-1 expression in a previous study [25]. JAK2 and PKCδ are involved not only in skin aging but also in oncogenic alterations such as cell survival, proliferation, and invasion [23,26]. For these reasons, our results cautiously suggest that the anti-cancer effect of quercetin may be partly related to the inhibition of JAK2 and PKCδ.
Our findings demonstrate the effect of quercetin on target molecules for UV-induced skin aging, highlighting its potential as a cosmeceutical ingredient for anti-aging products.
Cell Culture and UV Irradiation
The JB6 P+ and JB6 P+ PKCδ-DN cells (kindly provided by Dr. Zigang Dong, University of Minnesota) were cultured in MEM (Corning, New York, NY, USA) containing 5% FBS (Gibco, Waltham, MA, USA) and 1% penicillin/streptomycin (Corning, New York, NY, USA) in a 37 °C incubator under a humidified atmosphere of 5% CO2. The solar UV light system (Q-Lab Corporation, Cleveland, OH, USA) was used as the UV light source. Cells were exposed to UV at 26 kJ·cm−2 for 26 min. The percentages of UVA and UVB from the lamps, measured with a UV meter, were 94.5% and 5%, respectively.
Ethics Statement
Human abdominal skin tissue was purchased from Biopredic International (Rennes, France) following French Law L.1245-2 CSP. The French Ministry of Higher Education and Research has approved Biopredic International, a human skin supplier, for the acquisition, transformation, sale, and export of human biological material to be used in research (AC-2013-1754, 13 July 2018). This study complied with all principles set forth in the Helsinki Declaration.
Excised Human Skin and UV Irradiation
Skin tissue (diameter of 10 mm) was incubated with DMEM (Corning, New York, NY, USA) containing 10% fetal bovine serum (Gibco, Waltham, MA, USA) with penicillin/streptomycin (Corning, New York, NY, USA) at 37 °C in a humidified incubator containing 5% CO2. Ex vivo human skin tissues were treated with quercetin and exposed to UV daily for 10 consecutive days.
Kinase Assays
The in vitro kinase assays were conducted using the SelectScreen Kinase profiling service (Thermo Fisher Scientific, Waltham, MA, USA).
Immunoblotting
After cell lysis, protein concentrations were quantified using a Pierce BCA Protein Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA) as described in the manufacturer's manual. Lysate proteins from cells and human skin tissues were separated by SDS-PAGE and transferred to a nitrocellulose membrane (Bio-Rad, Hercules, CA, USA). After blocking in 5% skim milk in TBS containing 0.1% Tween 20 (TBST), the membrane was incubated with the corresponding antibodies at 4 °C overnight. After washing with TBST, a horseradish peroxidase-conjugated secondary antibody was applied to the membranes and protein bands were visualized with Western Lightning Plus-ECL (Perkin Elmer, Waltham, MA, USA).
Histological Analysis
Human skin tissues were fixed with 4% formaldehyde and embedded in paraffin. The sliced sections were stained with Masson's trichrome for collagen fibers.
In Vitro and Ex Vivo Pull-Down Assays
Lysates from JB6 P+ cells and ex vivo skin tissue samples were mixed with quercetin-conjugated Sepharose 4B beads or with Sepharose 4B beads alone (as a control) in reaction buffer (150 mM NaCl, 5 mM EDTA, 50 mM Tris pH 7.5, 1 mM DTT, 1 µg protease inhibitor mixture, 0.02 mM PMSF, 2 µg/mL bovine serum albumin and 0.01% Nonidet P-40) at 4 °C overnight with rotation. After washing the beads five times with washing buffer (150 mM NaCl, 5 mM EDTA, 50 mM Tris pH 7.5, 1 mM DTT, 0.02 mM PMSF, and 0.01% NP-40), binding was detected by immunoblotting with the corresponding antibodies.
Complex Structure Computational Modeling
A docking simulation was executed with AutoDock Vina (version 1.1.2, The Scripps Research Institute, La Jolla, CA, USA) [28] using three-dimensional structures of quercetin, PKCδ, and JAK2. The structure of quercetin was retrieved from PubChem [29] (PubChem ID: 5280343), and the structures of the JAK2 kinase domains corresponding to residues 545-809 (PDB ID: 5UT3) and 849-1124 (PDB ID: 4D1S) were retrieved from the Protein Data Bank [30]. The structural model of PKCδ in the SWISS-MODEL repository [31], which was based on the structure of a homologue with a sequence identity of 72%, was used in the docking simulation. Up to 10 docking poses were sought within a cubic box of size 30 Angstroms placed near the ATP binding site, and the optimal docking pose was reported in this study.
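As a rough illustration of this setup, the following sketch drives an AutoDock Vina 1.1.2 run from Python with a 30 Angstrom cubic search box and up to 10 poses, mirroring the parameters stated above. The file names and box-center coordinates are hypothetical placeholders, and the receptor and ligand must already be prepared in PDBQT format.

```python
# Illustrative reconstruction of the docking setup described above,
# driving the AutoDock Vina 1.1.2 command-line tool from Python.
# File names and box-center coordinates are placeholders.
import subprocess

cmd = [
    "vina",
    "--receptor", "jak2_kinase_domain.pdbqt",  # e.g., prepared from PDB 5UT3
    "--ligand", "quercetin.pdbqt",             # PubChem CID 5280343
    "--center_x", "10.0", "--center_y", "12.0", "--center_z", "-5.0",
    "--size_x", "30", "--size_y", "30", "--size_z", "30",  # 30 A cubic box
    "--num_modes", "10",   # up to 10 docking poses, as in the text
    "--out", "quercetin_jak2_poses.pdbqt",
]
subprocess.run(cmd, check=True)
```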
Statistical Analysis
Data are expressed as mean ± standard deviation (SD) of three independent experiments. Statistical significance between groups was determined using ANOVA with a post-hoc Bonferroni test in SPSS (ver. 20; SPSS, Inc., Chicago, IL, USA). A value of p < 0.05 was considered to indicate statistical significance.
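A Python stand-in for this workflow is sketched below: one-way ANOVA across treatment groups followed by Bonferroni-corrected pairwise t-tests. The group values are hypothetical; the study itself used SPSS.

```python
# Minimal sketch: one-way ANOVA across treatment groups followed by
# Bonferroni-corrected pairwise t-tests (significance at p < 0.05).
# The data are hypothetical, not the study's measurements.
import numpy as np
from scipy import stats

control = np.array([1.00, 0.95, 1.05])
uv_only = np.array([3.10, 2.90, 3.30])
uv_quercetin = np.array([1.60, 1.75, 1.50])
groups = {"control": control, "UV": uv_only, "UV+quercetin": uv_quercetin}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni post-hoc: multiply each pairwise p-value by the number of
# comparisons (capped at 1.0).
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```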
Conclusions
Our findings provide evidence that quercetin directly targets PKCδ and JAK2 protein kinases to effectively inhibit the hallmarks of UV-induced skin aging including wrinkle formation and inflammation in human skin tissues, and may have applications as a cosmeceutical ingredient to prevent skin photoaging.
Conflicts of Interest:
The authors declare no conflict of interest. | 2019-10-27T13:06:55.802Z | 2019-10-23T00:00:00.000 | {
"year": 2019,
"sha1": "1d2c1ff44eff83b7f74b99efd5fe733da2a14d7d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/20/21/5262/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d22cb5d6310f24acf597e5110890aa6e182d3df",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
149616567 | pes2o/s2orc | v3-fos-license | The Color of Law: A Forgotten History of How Our Government Segregated America
The Color of Law examines the local, state and federal housing policies that mandated segregation. Rothstein notes that the Federal Housing Administration (FHA), which was established in 1934, furthered the segregation efforts by refusing to insure mortgages in and near African-American neighborhoods, a policy known as "redlining." At the same time, the FHA was subsidizing builders who were mass-producing entire subdivisions for whites, with the requirement that none of the homes be sold to African-Americans.
With bountiful evidence and rigorous detail, Rothstein rejects the prevailing view, upheld to this day by the Supreme Court, that individual decisions create a natural geography of de facto racial segregation in our cities, and argues instead that our government at all levels abetted and sponsored what is in fact de jure segregation. This is the heart of The Color of Law. According to Rothstein, the government has systematically violated the rights it created for black Americans in the 13th, 14th, and 15th Amendments to the Constitution, and his book is essentially a treatise that methodically uncovers this narrative of history.
Each chapter of the book presents a careful yet forceful analysis of historical data, records, and events that uncover this de jure segregation across all facets of our cities. Rothstein demonstrates how public housing, zoning, insurance policies, taxation, labor unions, and police forces all developed and executed racially targeted policies and practices that created widespread discrimination and inequality at the hands of the government.
A NEW NARRATIVE OF RACIAL SEGREGATION
If de jure segregation occurred even in San Francisco, the most liberal of our modern cities, then must it have occurred everywhere? One of the cases Rothstein presents in the Bay Area occurred in the mid-1950s, when Ford Motors closed its plant in racially diverse Richmond and moved 50 miles south to a racially-restrictive neighborhood in Milpitas. Developments built near the new plant were being subsidized by the Federal Housing Administration and were explicitly designated for whites only. As a result, most black workers could not move near the new plant and either lost work or faced long commutes if they remained in Richmond. For Rothstein, the story is a typical one in this country; a story not made by racially biased individual actors, but rather orchestrated knowingly by the government. Ultimately, this type of story results in the economic inequality and racial segregation we see between cities like Richmond and Milpitas today.
Rothstein's argument is strengthened by balancing a discussion of sweeping, large-scale violations with those that are more personal and shocking in their injustices. First, he thoroughly discusses the explicitly racist principles outlined in the 1935 Underwriting Manual of the Federal Housing Administration, which would not insure mortgages to African American families because "incompatible racial groups should not be permitted to live in the same communities." For nearly the next twenty years, subsequent editions repeated this guideline. Rothstein then rounds out his final chapters by giving a passionate account of government failure to enforce the basic rights of many black families who faced extreme physical violence and property damage if they moved into a white neighborhood and were often quickly driven out. The depth of his research at both scales powerfully illustrates the pervasiveness of racial segregation in our society.
Rothstein makes clear that de jure segregation occurs not just spatially, but economically as well. In his chapter discussing the suppressed incomes of black families, Rothstein explains how black families' inability to afford to live in middle-class communities is a direct result of federal and state labor market policies that depressed African American wages with undisguised racial intent. Rothstein dives into the history of specific pieces of legislation like the Wagner Act, which legally empowered labor organizations that refused to admit African Americans. This is a compelling framework that makes clear that income segregation and the wealth gap are by no means de facto, but rather the outcome of economic policies created by powerful organizations. Here Rothstein posits that de jure segregation is not limited to one side of the political spectrum, and demonstrates how many institutions played a part in the segregation and discrimination of African Americans.
Rothstein's argument throughout The Color of Law is clearly articulated and is certainly an important one, but it should also be considered as merely the latest development in a new body of literature that accepts this disturbing narrative of our country's history as given. Rothstein builds on important works from Weaver (1948), Kushner (1980), Hirsch (1983), Jackson (1985), Massey (1993), and Sugrue (1996), but his argument is exacting in its classification of segregation as de jure. Rothstein himself is a research associate at the Economic Policy Institute and a Fellow at the Thurgood Marshall Institute of the National Association for the Advancement of Colored People (NAACP) Legal Defense Fund, and as such, his examination is largely limited to the economic and legal implications of segregation. Thus, while he presents a comprehensive account in favor of explicit remediation of the inequities caused by racially homogenous neighborhoods, a consideration of the social and cultural impacts of segregation is conspicuously absent.
Near the end of his book, Rothstein concedes that a discussion of real remedies and solutions is outside the scope of this research. Yet this lack of attention to feasible policy solutions at both the federal level and the more local, contextual level feels inadequate. Throughout this book, Rothstein also curiously rejects the phrase "people of color," essentially suggesting without equally convincing evidence that other minorities do not face de jure segregation to the same extent as African Americans. While this seems to be an unproductive way to position his profoundly important research, it certainly leaves room for future literature to address what Rothstein has explicitly chosen to ignore.
Nonetheless, Rothstein concludes his book with an anecdote that is both compelling and revealing in the context of the magnitude of research he has presented. He looks briefly at several U.S. history textbooks widely used in public schools and points to passages that reveal how the myth of de facto segregation is perpetuated and described as a passive force outside the control of government. In this sense, the history of racial segregation is not so much "forgotten" as it is reframed as one that acquits our federal, state, and local governments of responsibility. The value in Rothstein's book comes from challenging this narrative, and from providing a history that acknowledges this massive body of evidence.
Ultimately, Rothstein's book is an essential read for all, but particularly for those whose work may be based on an old narrative of racial segregation. It is a meticulous and deliberate account of history that must be relearned and brought to the fore if America is ever to heal its racial fracturing. | 2018-12-05T12:05:14.329Z | 2017-05-02T00:00:00.000 | {
"year": 2017,
"sha1": "0978232920f2b66280aeba97e4d98230907e6f6e",
"oa_license": null,
"oa_url": "https://escholarship.org/content/qt7v76m4nd/qt7v76m4nd.pdf?t=pood8l",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c1235ccbb8ca53963345a86687a64213bf38b410",
"s2fieldsofstudy": [
"Law",
"History",
"Sociology"
],
"extfieldsofstudy": [
"Political Science"
]
} |
181913951 | pes2o/s2orc | v3-fos-license | Health insurance status affects hypertension control in a hospital based internal medicine clinic
Hypertension is a worldwide disorder that contributes significantly to morbidity, mortality, and healthcare costs in both developed and developing communities. A retrospective cohort study of hypertensive patients attending the Internal Medicine continuity clinic at Nashville General Hospital (NGH) between January and December 2007 was conducted. Given the easy access to health care at NGH and affordable blood pressure (BP) medications, we explored the ability to achieve optimal BP control (<140/90 mmHg) and evaluated which factors are associated with it. Of the 199 subjects, 59% achieved the BP goal of <140/90 mmHg. The mean BP was 139/80 mmHg. Health insurance status was associated with SBP and DBP (all P < 0.046). Patients with health insurance had a 2.2-fold increased odds of achieving BP control compared to patients without health insurance (P = 0.025). Furthermore, the number of BP medications used was significantly associated with SBP and DBP (all P < 0.003). Patients taking more than three BP medications had a 58% reduced odds of achieving optimal BP control compared to patients taking one medication (P = 0.039). Ethnicity was not associated with achieving BP control. Our study revealed that the number of BP medications used and health insurance status are factors associated with achieving BP control.
Introduction
Hypertension is a worldwide disorder that contributes significantly to morbidity, mortality, and healthcare costs in both developed and developing communities. The prevalence of hypertension in U.S. adults in 2009 was approximately 40%, and the National Health and Nutrition Examination Survey (NHANES) report has shown that this prevalence has remained unchanged during the past 10 years [1][2][3]. The JNC 8 guideline-recommended goal for patients with uncomplicated hypertension is an average systolic blood pressure (SBP) of less than 140 mm Hg and a diastolic blood pressure (DBP) of less than 90 mm Hg for patients less than 60 years of age [4]. For patients with certain co-morbid conditions that increase cardiovascular risk (such as diabetes mellitus and chronic kidney disease), the goal BP is similar at less than 140/90 mm Hg [4]. However, only about half of those with hypertension have their BP controlled despite these recommendations [5].
Randomized controlled trials have convincingly shown that treatment of hypertension reduces the risk of stroke, coronary heart disease, congestive heart failure, and mortality associated with these conditions [4,6]. Despite these, about 30% of patients with hypertension are not being treated pharmacologically, and only about 55% of hypertensives have their blood pressure under control [1]. Furthermore, achievement of guideline-defined treatment goals has been shown to be useful in the assessment of the quality of care [7,8].
Although awareness and treatment of hypertension have improved in recent decades, those who have difficulties accessing health care are less likely to have their BP controlled than those who have no barriers to health care access [9,10]. In addition, awareness of hypertension is higher among those who have adequate health care access because those who have difficulties accessing healthcare may delay necessary health care screenings [11]. No previous study have evaluated the factors associated with BP control when easy access to health care is available to hypertensive patients in the US.
The Nashville General Hospital (NGH), located in the Davidson County area of the state of Tennessee, USA, serves the residents of the county and attends to all patients with or without health insurance. The hospital has programs that provide financial assistance to uninsured patients, which cover access to healthcare and the purchase of medications at little or no cost to the patients. Commonly used blood pressure medications are available to patients for $4 for a 30-day supply at most retail pharmacies. The primary goal of this study was to assess the ability to achieve target blood pressure control among hypertensive patients attending the internal medicine clinic at NGH, given the easy access to health care these patients are afforded. The secondary goal was to evaluate the factors associated with achieving blood pressure control in these patients.
Methods
A retrospective chart review of patients attending the internal medicine clinic at NGH was carried out after appropriate approval had been received from the Institutional Review Board (IRB) of Meharry Medical College (the medical school affiliated with NGH). The inclusion criteria were: (1) age over 18 years, (2) continuous enrollment in the internal medicine continuity clinic for more than 1 year, (3) seen in clinic between January and December of 2007, (4) seen at least twice in the clinic during the 12 months prior to data collection with documentation of blood pressure measurement, and (5) diagnosis of and treatment for hypertension. Patients with end stage renal disease on chronic hemodialysis were excluded. Data from the medical records of these patients were extracted and recorded on standardized data collection forms. Data extracted included patient's age, sex, ethnicity, health insurance status, number of antihypertensive medications, and history of diabetes mellitus and chronic kidney disease.
Blood pressures were obtained using a digital sphygmomanometer with the patient seated and the arm placed in the appropriate position. Hypertension was considered controlled if the blood pressure was at the target goal, i.e., an SBP of less than 140 mm Hg and a DBP of less than 90 mm Hg.
The BP medications used in the Internal Medicine clinic of Nashville General included calcium channel blockers, diuretics, beta blockers, and angiotensin receptor blockers or angiotensin-converting enzyme inhibitors, all of which are available to patients for $4 at most retail pharmacies.
Statistical analysis
Continuous variables are presented as the mean ± standard deviation (SD) for normally distributed variables and as the median (interquartile range) for variables that are not normally distributed. Dichotomous variables are presented as numbers and frequencies. For the primary outcome, the proportion of patients achieving pre-set BP goals was computed. To test the secondary hypothesis, univariate regression analysis was conducted to assess the association of each of the collected covariates (age, sex, ethnicity, health insurance status, the number of antihypertensive medications, a diagnosis of diabetes, and a diagnosis of chronic kidney disease) with blood pressure control, both as a continuous variable and as a dichotomous variable of BP control <140/90 mmHg. Multivariate analysis exploring the effects of these covariates on BP control was conducted. Statistical analysis was conducted using the IBM Statistical Package for the Social Sciences (SPSS) version 21. Graphs were produced using GraphPad Prism software.
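For readers without SPSS, the sketch below reproduces the shape of the multivariate analysis in Python: logistic regression of dichotomous BP control on the collected covariates, with exponentiated coefficients reported as odds ratios. The variable names and input file are hypothetical, not the study dataset.

```python
# Illustrative Python equivalent of the multivariate analysis described
# above: logistic regression of BP control (<140/90 mmHg, coded 0/1) on
# the collected covariates, reporting odds ratios with 95% CIs.
# Variable names and the CSV file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hypertension_clinic.csv")  # placeholder file

model = smf.logit(
    "bp_controlled ~ age + C(sex) + C(ethnicity) + C(insured)"
    " + n_bp_meds + C(diabetes) + C(ckd)",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios; the CI bounds likewise.
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(or_table.round(3))
```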
Demographics
200 patients met the study criteria, but one patient was excluded because the blood pressure was incorrectly documented as a four-digit blood pressure measurement. Of the remaining 199 study patients, 59% were female and the patients' ages ranged from 22 to 98 years, with an average of 57 ± 11 years. The study population was mostly African American (68%) and 32% did not have any health insurance. Only 12% of the study population had chronic kidney disease stage 3 or less and 45% of the population had diabetes mellitus (Table 1). 55% of the study population used at least two blood pressure medications.
Blood pressure control
The mean (95% CI) SBP of the entire study group was 139 (136-141) mmHg, while the mean (95% CI) DBP was 80 (79-82) mmHg. 61% of the study population achieved optimal SBP control <140 mmHg and 82% achieved optimal DBP control <90 mmHg. Overall, 59% of the study population achieved the optimal guideline-directed BP goal of <140/90 mmHg.
Health insurance status and blood pressure control
Health insurance status was associated with SBP control. Patients without health insurance had a higher SBP [mean (95% CI): 141 (137-146) mmHg] compared to patients with health insurance [mean (95% CI): 137 (134-140) mmHg] (P = 0.046, Fig. 1). Health insurance status was also associated with DBP. Patients without health insurance had a significantly higher DBP compared to patients with health insurance [mean (95% CI): 84 (81-88) mmHg vs 78 (76-80) mmHg; P = 0.003]. Overall, patients with health insurance had a 2.2-fold increased odds of achieving optimal BP control of less than 140/90 mmHg compared to patients without health insurance (P = 0.025).
Number of BP medications used and blood pressure control
In addition to the health insurance status of patients, the number of blood pressure medications used was significantly associated with SBP and DBP (all P < 0.003, Fig. 2). Patients taking three or more BP medications had a significantly higher SBP [mean (95% CI): 143 (137-149) mmHg] and DBP [mean (95% CI): 82 (79-85) mmHg] compared to patients taking one BP medication [mean SBP (95% CI): 134 (130-138) mmHg and mean DBP (95% CI): 76 (73-80) mmHg, respectively; all P < 0.003]. Furthermore, patients taking more than three BP medications had a 58% reduced odds of achieving adequate BP control less than 140/90 mmHg compared to patients taking one medication (P = 0.039).
Discussion
The significant finding of this research is that providing easy health care access to patients with hypertension and using blood pressure medications with an average cost of $4 for a 30-day supply achieved BP control similar to the national average. Furthermore, despite the easy access to health care provided to these patients, health insurance status and the number of blood pressure medications were the factors affecting blood pressure control.
Achieving BP control can be done using inexpensive generic or branded medications with comparable results. A third of the patients that attended the Internal Medicine clinic at Nashville General Hospital did not have health insurance and required the use of generic medications with a cost of about $4 for a 30-day supply. These generic medications were deliberately prescribed to patients who could not afford the more expensive alternatives [12]. Additional factors beyond having easy access to their primary care physician or access to the medications prescribed are responsible for the level of BP control achieved. Health insurance status is an indirect measure of a patient's socioeconomic status. Health insurance status is usually dependent on age, level of education, ethnicity and household income [13]. Patients that have health insurance are also more likely to access health care [9][10][11]. However, given that the NGH Internal Medicine clinic provided easy access to all patients irrespective of their health insurance status, it was surprising that health insurance status continued to be a major factor affecting BP control even after removing the effects of other known factors such as ethnicity, age, sex, or co-morbid clinical conditions (chronic kidney disease and diabetes mellitus). This indicates that a patient's socioeconomic status plays a major role in achieving adequate BP control. This may be due to the patient's level of education affecting their understanding of the severity of their illness and their adherence to BP medications. Although we used drugs costing about $4 for a 30-day supply, this may be unaffordable to these patients as they have low household incomes. Further studies to explore the effect of adherence to treatment and understanding of the disease condition on BP control will be needed to confirm our findings.
Ethnicity is a well-established factor in the prevalence of hypertension, the severity of hypertension, and the achievement of BP control, all of which are worse in the African American population [3]. However, our study revealed that when patients have health care access, ethnicity does not play a role in achieving BP control. Even with 68% of our study population being African American, achieving optimal BP control <140/90 mmHg was not different compared to the Caucasian patients that made up 29% of the study population, because everyone had access to health care. Furthermore, BP control is more suboptimal in African American patients with additional co-morbidities like chronic kidney disease and diabetes mellitus than in Caucasian patients [14][15][16][17]. However, our study revealed that ethnicity is not a factor in achieving optimal BP control in these patients with co-morbidities when access to health care and affordable BP medications are provided.
The number of BP medications used by a patient is an indicator of the severity of hypertension. Patients with resistant hypertension require at least three BP medications, including a diuretic, to achieve BP control. Although we do not have data about baseline BP before these patients were started on BP medications, those requiring more BP medications probably had more severe hypertension at baseline. Our study revealed that the severity of hypertension is a factor in achieving BP control. These patients did not necessarily have more co-morbid conditions, as the effects of these were adjusted for, but the severity of hypertension persistently limited optimal BP control. These patients may require additional BP medications and/or other BP medication combinations to achieve better BP control. A longer follow-up in clinic may be needed to make these changes and achieve optimal BP control.
We do not know if the findings of this present study can be extrapolated to other disease conditions like diabetes mellitus or hyperlipidemia. Additional studies in these disease conditions will be needed to test this hypothesis in a clinic setting like ours that affords easy health care access and affordable medications. Furthermore, our study was limited to a one-year follow-up in the clinic. A longer follow-up may have revealed better BP control, as patients requiring more BP medication combinations may achieve better control over time.
Our study has some limitations. We did not have a baseline BP before patients were started on medications. However, patients had to have been compliant with their clinic visits for at least one year prior to enrollment in our study, and the percentage of patients achieving optimal BP control was similar to the national average. Furthermore, we had a limited study population of about 200 patients; however, the level of BP control was similar to the national average, and our study population comprised mostly African Americans, which is a true reflection of the ethnicity affected by the disease burden.
In conclusion, our study revealed that optimal BP control can be achieved using affordable generic BP medications when easy access to health care is provided. Optimal BP control can be achieved without ethnicity or co-morbid conditions like chronic kidney disease or diabetes mellitus playing a role. However, the severity of hypertension and health insurance status may be factors in achieving optimal BP control even when easy access to health care is provided and affordable medications are used.
Conflict of interest
The authors have no conflict of interest to disclose.
Funding
None.

Fig. 1. Health insurance status is associated with achieving systolic blood pressure control. Systolic blood pressures are depicted as mean (95% confidence interval). Fig. 2. The number of blood pressure medications used is associated with achieving systolic blood pressure control. Systolic blood pressures are depicted as mean (95% confidence interval). | 2019-06-07T21:13:22.416Z | 2019-04-11T00:00:00.000 | {
"year": 2019,
"sha1": "57290646cd92d88bc44aa579bd5194861c9e3eff",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijchy.2019.100003",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e832898d9cebf2c33090ece38f53a89333de6aa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248405656 | pes2o/s2orc | v3-fos-license | Band Flattening and Landau Level Merging in Strongly-Correlated Two-Dimensional Electron Systems
We review recent experimental results indicating the band flattening and Landau level merging at the chemical potential in strongly-correlated two-dimensional (2D) electron systems. In the ultra-clean, strongly interacting 2D electron system in SiGe/Si/SiGe quantum wells, the effective electron mass at the Fermi level monotonically increases in the entire range of electron densities, while the energy-averaged mass saturates at low densities. The qualitatively different behavior of the two masses reveals a precursor to the interaction-induced single-particle spectrum flattening at the chemical potential in this electron system, in which case the fermion "condensation" at the Fermi level occurs in a range of momenta, unlike the condensation of bosons. In strong magnetic fields perpendicular to the 2D electron layer, a similar effect of different fillings of quantum levels at the chemical potential -- the merging of the spin- and valley-split Landau levels at the chemical potential -- is observed in Si inversion layers and in a bilayer 2D electron system in GaAs. An indication of merging of the quantum levels of composite fermions with different valley indices is also reported in ultra-clean SiGe/Si/SiGe quantum wells.
I. INTRODUCTION
In a non-interacting fermion system with a continuous spectrum, the occupation probability for a quantum state at fixed chemical potential and temperature is a function of the single-particle energy only [1]. If the temperature tends to zero, the energy interval separating the filled and empty quantum states also tends to zero. For free particles, there appears a Fermi surface in momentum space with dimensionality d − 1, where d is the dimensionality of the fermions.
In general, this reasoning is not true for interacting fermions [2][3][4][5][6][7][8]. In this case the single-particle energy depends on the electron distribution, and the occupation numbers of quantum states at the chemical potential can be different, falling within the range between zero and one. A topological phase transition has been predicted at T = 0 in strongly correlated Fermi systems that is related to the emergence of a flat portion of the single-particle spectrum at the chemical potential as the strength of the fermion-fermion interaction is increased (the top inset of Fig. 1). This transition is associated with the band flattening, or swelling of the Fermi surface in momentum space, which is preceded by an increasing quasiparticle effective mass m_F at the Fermi level that diverges at the quantum critical point. The creation and investigation of flat-band materials is currently a forefront area of modern physics [9][10][11][12]. The interest is ignited, in particular, by the fact that, due to the anomalous density of states, the flattening of the band may be important for the realization of room-temperature superconductivity. The appearance of a flat band is theoretically predicted [13][14][15] in a number of systems, including heavy fermions, high-temperature superconducting materials, 3He, and two-dimensional electron systems.
The role of electron-electron interactions in the behavior of two-dimensional electron systems increases as the electron density is decreased. The interaction strength is characterized by the Wigner-Seitz radius, r_s = 1/((πn_s)^{1/2} a_B) (here n_s is the electron density and a_B is the effective Bohr radius in the semiconductor), which in the single-valley case is equal to the ratio of the Coulomb and kinetic energies.
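As a quick numerical illustration of this definition, the sketch below evaluates r_s for a few densities, building the effective Bohr radius from assumed textbook Si parameters (dielectric constant κ = 11.7 and band mass 0.19 m_e); with these assumptions it reproduces r_s ≈ 12 near n_s ≈ 0.2 × 10^15 m^−2, the value quoted later in the text.

```python
import math

hbar, m_e, e, eps0 = 1.0546e-34, 9.109e-31, 1.602e-19, 8.854e-12
kappa, m_star = 11.7, 0.19 * m_e   # assumed Si dielectric constant and band mass

# Effective Bohr radius a_B = 4*pi*eps0*kappa*hbar^2 / (m* e^2), SI units
a_B = 4 * math.pi * eps0 * kappa * hbar**2 / (m_star * e**2)

for n_s in (2e15, 0.7e15, 0.2e15):   # electron densities in m^-2
    r_s = 1.0 / (math.sqrt(math.pi * n_s) * a_B)
    print(f"n_s = {n_s:.1e} m^-2:  r_s ≈ {r_s:.1f}")
```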
It has been experimentally shown that with decreasing electron density (or increasing interaction strength) in ultraclean SiGe/Si/SiGe quantum wells, the mass at the Fermi level monotonically increases in the entire range of electron densities [16]. In contrast, the energy-averaged mass saturates at low densities. The qualitatively different behavior of the two masses reveals a precursor to the interaction-induced single-particle spectrum flattening at the Fermi level in this electron system.
For an interacting fermion system placed in strong perpendicular magnetic fields, one expects a similar effect of different fillings of quantum levels at the chemical potential. Given that the energies of two quantum levels intersect each other when varying an external parameter, these can be the same as the chemical potential over a range of parameter values, i.e., the levels can merge at the chemical potential over this range [17]. The level merging implies that there is an attraction between two partially-filled quantum levels. The merging interval is determined by the possibility of redistributing quasiparticles between the levels. The effect of merging is in contrast to a simple crossing of quantum levels at some electron density/magnetic field value. Experimentally, such merging of Landau levels has been detected in Si metal-oxide-semiconductor field-effect transistors (MOSFETs) [18] and GaAs-based bilayer structures [19]. Furthermore, in ultra-clean SiGe/Si/SiGe quantum wells, an indication of merging of the composite fermion levels with different valley indices has been reported [20]. Below we review the pertaining recent experimental data.
II. BAND FLATTENING AT THE FERMI LEVEL
We start with an indication of band flattening at the Fermi level reported in Ref. [16]. Raw experimental data obtained in strongly correlated 2D electron systems can be divided into two groups: (i) data describing the electron system as a whole, like the magnetic field required to fully polarize electron spins, the thermodynamic density of states, or magnetization of the electron system, and (ii) data related solely to the electrons at the Fermi level, like the amplitude of the Shubnikov-de Haas oscillations yielding the effective mass m F and Landé g-factor g F at the Fermi level. As a rule, the data in the first group are interpreted using the quasiparticle language in which the energy-averaged values of effective mass, m, and Landé g-factor, g, are used. To determine the values, the formulas that hold for the case of non-interacting electrons are employed. Although this approach is ideologically incorrect, the results for m and g often turn out to be the same as the results for m F and g F . Particularly, in a 2D electron system in Si MOSFETs, simultaneous increase of the energy-averaged effective mass and that at the Fermi level was reported in earlier publications [21][22][23][24][25][26][27]; it was found that the effective mass is strongly enhanced at low densities while the g-factor stays close to its value in bulk silicon, which did not confirm the occurrence of the Stoner instability in a 2D electron system in silicon. The mass renormalization is independent of disorder, being determined by electron-electron interactions only [28]. The strongly enhanced effective mass in Si MOSFETs was interpreted in favor of the formation of the Wigner crystal or a preceding intermediate phase whose origin and existence can depend on the level of disorder in the electron system.
The experimental results reported in this section were obtained in ultra-low-disordered (001) SiGe/Si/SiGe quantum wells described in detail in Refs. [29,30]. The maximum electron mobility in these samples reached 240 m 2 /Vs, which is the highest mobility reported for this electron system and is some two orders of magnitude higher than the maximum electron mobility in the least disordered Si MOSFETs. The parallel-field magnetoresistance (i.e., magnetoresistance measured in the configuration where the magnetic field is parallel to the 2D plane) allows one to determine the field of the full spin polarization, B c , that corresponds to a distinct "knee" of the experimental dependences followed by the saturation of the resistance [31,32] (see the bottom inset to Fig. 2). The magnetic field where the spin polarization becomes complete is plotted as a function of electron density in Fig. 2 for two samples. Over the electron density range 0.7 × 10 15 m −2 < n s < 2 × 10 15 m −2 , the data are described well by a linear dependence that extrapolates to zero at n s ≈ 0.14 × 10 15 m −2 (dashed black line). However, at lower electron densities down to n s ≈ 0.2 × 10 15 m −2 (up to r s ≈ 12), the experimental dependence B c (n s ) deviates from the straight line and linearly extrapolates to the origin.
The solid red line in Fig. 2 shows the polarization field B c (n s ) calculated using the quantum Monte Carlo method [33]. The experimental results are in good agreement with the theoretical calculations for the clean limit k F l 1 (here k F is the Fermi wavevector and l is the mean free path), assuming that the Landé g-factor, renormalized by electron-electron interactions, is equal to 2.4. Although in Ref. [33] Landé g-factor was equal to 2, the reason for the 20% discrepancy between the theory and experiment may be due to the finite size of the electron wave function in the direction perpendicular to the interface. Besides, the product k F l decreases with decreasing electron density, which leads to a downward deviation in the theoretical dependence, as shown by the dotted red line in the upper inset to Fig. 2.
To check whether or not the residual disorder affects the results for the magnetic field of complete spin polarization, we compare our data with those previously obtained on Si/SiGe samples with electron mobility an order of magnitude lower than that in our samples [34]. At high electron densities, the dependence B c (n s ) in Ref. [34] is also linear and extrapolates to zero at a finite density. Furthermore, the slope of the dependence is equal to 6 × 10 −15 T·m 2 and is close to the slope 5.4 × 10 −15 T·m 2 observed in our experiment. However, the offset of approximately 0.3 × 10 15 m −2 in Ref. [34] is appreciably higher than that in our case. Therefore, the behavior of the polarization field B c is affected by the disorder potential in agreement with Refs. [33,35]. A good agreement between our experimental data for B c and the calculations for the clean limit [33] provides evidence that the electron properties of our samples are only weakly sensitive to the residual disorder, and the clean limit has been reached in our samples.
The product g_F m that characterizes the whole 2D electron system can be determined in the clean limit from the equality of the Zeeman splitting and the Fermi energy of a completely spin-polarized electron system, g_F m µ_B B_c = 2πℏ²n_s/g_v, where g_v = 2 is the valley degeneracy and µ_B is the Bohr magneton.
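A short sketch of extracting g_F m from the full-polarization condition above; the (n_s, B_c) point is constructed from the slope and offset of the linear fit quoted earlier in the text and is purely illustrative.

```python
import math

hbar, m_e, mu_B = 1.0546e-34, 9.109e-31, 9.274e-24
g_v = 2.0   # valley degeneracy

def gm_product(n_s, B_c):
    """g_F * m from  g_F m mu_B B_c = 2*pi*hbar^2 n_s / g_v  (clean limit)."""
    return 2 * math.pi * hbar**2 * n_s / (g_v * mu_B * B_c)

# Illustrative point in the linear regime: slope ~5.4e-15 T m^2, offset ~0.14e15 m^-2
n_s = 2e15
B_c = 5.4e-15 * (n_s - 0.14e15)
print(f"g_F m ≈ {gm_product(n_s, B_c) / m_e:.2f} m_e")
```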
FIG. 1: Product of the Landé factor and effective electron mass in SiGe/Si/SiGe quantum wells as a function of electron density determined by measurements of the field of full spin polarization (squares) and Shubnikov-de Haas oscillations (circles) at T ≈ 30 mK. The empty and filled symbols correspond to two samples. The experimental uncertainty corresponds to the data dispersion and is about 2% for the squares and about 4% for the circles (g_0 = 2 and m_0 = 0.19 m_e are the values for noninteracting electrons). The top inset shows schematically the single-particle spectrum of the electron system in a state preceding the band flattening at the Fermi level (solid black line). The dashed violet line corresponds to an ordinary parabolic spectrum. The occupied electron states at T = 0 are indicated by the shaded area. Bottom inset: the effective mass m_F versus electron density determined by analysis of the temperature dependence of the amplitude of Shubnikov-de Haas oscillations, similar to Ref. [38]. The dashed line is a guide to the eye. From Ref. [16].

On the other hand, the Landé g-factor g_F and the effective mass m_F at the Fermi level can be determined by the analysis of the Shubnikov-de Haas oscillations in relatively weak magnetic fields, as was done in Ref. [16], by fitting them with the Lifshitz-Kosevich formula

δρ_xx/ρ_0 = Σ_i A_i Z_T^i Z_D^i Z_Z^i Z_v^i cos[πi(ν − 1)],   (2)

where ν = n_s hc/eB is the filling factor, Z_T^i = (2π²ik_BT/ℏω_c)/sinh(2π²ik_BT/ℏω_c) is the temperature damping factor, Z_D^i = exp(−2π²ik_BT_D/ℏω_c) is the Dingle factor, and Z_Z^i = cos(πi∆_Z/ℏω_c) and Z_v^i = cos(πi∆_v/ℏω_c) are the Zeeman and valley factors (note that ∆_Z/ℏω_c = g_F m_F/2m_e); here T_D is the Dingle temperature, T is the temperature, m_e is the free electron mass, ℏω_c is the cyclotron splitting, ∆_Z is the Zeeman splitting, and ∆_v is the valley splitting. It is clear from Eq. (2) that as long as one sets Z_v^i = 1 in the range of magnetic fields studied, the fitting parameters are T_D m_F, m_F, and g_F m_F [36]. The values T_D m_F and m_F are obtained in the temperature range where the spin splitting is insignificant. Being weakly sensitive to these two fitting parameters, the shape of the fits at the lowest temperatures turns out to be very sensitive to the product g_F m_F. The quality of the fits is demonstrated in Fig. 3. The magnetoresistance δρ_xx = ρ_xx − ρ_0 normalized by ρ_0 (where ρ_0 is the monotonic change of the dissipative resistivity with magnetic field) is described well using Eq. (2).

FIG. 2: Dependence of the field of complete spin polarization, B_c, on electron density at a temperature of 30 mK for two samples (dots and squares). The dashed black line is a linear fit to the high-density data which extrapolates to zero at a density n_c. The solid red line corresponds to the calculation [29] for the clean limit. Top inset: the low density region of the main figure on an expanded scale. Also shown by the dotted red line is the calculation [29] taking into account the electron scattering. Bottom inset: the parallel-field magnetoresistance at a temperature of 30 mK at different electron densities indicated in units of 10^15 m^−2. The polarization field B_c determined by the crossing of the tangents is marked by arrows. From Ref. [16].

The main result shown in Fig. 1 is that the products of the average g_F m and g_F m_F at the Fermi level behave similarly at high electron densities, where electron-electron interactions are relatively weak, but differ at low densities, where the interactions become especially strong [37]. The product g_F m_F monotonically increases as the electron density is decreased in the entire range of electron densities, while the product g_F m saturates at low n_s. We emphasize that it is the qualitative difference in the behaviors of the two sets of data that matters, rather than a comparison of the absolute values. Taking into account the negligibility of the exchange effects in the 2D electron system in silicon [21,22], this difference can only be attributed to the different behaviors of the two effective masses. Their qualitatively different behavior indicates the interaction-induced band flattening at the Fermi level in this electron system. To add confidence in our results and conclusions, we show in the bottom inset of Fig. 1 the data for the effective mass m_F determined by the analysis of the temperature dependence of the amplitude of Shubnikov-de Haas oscillations, similar to Ref. [38]. The similar behavior of m_F and g_F m_F with electron density allows one to exclude any possible influence of the g-factor on the behavior of the product of the effective mass and g-factor, which is consistent with the previously obtained results for the 2D electron system in silicon.

The Dingle temperature for the two spin subbands is found to be different in our samples, similar to the results for Si MOSFETs of Ref. [27]. Although the effect is appreciably weaker in our case, we have to introduce another fitting parameter γ for T_D^{u,d} = T_D(1 ± γ). The difference between the Dingle temperatures for the two spin subbands does not exceed 6%; the Dingle temperature for the energetically favorable spin direction is smaller over the range of electron densities 0.6 × 10^15 m^−2 < n_s < 2 × 10^15 m^−2, whereas at lower densities the quantity γ changes sign.
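The following sketch evaluates the first-harmonic damping factors of Eq. (2) numerically; the values of T_D, m_F, and g_F m_F used here are illustrative assumptions, not the fitted values of Ref. [16].

```python
import numpy as np

hbar, m_e, k_B, e = 1.0546e-34, 9.109e-31, 1.381e-23, 1.602e-19

def lk_amplitude(B, T, T_D, m_F, gm_F, i=1):
    """Thermal, Dingle and Zeeman damping factors of the i-th SdH harmonic."""
    w_c = e * B / m_F                                          # cyclotron frequency
    x_T = 2 * np.pi**2 * i * k_B * T / (hbar * w_c)
    Z_T = x_T / np.sinh(x_T)                                   # depends on T * m_F
    Z_D = np.exp(-2 * np.pi**2 * i * k_B * T_D / (hbar * w_c)) # depends on T_D * m_F
    Z_Z = np.cos(np.pi * i * gm_F / (2 * m_e))                 # depends on g_F * m_F
    return Z_T * Z_D * Z_Z

B = np.linspace(0.3, 1.5, 5)                                   # tesla
print(lk_amplitude(B, T=0.03, T_D=0.3, m_F=0.4 * m_e, gm_F=1.2 * m_e))
```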
The experimental results are naturally interpreted within the concept of the fermion condensation [2,4,8] that occurs at the Fermi level in a range of momenta, unlike the condensation of bosons. With increasing strength of electron-electron interactions, the single-particle spectrum flattens in a region ∆p near the Fermi momentum p_F (top inset to Fig. 1). At relatively high electron densities n_s > 0.7 × 10^15 m^−2, this effect is not important since the single-particle spectrum does not change noticeably in the interval ∆p, and the behaviors of the energy-averaged effective mass and that at the Fermi level are practically the same. Decreasing the electron density in the range n_s < 0.7 × 10^15 m^−2 gives rise to the flattening of the spectrum so that the effective mass at the Fermi level, m_F = p_F/v_F, continues to increase (here v_F is the Fermi velocity). In contrast, the energy-averaged effective mass does not, being not particularly sensitive to this flattening. In the critical region, where the effective mass at the Fermi level tends to diverge, m_F is expected to be temperature dependent. A weak decrease of the value g_F m_F with temperature is indeed observed at the lowest-density point in Fig. 1. In the critical region, the increase of m_F is restricted by the limiting value determined by temperature: m_F < p_F ∆p/4k_B T. In our experiments, the increase of m_F reaches a factor of about two at n_s = 0.3 × 10^15 m^−2 and T ≈ 30 mK, which allows one to estimate the ratio ∆p/p_F ∼ 0.06. It is the smallness of the interval ∆p that provides good agreement between the calculation [33] and our experiment.
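The quoted estimate ∆p/p_F ∼ 0.06 can be checked numerically from the limiting relation m_F < p_F∆p/4k_BT; the Fermi-level mass used below (0.4 m_e, roughly a twofold enhancement of the band mass) is an assumption for illustration.

```python
import math

hbar, m_e, k_B = 1.0546e-34, 9.109e-31, 1.381e-23
n_s, T = 0.3e15, 0.03              # density (m^-2) and temperature (K)
m_F = 0.4 * m_e                    # assumed Fermi-level mass (illustrative)

p_F = hbar * math.sqrt(math.pi * n_s)   # p_F = hbar*sqrt(pi*n_s) for g_s = g_v = 2
dp_over_pF = 4 * k_B * T * m_F / p_F**2
print(f"Delta p / p_F ≈ {dp_over_pF:.2f}")
```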
It is worth noting that the effective mass at the Fermi level tends to diverge at a density n m higher than the critical electron density n c of the metal-insulator transition, revealing the qualitative difference between the ultralow-disorder SiGe/Si/SiGe quantum wells and the least-disordered Si MOSFETs where the opposite relation n c ≥ n m is found [39]. This indicates that these two densities are not directly related, and the fermion condensation and metal-insulator transition are two different transitions.
III. MERGING OF LANDAU LEVELS IN A STRONGLY-INTERACTING TWO-DIMENSIONAL ELECTRON SYSTEM
Another example of a nontrivial manifestation of fermion interactions in strongly correlated Fermi liquids is the merging of quantum levels in a Fermi system with a discrete spectrum, in which case the fillings of the two quantum levels at the chemical potential are different [18].
Application of the perpendicular magnetic field B on a homogeneous 2D electron system creates two subsystems of Landau levels numbered i and distinguished by ± projections of the electron spin on the field direction. The energy levels ε_i^± in each set are spaced by the cyclotron splitting ℏω_c = ℏeB/m*c, and the two sets of the Landau levels are shifted with respect to each other by the spin splitting ∆_Z = gµ_B B, where m* and g are the values of the mass and Landé g-factor renormalized by electron interactions (for simplicity, the valley degeneracy is so far neglected). The Landau levels with opposite spin directions should intersect with changing electron density, as caused by the strong dependence of the effective mass on n_s, provided the g-factor depends weakly on n_s. In particular, at high electron densities, the cyclotron splitting usually exceeds the spin splitting, whereas at low densities, the opposite case ℏω_c < ∆_Z should occur due to the sharply increasing mass. Both the thermodynamic and kinetic properties of the electron system are determined by the position of the chemical potential relative to the quantum levels, which is in turn determined by the magnetic field and electron density. The filling factor is equal to ν = n_s/n_0, where n_0 = eB/hc is the level degeneracy. When ν is fractional, the chemical potential is pinned to the partially filled quantum level. The probability of finding an electron at the chemical potential is given by the fractional part of the filling factor and can be varied between zero and one. At integer filling factor, there is a jump of the chemical potential. In an experiment, the jump manifests itself as a minimum in the longitudinal electrical resistance. The resistance minima in the (B, n_s) plane correspond to a Landau level fan chart.

Provided that the external magnetic field is fixed and many quantum levels are occupied, the variation of the electron density in a quantum level is small compared to n_s. The variation of the energy ε_λ is evaluated using the Landau relation δε_λ = Σ_σ Γ_λσ δn_σ, where Γ_λσ is the electron-electron interaction amplitude that is a phenomenological ingredient of the Fermi liquid theory [1]. Selecting the magnetic field at which the difference between the neighboring Landau levels ε_i^+ and ε_{i+1}^− zeroes at the filling factor ν = n_s hc/eB = 2i + 2, one starts from the higher density where both levels (i + 1)^− and i^+ are completely filled at ν = N = 2i + 3, the difference D(N) being negative. Removing the electrons from the level (i + 1)^− implies that the electron density decreases and D increases. Depending on the interaction amplitudes, either the level crossing occurs at ν = 2i + 2, i.e., the level i^+ becomes empty and the level (i + 1)^− is completely filled, or the single-particle levels attract each other and merge at the chemical potential µ, as described by the merging equation ε_{i+1}^− = ε_i^+ = µ. Both levels exhibit partial occupation with fractions of empty states 0 < f_i < 1 and 0 < f_{i+1} < 1 that obey the normalization condition f_i + f_{i+1} = f = N − ν. The merging starts when the empty states appear in the level ε_i^+ and ends when this level is completely emptied. This corresponds to the increase of the fraction of empty states f in the range between f = 1 (or ν = 2i + 2) and f = min(1 + Γ(i)/(Γ_{i+1,i+1}^{−−} − Γ_{i,i+1}^{+−}), 2). Outside the merging region, the conventional Landau level diagram is realized. Note that the gap between the neighboring Landau levels ε_i^+ and ε_{i+1}^− proves to be invisible in transport and thermodynamic experiments. The upper boundary of the merging region n_m(B) can be written out explicitly [18].

The electron system in (100) Si MOSFETs is characterized by the presence of two valleys in the spectrum so that each energy level ε_i^± is split into two levels, as shown schematically in Fig. 4. One can easily see that the valley splitting ∆_v promotes the merging of quantum levels: the bigger the valley splitting, the higher the electron density at which the levels (i + 1)^− and i^+ with different valley indices should merge at the chemical potential at filling factor ν = 4i + 4. The upper boundary of the merging region n_m(B) is then determined by a relation that differs from that of the two-level case above by the presence of the valley splitting. Since the electron density distributions corresponding to the two valleys are spaced by a distance α in the direction perpendicular to the Si-SiO_2 interface, the intervalley charge transfer creates an incremental electric field leading to Γ(i) = 4πe²α/κ, where κ is the dielectric constant. Although the distance α ∼ 0.4 Å is small compared to the thickness of the 2D electron system, which is about 50 Å at densities n_s ≈ 1 × 10^11 cm^−2, the estimated interaction energy n_0Γ(i) ∼ 0.02 meV at B ≈ 1 T is comparable with the valley splitting ∆_v ≈ 0.06 meV; the value of ∆_v is calculated using the known formula ∆_v ≈ 0.015(n_s + 32n_depl/11) meV, where n_depl ≈ 1 × 10^11 cm^−2 is the depletion layer density and the densities are in units of 10^11 cm^−2 [18]. Since the strength of the merging effect is determined by Γ(i), the appreciable interaction energy should lead to a wide merging region at fixed filling factor ν = 4i + 4.

In the high-density limit, where the effects of electron-electron interactions are negligible, the effective mass and g-factor are equal to m_0 = 0.19 m_e and g_0 = 2 so that the cyclotron splitting significantly exceeds the spin splitting. At low electron densities, where the interaction effects are strong, the effective mass m*(n_s) is found to diverge as m_0/m* ≃ (n_s − n_c)/n_s at the quantum critical point close to the metal-insulator transition which occurs at n_c ≃ 8 × 10^10 cm^−2, while the g-factor stays close to g_0, being equal to g ≃ 1.4 g_0 [21,22,24,25]. The Landau level fan diagram for this electron system in perpendicular magnetic fields is represented in Fig. 5.
The quantum oscillation minima at filling factor ν = 4i + 4 disappear below some electron density n * depending on ν, while the minima at ν = 4i + 2 persist down to appreciably lower densities. Although this behavior is consistent with the sharp increase of the effective mass with decreasing n s , the dependence of the density n * on the filling factor (or B) turns out to be anomalously strong and lacks explanation. Particularly, this cannot be accounted for by the impurity broadening of quantum levels in terms of ω c τ ∼ 1 (where τ is the elastic scattering time) in which case the drop of mobility eτ /m * at low electron densities is controlled by the increasing mass [24].
The expected upper boundary of the merging region n m (B), shown by the solid blue line in Fig. 5, has been determined in Ref. [18]. The calculated boundary is in agreement with the experimental density n * (B) at which the oscillation minima at ν = 4i + 4 vanish. This fact gives evidence for the level merging in a 2D electron system in silicon.
The description of the high-field data n * (B) improves within the merging picture if one takes into account nonlinear (cubic) corrections to the spectrum at the Fermi surface near the quantum critical point that naturally lead to a decrease of the effective mass with the magnetic field. The corrected dependence n m (B) is shown by the dotted violet line in Fig. 5; for more on this, see Ref. [18].
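A rough numerical sketch of the crossing condition that underlies the merging boundary, namely the cyclotron splitting equal to the sum of the spin and valley splittings at fixed ν = 4i + 4, using the parameter values quoted above (m_0 = 0.19 m_e, g ≈ 1.4 g_0, n_c ≈ 8 × 10^10 cm^−2, and the ∆_v formula). This estimates only the simple crossing density; it is not the full merging-region calculation of Ref. [18].

```python
import numpy as np

hbar, m_e, e, mu_B, h = 1.0546e-34, 9.109e-31, 1.602e-19, 9.274e-24, 6.626e-34
m_0, g, n_c = 0.19 * m_e, 1.4 * 2.0, 8e14       # n_c = 8e10 cm^-2 in m^-2

def splittings(n_s, nu):
    """Cyclotron splitting vs spin + valley splitting at fixed filling factor."""
    B = n_s * h / (e * nu)                       # from nu = n_s h / (e B)
    m_star = m_0 * n_s / (n_s - n_c)             # m_0/m* ~ (n_s - n_c)/n_s
    cyclotron = hbar * e * B / m_star
    n11 = n_s / 1e15                             # density in units of 1e11 cm^-2
    valley = 0.015 * (n11 + 32 * 1.0 / 11) * 1.602e-22   # meV -> J, n_depl = 1e11 cm^-2
    return cyclotron, g * mu_B * B + valley

for nu in (8, 12, 16):                           # nu = 4i + 4
    best, n_star = 1e9, None
    for n in np.linspace(1.05 * n_c, 4e15, 4000):
        c, sv = splittings(n, nu)
        if abs(c - sv) < best:
            best, n_star = abs(c - sv), n
    print(f"nu = {nu}: cyclotron = spin + valley near n* ≈ {n_star:.2e} m^-2")
```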
IV. INTERACTION-INDUCED MERGING OF LANDAU LEVELS IN AN ELECTRON SYSTEM OF DOUBLE QUANTUM WELLS
The spectrum of a two-dimensional electron system subjected to a perpendicular magnetic field consists of two equidistant ladders of quantum levels for the spin up and down directions, as considered in the preceding section. If the magnetic field is tilted by an angle β, the spacing between the quantum levels in each of the spin ladders is equal to ℏω_c = ℏeB cos(β)/m*c, and the shift between the ladders equals gµ_B B. Increasing the tilt angle leads to the crossing of the quantum levels of the two ladders. The crossing happens for the first time at an angle β_1 that satisfies the condition cos(β_1) = gm*/2m_e. At β = β_1, the chemical potential jumps at even filling factors, and the corresponding fan chart lines should disappear. If one takes into account the interaction between the electrons of neighboring quantum levels and increases the tilt angle in the vicinity of β_1, then, naively, the quantum level that was filled before the crossing should become empty with increasing β. However, suppose the single-particle energy of electrons on the emptying level decreases due to the electron interaction. In that case, both levels remain pinned to the chemical potential over a wide range of angles ∆β_1 that is determined by the interaction strength. The probability of finding an electron at the chemical potential is different for opposite spin orientations, depending on the external parameter, the tilt angle. Such a behavior corresponds to the merging of quantum levels.
In the above hypothetical consideration, the crossing or merging of quantum levels is controlled by the tilt angle of the magnetic field. In the experiments on a strongly-interacting 2D electron system in (100) Si MOSFETs, the disappearance of the longitudinal resistance minima is analyzed when changing both the perpendicular magnetic field and electron density at fixed filling factor ν = 4(i + 1), where i is an integer. In this case, the level merging occurs near the quantum critical point, as controlled by the effective mass depending on electron density [18]. One might think that the level merging is a precursor of the Fermi surface swelling. In fact, the two effects are not necessarily related to each other. Below it is demonstrated that the effect of level merging occurs in a bilayer 2D electron system with a tunnel barrier between the electron layers [19]. Note that the effective mass enhancement is insignificant in this case.
FIG. 6: Landau level fan chart for the AlGaAs double quantum well studied in Ref. [19]. Positions of the longitudinal resistance minima in the (B, V_g) plane are marked by the dots. The filling factor ν for the double layer electron system as well as the filling factor ν_1 (ν_2) for the back (front) layer are indicated. Over the shaded areas, the merging of quantum levels in perpendicular magnetic fields is impossible. In the regions marked by the ovals, no resistance minima are observed in a perpendicular magnetic field, whereas these appear in a tilted magnetic field. From Ref. [19].
The sample used in this section is a parabolic quantum well with a narrow tunnel barrier grown on a GaAs substrate (for a detailed description, see Ref. [19]). Applying a voltage V g between the gate and the contact to the quantum well makes it possible to tune the electron density. The electrons appear in the back part of the quantum well when the gate voltage is above V th1 ≈ −0.7 V and occupy one subband up to V g = V th2 ≈ −0.3 V (Fig. 6). At V g > V th2 , the electrons appear in the front part of the well and fill the second subband up to the balance point V g = V balance ≈ 0.
The authors focus on the range of gate voltages between V th2 and V balance where the electrons in perpendicular magnetic fields occupy the two quantum ladders. Positions of the quantum levels are determined by the magnetic field and gate voltage. Over the shaded areas in Fig. 6, the gaps in the single-particle spectrum and the chemical potential jumps are protected by quantum effects [40]. In the remaining areas, the merging of quantum levels is, in principle, possible.
Below, the filling factor ν = 3 is considered. At V_g = V_th2, the magnetic field is equal to B_ν = n_1(V_th2)hc/3e, where n_1(V_th2) is the electron density in the back layer. The energy ε_0 − gµ_B B_ν/2 of the spin-up level of the front layer ladder is the same as the energy ε_1 − gµ_B B_ν/2 of the spin-up level of the back layer ladder (Fig. 7(a)). Since far from the balance point the electron density in the back layer remains practically unchanged with increasing V_g above V_th2 (see, e.g., Fig. 6(b) of Ref. [41]), the electron density in the front layer in a magnetic field B = B_ν + ∆B along the dashed line at ν = 3 restricted by the oval in Fig. 6 is equal to n_2 ≃ ∆n = 3e∆B/hc. To balance the change in the cyclotron energy e∆B/m*c and have both levels pinned to the chemical potential µ, it is necessary to transfer a small amount of electrons between the levels, which gives rise to a shift of the single-particle levels determined by the interaction amplitudes Γ_j^λσ, where the index b (f) refers to the back (front) layer and j = 0, 1 is the Landau level number. Both levels are pinned to the chemical potential under the condition δε_0 − δε_1 = e∆B/m*c, which yields the transferred density in terms of a combination Γ of the amplitudes Γ_j^λσ; in the parallel-plate-capacitor approximation, Γ is expressed through the distance a between the weight centers of the electron density distributions in the z direction in the two subbands. The level merging holds for the filling factor ν_2 = (n_2 + δn_2)hc/eB < 1.
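Neglecting the interaction-induced correction δn_2, the merging condition ν_2 < 1 reduces to 3∆B/(B_ν + ∆B) < 1, i.e., ∆B < B_ν/2. The sketch below checks this bound; the value of B_ν is an illustrative assumption.

```python
def nu2(B_nu, dB):
    """Front-layer filling factor at total filling nu = 3, neglecting delta n_2."""
    # n_2 = 3 e dB / (h c), so nu_2 = n_2 h c / (e B) = 3 dB / (B_nu + dB)
    return 3 * dB / (B_nu + dB)

B_nu = 4.0   # illustrative field (tesla) at nu = 3 and V_g = V_th2
for dB in (0.5, 1.0, 2.0, 2.5):
    v = nu2(B_nu, dB)
    print(f"dB = {dB} T -> nu_2 ≈ {v:.2f}", "(merging)" if v < 1 else "(merging ended)")
```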
The authors stress that in the wide range of magnetic fields at fixed filling factor ν, the probability of finding an electron with energy equal to the chemical potential is different for the two merged levels, as shown in Fig. 7(b).
Although the case of filling factor ν = 3 has been considered above for simplicity, the same arguments are also valid for higher filling factors.
The occurrence of the merging of quantum levels in the experiment is confirmed using tilted magnetic fields. When the magnetic field is tilted, magnetoresistance minima and chemical potential jumps arise [42], particularly along the dashed lines at ν = 3 and ν = 4 indicated by the ovals in Fig. 6. The appearance of the chemical potential jumps in the double layer electron system in tilted magnetic fields signals that the quantum levels are narrow enough.
As has been mentioned above, the chemical potential jumps can be protected by quantum effects. In general, a transfer of electrons between the quantum levels of different subbands leads to mixing the wave functions of the subbands and opening an energy gap if the non-diagonal matrix elements are not equal to zero [40]. This is realized over the shaded areas in Fig. 6. In contrast, in the merging regions at ν = 3 and ν = 4 indicated by the ovals in Fig. 6, the non-diagonal matrix elements in perpendicular magnetic fields are equal to zero because of the orthogonality of the in-plane part of the wave functions in the bilayer electron system. Tilting the magnetic field breaks the orthogonality of the wave functions of the neighboring quantum levels, and the energy gap emerges [42,43].
V. MERGING OF THE QUANTUM LEVELS OF COMPOSITE FERMIONS
Finally, we consider an indication of merging of the quantum levels of composite fermions with different valley indices [20]. The concept of composite fermions [44][45][46][47][48] successfully describes the fractional quantum Hall effect with odd denominators by reducing it to the ordinary integer quantum Hall effect for composite particles. In the simplest case, the composite fermion consists of an electron and two magnetic flux quanta and moves in an effective magnetic field B* given by the difference between the external magnetic field B and the field corresponding to the electron filling factor ν = 1/2. The filling factor for composite fermions, p, is connected to ν according to the expression ν = p/(2p ± 1). The fractional energy gap, which is predicted to be determined by the Coulomb interaction in the form e²/κl_B, corresponds to the cyclotron energy of composite fermions ℏω*_c = ℏeB*/m_CF c, where l_B = (ℏc/eB)^{1/2} is the magnetic length and m_CF is the effective composite fermion mass. The electron-electron interactions enter the theory [44][45][46][47][48] implicitly because a mean-field approximation is employed, assuming that the electron density fluctuations are small. The theory is confirmed by the experimental observation of a scale corresponding to the Fermi momentum of composite fermions in zero effective magnetic field at ν = 1/2.
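The bookkeeping between p, ν, B, and B* can be illustrated with a few lines of code; the density value below is an assumption, and h/e is the flux quantum.

```python
from fractions import Fraction

h_over_e = 6.626e-34 / 1.602e-19      # flux quantum h/e (T m^2)

def electron_filling(p, sign=+1):
    """nu = p / (2p ± 1) for composite fermions with two attached flux quanta."""
    return Fraction(p, 2 * p + sign)

def effective_field(n_s, B):
    """B* = B - B_{1/2}, with B_{1/2} = 2 n_s h / e at nu = 1/2."""
    return B - 2 * n_s * h_over_e

n_s = 1e15                            # illustrative electron density, m^-2
for p in (1, 2, 3, 4, 6):
    for sign in (+1, -1):
        nu = electron_filling(p, sign)
        B = n_s * h_over_e / float(nu)        # field at which nu is realized
        print(f"p = {p}: nu = {nu},  B = {B:.2f} T,  B* = {effective_field(n_s, B):+.2f} T")
```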
Samples studied in this section are ultraclean bivalley (001) SiGe/Si/SiGe quantum wells similar to those described in Refs. [29,30]. The longitudinal resistivity ρ_xx as a function of the inverse filling factor is shown for different electron densities in Fig. 8(a). The resistance minima are seen at composite fermion quantum numbers p = 1, 2, 3, 4, and 6 near ν = 1/2 in positive and negative effective fields B*, revealing the high quality of the sample. The high quality of the quantum well is also confirmed by the presence of the ν = 4/5 and ν = 4/11 fractions [49], corresponding to p = 4/3, which can be described in terms of the second generation of composite fermions. The minima at p = 3 disappear below a certain electron density, although the surrounding minima at p = 2 and p = 4 persist to significantly lower densities. Clearly, the disappearance of the minima at p = 3 at low electron densities cannot be explained by level broadening. On the other hand, this finding is strikingly similar to the effect of the disappearance of the cyclotron minima in the magnetoresistance at low electron densities in Si MOSFETs while the spin minima survive down to appreciably lower densities [50], which signifies that the cyclotron splitting becomes equal to the sum of the spin and valley splittings, and the corresponding valley sublevels merge [18].
Measurements in tilted magnetic fields allow one to distinguish between the spin and valley origin of the effect. The magnetoresistance as a function of the inverse filling factor is shown for the tilt angle Θ ≈ 61 • at different electron densities in Fig. 8(b). Here, the authors focus on the resistance minimum at ν = 3/5. The behavior observed for the ν = 3/5 minimum is very similar to that in perpendicular magnetic fields, which holds for all samples and tilt angles. One determines the onset n * s for the ν = 3/5 minimum and plots it versus the tilt angle, as shown in Fig. 9(b). The value n * s turns out to be independent, within the experimental uncertainty, of the tilt angle of the magnetic field. Since the spin splitting is determined by total magnetic field, ∆ s = gµ B B tot , one expects that the onset n * s should decrease with the tilt angle [the inset to Fig. 9(b)], which is in contradiction with the experiment. The authors conclude that the spin origin of the effect can be excluded, revealing its valley origin. The valley splitting ∆ v is expected to be insensitive to the parallel component of the magnetic field [51] so that the value n * s should be independent of the tilt angle, which is consistent with the experiment. Thus, these results indicate the intersection or merging of the quantum levels of composite fermions with different valley indices, which reveals the valley effect on the fractions.
FIG. 9(b): The onset density n*_s of the resistance minimum at ν = 3/5 in a SiGe/Si/SiGe quantum well as a function of the tilt angle. The dashed horizontal line is a fit to the data. The inset schematically (up to a numerical factor) shows the cyclotron energy of composite fermions (the solid line) and the Zeeman energy (the dotted-dashed line) as a function of the magnetic field B at a fixed tilt angle. The slope of the straight line increases with increasing Θ. From Ref. [20].

It is clear that for the occurrence of the crossing or merging of the levels of composite fermions with different valley indices, the functional dependences of both splittings on magnetic field (or electron density) at fixed p should be different. Indeed, the cyclotron energy of composite fermions ℏω*_c is determined by the Coulomb interaction energy e²/κl_B, and the valley splitting ∆_v in a 2D electron system in Si changes linearly with changing magnetic field (or electron density) [52]. In high magnetic fields, the valley splitting strongly exceeds the cyclotron energy of composite fermions so that for the case of p = 3, all three filled levels of composite fermions belong to the same valley [Fig. 9(a)]. As the magnetic field is decreased at fixed p, the lowest level with the opposite valley index should become coincident with the top filled level, leading to the vanishing of the energy gap and the disappearance of the resistance minimum at p = 3. With a further decrease of the magnetic field there should occur either a simple crossing of the levels and reappearance of the gap or merging/locking of the levels accompanied by a gradual change in the fillings of both levels [18]. In analogy with Si MOSFETs, it is very likely that the merging of the composite fermion levels with different valley indices occurs in ultra-low-disorder SiGe/Si/SiGe quantum wells.
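To visualize why the two energy scales must cross, the sketch below evaluates the Coulomb scale e²/κl_B, which sets the composite-fermion cyclotron energy and grows only as √B, assuming κ = 11.7 for Si; the valley splitting, by contrast, grows linearly with B at fixed filling factor.

```python
import math

e_cgs = 4.803e-10                     # electron charge, statcoulomb
hbar_cgs, c = 1.0546e-27, 2.998e10    # erg s, cm/s
kappa = 11.7                          # Si dielectric constant (assumed)

def coulomb_scale_meV(B_tesla):
    """e^2 / (kappa * l_B), with l_B = (hbar c / e B)^(1/2), in meV."""
    l_B = math.sqrt(hbar_cgs * c / (e_cgs * B_tesla * 1e4))   # tesla -> gauss
    return e_cgs**2 / (kappa * l_B) / 1.602e-15               # erg -> meV

for B in (2, 5, 10, 18):
    print(f"B = {B:>2} T: e^2/(kappa l_B) ≈ {coulomb_scale_meV(B):.1f} meV "
          f"(grows as sqrt(B), while Delta_v grows linearly in B)")
```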
VI. CONCLUSIONS
In conclusion, we have reviewed recent experimental results pointing to the band flattening and Landau level merging at the chemical potential in strongly correlated 2D electron systems. It is shown that the occupation numbers of quantum states at the chemical potential can take intermediate values between zero and one, which reveals the non-Fermi-liquid form of the distribution function.
VII. ACKNOWLEDGMENTS
The ISSP group was supported by a Russian Government contract. S.V.K. was supported by NSF Grant No. 1904024.
"year": 2022,
"sha1": "12d6ad14d1a081df64ef267035c3608c525e957a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2204.12565",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aa282d8f593fec86a92111a9fd2bd0b9076c3f8f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Univariate L_p and l_p Averaging, 0 < p < 1, in Polynomial Time by Utilization of Statistical Structure
We present evidence that one can calculate generically combinatorially expensive L_p and l_p averages, 0 < p < 1, in polynomial time by restricting the data to come from a wide class of statistical distributions. Our approach differs from the approaches in the previous literature, which are based on a priori sparsity requirements or on accepting a local minimum as a replacement for a global minimum. The functionals by which L_p averages are calculated are not convex but are radially monotonic, and the functionals by which l_p averages are calculated are nearly so, which are the keys to solvability in polynomial time. Analytical results for symmetric, radially monotonic univariate distributions are presented. An algorithm for univariate l_p averaging is presented. Computational results for a Gaussian distribution, a class of symmetric heavy-tailed distributions and a class of asymmetric heavy-tailed distributions are presented. Many phenomena in human-based areas are increasingly known to be represented by data that have large numbers of outliers and belong to very heavy-tailed distributions. When tails of distributions are so heavy that even medians (L_1 and l_1 averages) do not exist, one needs to consider using l_p minimization principles with 0 < p < 1.
Introduction
Minimization principles based on the l_1 and L_1 norms have recently rapidly become more common due to discovery of their important roles in sparse representation in signal and image processing [1,2], compressive sensing [3,4], shape-preserving geometric modeling [5,6] and robust principal component analysis [7][8][9]. In compressive sensing and sparse representation, it is known that, under proper sparsity conditions (for example, the restricted isometry property [3,4]), l_1 solutions are equivalent to "l_0 solutions", that is, the sparsest solutions, an important result because it allows one to find the solution of a combinatorially expensive l_0 maximum-sparsity minimization problem by a polynomial-time linear programming procedure for minimizing l_1 functionals. When the data follow heavy-tailed statistical distributions and the tails of the distributions are "not too heavy," various l_1 minimization principles, in the form of calculation of medians and quantiles, are primary choices that are efficient and robust against the many outliers [10][11][12]. Such distributions correspond to the uncertainty in many human-based phenomena and activities, including the Internet [13,14], finance [15,16] and other human and physical phenomena [16]. l_1 minimization principles are applicable also to data from light-tailed distributions such as the Gaussian, but, for such distributions, are less efficient than classical procedures (calculation of standard averages and variances).
When tails of the distributions are so heavy that even l_1 minimization principles do not exist, one needs to consider using l_p minimization principles with 0 < p < 1, a topic on which investigation has recently started [2,3,[17][18][19][20]. l_p minimization principles, 0 < p < 1, are of interest because they produce solutions that are in general sparser, that is, closer to l_0 solutions, than l_1 minimization principles [20]. However, when 0 < p < 1, solving l_p minimization principles is generically combinatorially expensive (NP-hard) [18], because l_p minimization principles can have arbitrarily large numbers of local minima. ("Generically" means "in the absence of additional information.") Investigations about polynomial-time l_p minimization, 0 < p < 1, have focused on (1) obtaining local rather than global solutions [2,18,20] and (2) achieving a global minimum by restricting the class of problems to those with sufficient sparsity [3,17,19] (the approach used in compressive sensing). However, local solutions often differ strongly from global solutions, and sparsity restrictions are often not applicable. The fact that the l_0 solution is, relative to other potential solutions, the sparsest solution does not imply that this solution is sparse to any specific degree. The sparsest solution may not be sparse in any absolute sense at all; it is just sparser than any other solution.
The approach that we will investigate in the present paper shares with compressive sensing the strategy of restricting the nature of the problem to achieve polynomial-time performance. However, we do so not by requiring sparsity to some a priori set level but rather by restricting the data to come from a wide class of statistical distributions, an approach not previously considered in the literature. This restriction turns out to be mild, often verifiable and often realistic, since the problem as posed is often meaningful only when the data come from a statistical distribution. The approach in this paper differs from the approaches in the previous literature on l_p minimization principles also in a second way, namely, in that it starts the investigation of l_p minimization principles from consideration of their continuum analogues, L_p minimization principles.
The classes of L_p and l_p minimization principles that we will investigate in this paper are those that represent univariate continuum L_p averaging and discrete l_p averaging, defined as follows. Univariate L_p and l_p averages are the real numbers a at which the following functionals A and B achieve their respective global minima:

A(a) = ∫_(−∞)^(+∞) |x − a|^p ψ(x) dx, (1)

B(a) = Σ_(i=1)^(I) |x_i − a|^p, (2)

where ψ is a probability density function (pdf) that satisfies the conditions given below, and where the x_i are data points from the distribution with pdf ψ. The pdf ψ is assumed to have measurable second derivative and to satisfy the following two conditions:

ψ is radially strictly monotonically decreasing outwards from the mode; (3a)

ψ and dψ/dx are bounded by c|x|^(−β) and c|x|^(−β−1), respectively, for given c and β > p + 1, as |x| → ∞. (3b)

Without loss of generality, we assume that the mode, that is, the x at which ψ achieves its maximum, is at the origin.
In a departure from the traditional use of x as the independent variable of a univariate pdf, we will express univariate pdfs in radial form, with r being the radius measured outward from the mode of the distribution. (This notation is chosen to allow natural generalization to higher dimensions in the future.) With the notation g(r) = ψ(−r) and f(r) = ψ(r), r ≥ 0, functional A can be rewritten in the form

A(a) = ∫_0^∞ |r + a|^p g(r) dr + ∫_0^∞ |r − a|^p f(r) dr. (4)

Functional (4) is finite only when p < β − 1. (5) Consequently, the mean (L_2 average) does not exist for distributions with β ≤ 3, and even the median (L_1 average) does not exist for distributions with β ≤ 2. For example, the median does not exist for the Student t distribution with one degree of freedom, because β = 2 for this distribution. To create meaningful "averages" in these cases, weighted and trimmed sample means have been proposed with success [21]. However, weighted and trimmed sample means require a priori knowledge of the specific distribution and/or of various parameters, knowledge that is often not available. Minimization of the L_p functional (4) or of the l_p functional (2) is, when 0 < p < min{1, β − 1}, an alternative for creating an "average" for a heavy-tailed distribution or of a sample thereof.
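As an illustration of the landscape that such a minimization faces, the following minimal sketch evaluates the discrete functional B(a) of (2) on a grid for a heavy-tailed sample. A Student t sample is used here as a stand-in for Distribution 2, whose exact density is not reproduced above, so that choice, like the grid limits, is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def B(a, x, p):
    """Discrete l_p averaging functional B(a) = sum_i |x_i - a|^p, 0 < p < 1."""
    return np.sum(np.abs(x - a) ** p)

# Heavy-tailed sample; a Student t with 1.5 degrees of freedom stands in for
# Distribution 2 (an assumption, since the exact density is not given above).
x = rng.standard_t(df=1.5, size=2000)

p = 0.5
grid = np.linspace(-5.0, 5.0, 1001)
vals = np.array([B(a, x, p) for a in grid])
print(f"grid minimizer of B(a): {grid[np.argmin(vals)]:.3f}")  # near the mode 0
```

For 0 < p < β − 1, the grid minimizer lands near the mode 0, in line with the analytical results of the next section.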
In the present paper, we will investigate whether, by providing only the information that the data come from a "standard" statistical distribution that satisfies Conditions (3), the L_p and l_p averaging functionals A and B can be minimized in a way that leads to polynomial-time minimization of general L_p and l_p functionals. Specifically, in the next two sections, we will investigate to what extent the L_p and l_p averaging functionals are devoid of local minima other than the global minimum, a key feature in this process. For illustration of the theoretical results, we will present computational results for three types of distributions: a Gaussian distribution (Distribution 1), a class of symmetric heavy-tailed distributions (Distribution 2) and a class of asymmetric heavy-tailed distributions (Distribution 3). In Distributions 2 and 3, α is a real number > 1. Gaussian Distribution 1 is used to show that the results discussed here are applicable not only to heavy-tailed distributions but also to light-tailed distributions. These results are applicable a fortiori to compact distributions with no tails at all (tails uniformly 0). (Analysis and computations were carried out with the uniform distribution and with a pyramidal distribution, two distributions with no tails, but these results will not be discussed here.) While L_p and l_p averages can be calculated for light-tailed and no-tailed distributions, there are more meaningful and more efficient ways, for example, arithmetic averaging, to calculate central points of light-tailed and no-tailed distributions. L_p and l_p averages are most meaningful for heavy-tailed distributions.
L_p Averaging
We present in Figures 1-3 the functionals A(a) for Distributions 1-3, respectively, for various p. These functionals A(a) have one global minimum at or near r = 0, no additional minima, are convex in a neighborhood of the global minimum and are concave outside of this neighborhood. The fact that the A(a) are not globally convex is not important. Each A(a) is radially monotonically increasing outward from its minimum, which is sufficient to guarantee that there is only one global minimum and that there are no other local minima. On every finite closed interval in Figures 1-3 that does not include the global minimum, the derivative dA/da is bounded away from 0. Hence, in all these cases, standard line-search methods converge to the global minimum in polynomial time. The structure of A(a) seen in Figures 1-3 is due to the fact that A(a) is based on a probability density function with strictly monotonically decreasing density in the radial directions outward from the mode. This structure does not generically occur for density functions f(r) and g(r) representing, for example, irregular scattered clusters. However, averaging in general and L_p averaging in particular make little sense when the data are clustered irregularly. The computational results presented in Figures 1-3 suggest the following hypothesis: the structure of the L_p averaging functional A(a) seen in Figures 1-3 and described in the previous paragraph occurs for all distributions that satisfy Conditions (3). For all symmetric distributions, this can be shown as follows. For symmetric distributions (that is, those for which g(r) = f(r)), the L_p averaging functional A(a) can be written as

A(a) = ∫_0^∞ [|r − a|^p + (r + a)^p] f(r) dr, a ≥ 0. (9)

A(a) is symmetric around a = 0, so we need consider only the behavior of A(a) for a ≥ 0. For a ≥ 0, one obtains expressions (10) and (11) for dA/da and d²A/da², respectively, by differentiating the right sides of expressions (9) and (10), respectively, with respect to a. One expresses the integral to be differentiated as the sum of an integral on (0,a) and an integral on (a,∞) and differentiates these two integrals separately. To simplify dA/da to the form given in (10), one integrates by parts and combines the two resulting integrals. From these expressions, one obtains first that dA/da(0) = 0 and d²A/da²(0) > 0, that is, there is a local minimum at a = 0, and second that, for all a > 0, dA/da(a) > 0, that is, A is strictly monotonically increasing for a > 0. Thus, for symmetric pdfs, A(a) has its global minimum at a = 0, that is, the L_p average exists and is equal to the mode of the distribution. There are no places where dA/da = 0 other than at a = 0 and, on every finite closed interval that does not include the mode 0, dA/da is bounded away from 0. Standard line-search methods for calculating the minimum of this A(a) are thus globally convergent.
A general analytical structure for asymmetric distributions analogous to that described above for symmetric distributions is not yet available because, for asymmetric distributions, the properties of A(a) depend on additional properties of the probability density functions f(r) and g(r) that have not yet been clarified. Most of the previous statistical research about two-tailed distributions that extend infinitely in each direction has been focused on symmetric distributions, and it is the symmetric case on which we will focus in the remainder of this paper.
l_p Averaging
It is meaningful to calculate an l_p average of a discrete set of data, that is, the point at which B(a) achieves its global minimum, only for data from a distribution that satisfies Conditions (3) and for which the L_p average exists, that is, for which 0 < p < β − 1. We propose the following algorithm.
Algorithm 1: Algorithm for l_p Averaging
STEP 1. Sort the data x_i, i = 1, 2, ..., I, from smallest to largest. (To avoid proliferation of notation, use the same notation x_i, i = 1, 2, ..., I, for the data after sorting as before.)
STEP 2. Choose an integer q that represents the number of neighbors of a given point in the sorted data set in each direction (lower and higher index) that will be included in a local set of indices to be used in the "window" in Step 4. (The "window size" is thus 2q + 1.)
STEP 3. Choose a point x_j from which to start. (The median of the data, that is, the l_1 average, is generally a good choice for the initial x_j.)
STEP 4. For each k, j − q ≤ k ≤ j + q, calculate B(x_k).
STEP 5. If the x_k that yields the minimum of the B(x_k) calculated in Step 4 is x_j, stop. In this case, x_j is the computed l_p average of the data. Otherwise, let x_k be the new x_j and return to Step 4.
STEP 6. If convergence has not occurred within a predetermined number of iterations, stop and return an error message.
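A direct sketch of Algorithm 1 in Python follows; it assumes only the functional B of (2) and the windowed search described above, with the iteration cap of Step 6 exposed as a parameter.

```python
import numpy as np

def lp_average(x, p, q, max_iter=10000):
    """Algorithm 1: windowed descent over the sorted data points.

    Returns the data point approximating the l_p average, or raises an error
    if the iteration cap is reached (Step 6).
    """
    x = np.sort(np.asarray(x, dtype=float))            # Step 1: sort the data
    I = len(x)
    assert I > 2 * q, "sample smaller than the window"
    B = lambda a: np.sum(np.abs(x - a) ** p)           # functional (2)
    j = np.searchsorted(x, np.median(x))               # Step 3: start at median
    j = min(max(j, q), I - 1 - q)                      # keep the window in range
    for _ in range(max_iter):
        window = range(j - q, j + q + 1)               # Step 4: window of 2q + 1
        k = min(window, key=lambda i: B(x[i]))
        if k == j:                                     # Step 5: converged
            return x[j]
        j = min(max(k, q), I - 1 - q)
    raise RuntimeError("Algorithm 1 did not converge")  # Step 6

# For example, lp_average(sample, p=0.5, q=9) uses the window size
# 2q + 1 = 19 found to be a good choice in Tables 3 and 4.
```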
Remark 1. Algorithm 1 considers the values of B(a) only at the data points x_i and not between data points. For a strictly between two consecutive data points x_i and x_{i+1}, B(a) is concave and is above the line connecting (x_i, B(x_i)) and (x_{i+1}, B(x_{i+1})), so a minimum cannot occur there. It is sufficient, therefore, to consider only the values of B at the points x_i when searching for a minimum. A graph of the points (x_i, B(x_i)), i = 1, 2, ..., I, approximates the graph of the continuum L_p functional A(a), which, for symmetric distributions, has only one local minimum, namely, its global minimum. The graph of the points (x_i, B(x_i)) may have some relatively shallow local minima produced by the irregular spacing of the x_i (cf. Figures 4 below) and/or the asymmetry of the distribution. The window structure of Algorithm 1 is designed to allow the algorithm to "jump over" these local minima on its way to the global minimum.

Remark 2. The cost of Algorithm 1 is polynomial, namely, the cost O(I log I) of the sorting operation of Step 1 plus the cost of the iterations of Step 4, namely, O(I²) (= the number of iterations, which cannot exceed O(I), times the cost O(I) of calculating each iteration). Analogous algorithms for higher-dimensional averages are expected to retain this polynomial-time nature.

In computational experiments, we used samples of size I = 2000 from the symmetric heavy-tailed Distribution 2 with various α, 1 < α ≤ 3, and window sizes 2q + 1 = 7, 9, 11, ..., 25. For comparison with Figures 2, we present in Figures 4 the graphs of the points (x_i, B(x_i)) for the sample from Distribution 2 with α = 2 and p = 0.5 and 0.02. The starting point for Step 3 of Algorithm 1 was chosen to be x_{I−2q}, a point near the end of the right tail (beyond the limited domains shown in Figures 4). As mentioned in Step 3 of Algorithm 1, the median of the data is a much better choice for a starting point. However, choosing a point near the right tail makes the iterations of Algorithm 1 traverse a large distance before converging to an approximation of the l_p average and thus provides an excellent test for the robustness of Algorithm 1.

Computational results for p = 0.5, 0.1 and 0.02 and for window sizes 2q + 1 = 7, 13, 19 and 25 are presented in Tables 1-4. For reference, we note that the continuum L_p averages of Distribution 2, when they exist, that is, when p < α − 1, are all 0. Thus, the errors of the l_p averages in Tables 1-4 are the same as the l_p averages themselves. The entries in Tables 1-4 indicate that, for all cases with p < α − 1, the l_p average computed by Algorithm 1 is an excellent approximant of the L_p average 0, given the large number of outliers and the huge spread of the data in Distribution 2. (For α = 3 and α = 1.02, the ranges of the data are [−16.0, 22.6] and [−6.44 × 10^154, 5.02 × 10^169], respectively. For α = 2, 1.5, 1.1, 1.05, 1.04 and 1.03, the ranges are between these two ranges.) The entries for p = 0.5 with α = 1.5 and for p = 0.1 with α = 1.1, 1.05, 1.04 and 1.03 in Tables 1 and 2 indicate that, in a few cases when p is equal to or only slightly greater than α − 1, the l_p average yielded by Algorithm 1 can still be a good approximant of the center of the distribution, in spite of the fact that the l_p average is theoretically meaningful only when p < α − 1. The entries for p = 0.5 with α = 1.1, 1.05, 1.04, 1.03 and 1.02 and for p = 0.1 with α = 1.02 indicate that, in accordance with expectations, when p is significantly greater than α − 1, the l_p average produced by Algorithm 1 is not a meaningful approximant of the center of the distribution. Since a larger window size is of assistance when attempting to "jump over" local minima, it is expected that l_p averages should converge to the L_p average 0 as the window size 2q + 1 increases (and as the sample size increases). The results in Tables 1-4 confirm that, for the samples used in these calculations, increasing the window size does indeed increase the accuracy of the l_p averages as approximations of the L_p average 0.
In addition, the results in Tables 3 and 4 for p < α − 1 show that, for the samples used in these calculations, there is an optimal window size, namely, 2q + 1 = 19, that produces l_p averages that are just as good as those produced by the larger window size 2q + 1 = 25 but (due to the smaller window size) requires less computational effort. Algorithm 1 is applicable to heavy-tailed distributions in general, but the rule for choosing q will certainly be dependent on the specific class of distributions under consideration. While this rule is not yet known precisely, we can provide here a description of the principles that will likely be the foundations for the rule. The choice of q is related to how wide the local minima in the discrete functional B are. The local minima of B occur at places where there are clusters of data points (due to expected statistical variation in the sample). Understanding the relationships between (1) the clustering properties of samples from the given class of distributions, (2) the widths of the local minima as functions of the clustering and (3) the p-dependent analytical properties of functional B will likely yield the rule for choosing q.
Conclusions
The widespread impression that minimization of L_p and l_p functionals, 0 < p < 1, is combinatorially expensive is valid for general situations in which no structure of the data is known. However, the results in this paper suggest that, when the data come from an appropriate statistical distribution, L_p and l_p averages can be calculated in polynomial time. The approach of the paper is applicable without precise knowledge of the parameters of the distribution. One does not need precise knowledge of the parameters but rather only generalizations of Conditions (3), an upper bound on the exponent −β of the tail density and additional conditions for asymmetric distributions and for setting up a rule for choosing q in Algorithm 1.
Topics for future research include:
• Quantitative rules for using information about the underlying continuum distribution to choose the q of Algorithm 1 based on a user's preferred tradeoff between maximum accuracy and minimum computational burden.
• Investigation of the advantages and disadvantages of introducing smoothing in the B(x_k) calculated in Step 4 of Algorithm 1 to increase the robustness against shallow local minima; connection of the smoothing with properties of the underlying distributions.
• Description of the class(es) of symmetric and asymmetric univariate and multivariate distributions for which radially strictly monotonic L_p averaging functionals and radially nearly strictly monotonic l_p averaging functionals can be created and thus for which L_p and l_p averages can be calculated in polynomial time.
• Investigation of convergence of the l_p average to the L_p average and of related issues of efficiency, optimality, breakdown point, influence function, etc.
• Investigation of the conditions under which L_p and l_p averages converge to the mode as p → 0.
• Treatment of more general univariate and multivariate l_p minimization problems, including but not limited to l_p regression and matrix-constrained l_p minimization, for example, minimization of (12) (cf. [17,18]). (The l_p averaging process considered in the present paper can be expressed in format (12).)

Many phenomena in human-based areas (sociology, cognitive science, psychology, economics, human networks, social media, etc.) are increasingly known to be represented by data that have large numbers of outliers and belong to very heavy-tailed distributions, which suggests that L_p and l_p averaging, L_p and l_p regression and more general L_p and l_p minimization tasks, 0 < p < 1, will be important in practice. The results of the present paper provide the first indication that one may be able to solve, in polynomial time, generically combinatorially expensive L_p and l_p minimization problems for these phenomena by requiring only "natural" statistical structure, without having to impose restrictions such as sparsity and without having to accept suboptimal local solutions instead of optimal global solutions.
Table 1. Sample l_p averages calculated by Algorithm 1 with window size 2q + 1 = 7 for the 2000-point data set from Distribution 2.
Table 2. Sample l_p averages calculated by Algorithm 1 with window size 2q + 1 = 13 for the 2000-point data set from Distribution 2.
Table 3. Sample l_p averages calculated by Algorithm 1 with window size 2q + 1 = 19 for the 2000-point data set from Distribution 2.
Table 4. Sample l_p averages calculated by Algorithm 1 with window size 2q + 1 = 25 for the 2000-point data set from Distribution 2.
"year": 2012,
"sha1": "8a98c4bb032a468a5347d04e8a797143911884a1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4893/5/4/421/pdf?version=1349436608",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8a98c4bb032a468a5347d04e8a797143911884a1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Measurements-Based Channel Models for Indoor LiFi Systems
Abstract-Light-fidelity (LiFi) is a fully-networked bidirectional optical wireless communication (OWC) that is considered a promising solution for high-speed indoor connectivity. Unlike in conventional radio frequency wireless systems, the OWC channel is not isotropic, meaning that the device orientation affects the channel gain significantly. However, due to the lack of proper channel models for LiFi systems, many studies have assumed that the receiver is vertically upward and randomly located within the coverage area, which is not a realistic assumption from a practical point of view. In this paper, novel realistic and measurement-based channel models for indoor LiFi systems are proposed. Precisely, the statistics of the channel gain are derived for the case of randomly oriented stationary and mobile LiFi receivers. For stationary users, two channel models are proposed, namely, the modified truncated Laplace (MTL) model and the modified Beta (MB) model. For LiFi users, two channel models are proposed, namely, the sum of modified truncated Gaussian (SMTG) model and the sum of modified Beta (SMB) model. Based on the derived models, the impact of random orientation and spatial distribution of LiFi users is investigated, where we show that the aforementioned factors can strongly affect the channel gain and system performance.
A. Motivation
The total data traffic is expected to reach about 49 exabytes per month by 2021, while in 2016, it was approximately 7.24 exabytes per month [1]. With this drastic increase, the fifth generation (5G) networks and beyond must urgently provide high data rates, seamless connectivity, robust security and ultra-low latency communications [2]-[4]. In addition, with the emergence of internet-of-things (IoT) networks, the number of devices connected to the internet is increasing dramatically [5], [6]. This fact implies not only a significant increase in data traffic, but also the emergence of IoT services with stringent requirements. Such requirements include high data rates, high connection density, ultra-reliable low-latency communication (URLLC) and security. However, traditional radio-frequency (RF) networks, which are already crowded, are unable to satisfy these high demands [7]. Network densification [8], [9] has been proposed as a solution to increase the capacity and coverage of 5G networks. However, with the continuous dramatic growth in data traffic, researchers from both industry and academia are exploring new network architectures, new transmission techniques and new spectra to meet these demands.
Light-fidelity (LiFi) is a novel bidirectional, high-speed and fully networked wireless communication technology that uses visible light as the propagation medium in the downlink for the purposes of illumination and communication. It can use infrared in the uplink so that the illumination constraint of a room remains unaffected, and also to avoid interference with the visible light in the downlink [10]. LiFi offers a number of important benefits that have made it favorable for future technologies. These include the very large, unregulated bandwidth available in the visible light spectrum (more than 2600 times greater than the whole RF spectrum), high energy efficiency [11], the straightforward deployment that uses off-the-shelf light emitting diode (LED) and photodiode (PD) devices at the transmitter and receiver ends, respectively, and enhanced security, as light does not penetrate through opaque objects [12]. However, one of the key shortcomings of the current research literature on LiFi is the lack of appropriate statistical channel models for system design and handover management purposes.
B. Literature Review
Some statistical channel models for stationary and uniformly distributed users were proposed in [13]-[15], where a fixed incidence angle was assumed in [13], [14] and a random incidence angle was assumed in [15]. However, accounting for mobility, which is an inherent feature of wireless networks, requires a more realistic and non-uniform model for users' spatial distribution. Several mobility models, such as the random waypoint (RWP) model, have been proposed in the literature to characterize the spatial distribution of mobile users in indoor RF systems [16], [17]. However, these studies were limited to the RF spectrum, where statistical fading channel models were used. Recently, [18], [19] employed the RWP mobility model to characterize the signal-to-noise ratio (SNR) for indoor LiFi systems. In [18], the device orientation was assumed constant over time, which is not a realistic scenario, whereas in [19], the incidence angle of optical signals was assumed to be uniformly distributed, which is not a proper model for the incidence angle, since it does not account for the actual statistics of device orientation.
Device orientation can significantly affect the users' throughput. The majority of studies on OWC assume that the device always faces vertically upward. This assumption may have been driven by the lack of having a proper model for orientation, and/or to make the analysis tractable. Such an assumption is only accurate for a limited number of devices (e.g., laptops with a LiFi dongle), while the majority of users use devices such as smartphones, and in real-life scenarios, users tend to hold their device in a way that feels most comfortable. Such orientation can affect the users' throughput remarkably and it should be analyzed carefully. Even though a number of studies have considered the impact of random orientation in their analysis [20]- [27], all these studies assume a predefined model for the random orientation of the receiver. However, little or no evidence is presented to justify the assumed models. Nevertheless, none of these studies have considered the actual statistics of device orientation and have mainly assumed uniform or Gaussian distribution with hypothetical moments for device orientation. Recently, and for the first time, experimental measurements were carried out to model the polar and azimuth angles of the user's device in [28]- [31]. It is shown that the polar angle can be modeled by either a truncated Laplace distribution for the case of stationary users or a truncated Gaussian distribution for the case of mobile users, while the azimuth angle follows a uniform distribution for both cases. Motivated by these results, the impact of the random receiver orientation on the SNR and the bit error rate (BER) was studied for indoor stationary LiFi users in [32]. Solutions to alleviate the impact of device random orientation on the received SNR and throughput were proposed in [33]- [35]. In [33], the impact of the random receiver orientation, user mobility and blockage on the SNR and the BER was studied for indoor mobile LiFi users. Then, simulations of BER performance for spatial modulation using a multi-directional receiver configuration with consideration of random device orientation was evaluated. In [34], other multiple-input multiple-output (MIMO) techniques in the presence of random orientation were studied. The authors in [35], proposed an omni-directional receiver which is not affected by the device random orientation. It is shown that the omnidirectional receiver reduces the SNR fluctuations and improves the user throughput remarkably. All these studies emphasize the significance of incorporating the random spatial distribution of LiFi users along with the random orientation of LiFi devices into the analysis. However, proper statistical channel models for indoor LiFi systems that encompass both the random spatial distribution and the random device orientation of LiFi users were not derived in the literature, which is the focus of this work.
C. Contributions and Outcomes
Against the above background, we investigate in this paper the channel statistics of indoor LiFi systems. Novel realistic and measurement-based channel models for indoor LiFi systems are proposed, and the proposed models encompass the random motion and the random device orientation of LiFi users. Precisely, the statistics of the line-of-sight (LOS) channel gain are derived for stationary and mobile LiFi users with random device orientation, using the measurements-based models of device orientation derived in [28]. For stationary LiFi users, the model of randomly located user is employed to characterize the spatial distribution of the LiFi user, and the truncated Laplace distribution is used to model the device orientation. For mobile LiFi users, the RWP mobility model is used to characterize the spatial distribution of the user and the truncated Gaussian distribution is used to model the device orientation. In light of the above discussion, we may summarize the paper contributions as follows.
• For stationary LiFi users, two channel models are proposed, namely the modified truncated Laplace (MTL) model and the modified Beta (MB) model. For mobile LiFi users, two channel models are also proposed, namely the sum of modified truncated Gaussian (SMTG) model and the sum of modified Beta (SMB) model. The accuracy of the derived models is then validated using the Kolmogorov-Smirnov distance (KSD) criterion.
• The BER performance of LiFi systems is investigated for both cases of stationary and mobile users using the derived statistical channel models. We show that the random orientation and the random spatial distribution of LiFi users can have a strong effect on the error performance of LiFi systems.
• We propose a novel design of indoor LiFi systems that can alleviate the effects of random device orientation and random spatial distribution of LiFi users. We show that the proposed design is able to guarantee good error performance for LiFi systems under the realistic behaviour of LiFi users.
• The proposed statistical LiFi channel models are of great significance. In fact, for any LiFi transceiver design to be efficient, it needs to incorporate the channel model into the design. Therefore, having realistic channel models will help in designing realistic LiFi transceivers.
D. Outline and Notations
The rest of the paper is organized as follows. The system model is presented in Section II. Section III presents the exact statistics of the LOS channel gain. In Section IV, statistical channel models for stationary and mobile LiFi users are proposed. Simulation results and discussions are provided in Section V. Finally, the paper is concluded in Section VI, and future research directions are highlighted.
The notations adopted throughout the paper are summarized in Table I. In addition, for every random variable X, f_X and F_X denote the probability density function (PDF) and the cumulative distribution function (CDF) of X, respectively. The function δ(·) denotes the Dirac delta function. The function U_[a,b](·) denotes the unit step function within [a, b], i.e., for all x ∈ R, U_[a,b](x) = 1 if x ∈ [a, b], and 0 otherwise.
II. SYSTEM MODEL
Consider the indoor LiFi cellular system shown in Fig. 1, which consists of a LiFi attocell with radius R (green attocell) that is equipped with a single access point (AP) installed at height h_a from the ground. The LiFi attocell is concentric with a larger circular area with a radius R_e (R ≤ R_e), within which the LiFi user equipment (UE) can be located. The received signal at the UE is expressed as

Y = HS + N, (1)

where H is the downlink channel gain, S is the transmitted signal and N is an additive white Gaussian noise (AWGN) that is N(0, σ²) distributed. Since LiFi signals should be positive valued and satisfy a certain peak-power constraint [36], we assume that 0 ≤ S ≤ A, where A ∈ R_+ denotes the maximum allowed signal amplitude. The channel gain H is the sum of a LOS component and a non-line-of-sight (NLOS) component resulting from reflections off walls. However, it was observed in [37] that, for indoor LiFi scenarios, the optical power received from reflected signals is negligible compared to the LOS component, especially if the LiFi receiver is far away from the walls or is located close to the cell center. In this case, the contribution of the NLOS component is very small compared to that of the LOS component. Based on this, only the LOS component of H is considered, and the channel gain H is expressed as in (2) [37], where, as shown in Fig. 2, m is the order of the Lambertian emission, given by m = −log(2)/log(cos(φ_1/2)), such that φ_1/2 represents the semi-angle of a LED; d = √(r² + (h_a − h_u)²) is the distance between the AP and the UE; φ ∈ [0, φ_1/2] is the radiation angle; ψ ∈ [0, π] is the incidence angle; and Ψ_c is the field of view of the PD. In (2), H_0 is a constant gain factor given in (3), where ρ is the electrical-to-optical conversion factor, R_p is the PD responsivity, A_g is the geometric area of the PD and n_c is the refractive index of the PD's optical concentrator.
Based on the results of [28], cos(φ) and cos(ψ) can be expressed in terms of the Cartesian coordinates (x_a, y_a, z_a) and (x_u, y_u, z_u) of the AP and the UE, respectively, where, as shown in Fig. 3, Ω and θ are the angle of direction and the elevation angle of the UE, respectively. The angle of direction Ω represents the angle between the direction the user is facing and the X-axis, whereas the elevation angle θ is the angle between the normal vector of the PD, n_rx, and the Z-axis. Based on Fig. 2, we have (x_a, y_a, z_a) = (0, 0, h_a) and (x_u, y_u, z_u) = (r cos(α), r sin(α), h_u). Therefore, cos(φ) = (h_a − h_u)/d, and cos(ψ) can be expressed as

cos(ψ) = [r cos(Ω − α) sin(θ) + (h_a − h_u) cos(θ)]/d. (5)

Consequently, the LOS channel gain H is expressed as in (6). Based on the above, we conclude that the random behaviour of the channel gain H depends mainly on four random variables: r, α, Ω and θ. Precisely, the variables r and α model the randomness of the instantaneous location of the LiFi receiver, whereas the variables Ω and θ model the randomness of the instantaneous UE orientation. Additionally, the statistics of the polar distance r and the elevation angle θ depend on the motion of the LiFi user, either stationary or mobile. Consequently, the statistics of the LOS channel gain H inherently depend on the LiFi user activity. In the following section, the exact statistics of the channel gain H are derived for the cases of stationary and mobile LiFi users.
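A minimal sketch of the resulting LOS geometry is given below. It assumes the standard Lambertian LOS form H = H_0 (m + 1)/(2π d²) cos^m(φ) cos(ψ) for ψ ≤ Ψ_c (the exact placement of the constants in (2) and (3) is not recoverable here, so H_0 is treated as a lumped gain), together with cos(φ) = (h_a − h_u)/d for a downward-facing AP and the expression for cos(ψ) in (5).

```python
import numpy as np

def los_channel_gain(r, alpha, Omega, theta, h_a, h_u, m, Psi_c, H0=1.0):
    """LOS channel gain for an AP at (0, 0, h_a) and a UE at polar distance r.

    Assumed Lambertian form: H = H0 * (m+1) / (2*pi*d^2) * cos(phi)^m * cos(psi)
    for psi <= Psi_c, and H = 0 otherwise. Angles are in radians.
    """
    d = np.sqrt(r**2 + (h_a - h_u) ** 2)
    cos_phi = (h_a - h_u) / d                       # downward-facing AP
    cos_psi = (r * np.cos(Omega - alpha) * np.sin(theta)
               + (h_a - h_u) * np.cos(theta)) / d   # expression (5)
    in_fov = (cos_psi >= np.cos(Psi_c)) & (cos_psi >= 0.0)
    return np.where(in_fov,
                    H0 * (m + 1) / (2.0 * np.pi * d**2) * cos_phi**m * cos_psi,
                    0.0)
```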
III. CHANNEL STATISTICS OF STATIONARY AND MOBILE USERS WITH RANDOM DEVICE ORIENTATION
The objective of this section is to derive the exact statistics of the LOS channel gain H for the cases of stationary and mobile LiFi users. In subsection III-A, we present the statistics of the four main factors r, α, Ω and θ for each case, from which we derive in subsection III-B the exact statistics of H.
A. Parameters Statistics
From a statistical point of view, the instantaneous location and the instantaneous orientation of the LiFi receiver are independent. Thus, the pairs of random variables (r, α) and (Ω, θ) are independent. In addition, based on the results of [18], [38], the random variables r and α are independent, since r defines the polar distance and α defines the polar angle. On the other hand, based on the results of [28], the angle of direction Ω and the elevation angle θ are also statistically independent. Therefore, the random variables r, α, Ω and θ are mutually independent. In addition, for both cases of stationary and mobile LiFi users, the random variables α and Ω are uniformly distributed within [0, 2π] [18], [28], [38]. However, this is not the case for the polar distance r and the elevation angle θ. In fact, as we will show in the following, the statistics of r and θ depend on whether the LiFi receiver is stationary or mobile.
1) Stationary Users: When the LiFi user is stationary, its location is fixed. However, the LiFi user is randomly located, i.e., its instantaneous location is uniformly distributed within the circular area of radius R_e. In this case, the PDF of the polar distance r is f_r(r) = 2r/R_e², for 0 ≤ r ≤ R_e [38]. Additionally, the authors in [28] presented a measurement-based study of the UE orientation, where they derived statistical models for the elevation angle θ. In this study, they show that, for stationary users, the elevation angle θ follows a truncated Laplace distribution, whose PDF is given in (7), with µ_θ = 41.39° and σ_θ = 7.68°.
2) Mobile Users: For mobile users, and especially in indoor environments, the UE motion represents the user's walk, which is equivalent to a 2-D topology of the RWP mobility model, where the direction, velocity and destination points (waypoints) are all selected randomly. Based on [16], [17], the spatial distribution of the LiFi receiver is polynomial in terms of the polar distance r. Moreover, it was shown in the same measurement-based study in [28] that, for mobile users, the elevation angle θ follows a truncated Gaussian distribution, whose PDF is given in (8), with µ_θ = 29.67° and σ_θ = 7.78°.
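The statistics above suffice to sample the four underlying random variables. The sketch below draws (r, α, Ω, θ) for a stationary user: r by inverse-CDF sampling of f_r(r) = 2r/R_e², α and Ω uniformly on [0, 2π], and θ by rejection from a Laplace law with the measured moments. The truncation range [0°, 90°] and the scale b = σ_θ/√2 are assumptions, since the exact truncated-Laplace parametrization of [28] is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_stationary(n, R_e, mu_deg=41.39, sigma_deg=7.68):
    """Draw (r, alpha, Omega, theta) for a stationary, randomly located UE.

    r has PDF 2r/R_e^2 on [0, R_e] (inverse-CDF sampling); alpha and Omega are
    uniform on [0, 2*pi]; theta follows a Laplace law with the measured moments,
    truncated to [0, 90] degrees by rejection (range and scale are assumptions).
    """
    r = R_e * np.sqrt(rng.uniform(size=n))
    alpha = rng.uniform(0.0, 2.0 * np.pi, size=n)
    Omega = rng.uniform(0.0, 2.0 * np.pi, size=n)
    b = sigma_deg / np.sqrt(2.0)          # Laplace scale from std (assumption)
    theta = np.empty(n)
    filled = 0
    while filled < n:
        cand = rng.laplace(mu_deg, b, size=n)
        cand = cand[(cand >= 0.0) & (cand <= 90.0)][: n - filled]
        theta[filled:filled + len(cand)] = cand
        filled += len(cand)
    return r, alpha, Omega, np.deg2rad(theta)
```

Feeding these draws (with r restricted to [0, R], as in the next subsection) into the LOS gain sketch above yields an empirical f_H. For mobile users, one would swap the Laplace draw for a truncated Gaussian with µ_θ = 29.67° and σ_θ = 7.78° and replace the law of r by the RWP polynomial law.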
B. Channel Statistics
As stated in Section II, the LiFi receiver can be located anywhere inside the outer cell with radius R_e. However, it is connected to the desired AP only if it is located inside the LiFi attocell, i.e., if r ∈ [0, R]. In other words, in order to have a communication link between the desired AP and the LiFi receiver, the only admitted values of the polar distance r are those within the range [0, R]. Due to this, we constrain the range of r to [0, R], and therefore, the exact PDF of the polar distance r becomes f̃_r(r) = f_r(r)/F_r(R), for r ∈ [0, R], where F_r denotes the CDF of r. Consequently, the PDF of the distance d = √(r² + (h_a − h_u)²) is obtained from f̃_r by the standard transformation of random variables. On the other hand, consider the random variable cos(Ω − α) appearing in (6). Since Ω and α are independent and uniformly distributed within [0, 2π], and using the PDF transformation of random variables, cos(Ω − α) follows the arcsine distribution on [−1, 1]. Thus, the PDF and CDF of cos(Ω − α) are, respectively,

f_cos(Ω−α)(x) = 1/(π√(1 − x²)) and F_cos(Ω−α)(x) = 1/2 + arcsin(x)/π, −1 < x < 1. (11)

Based on this, the exact PDF of the channel gain H is given in the following theorem.
Theorem 1. The exact PDF of the LOS channel gain H is given in (12), where F_cos(ψ)(cos(Ψ_c)) is given in (13), the function g_H is expressed as shown in (15), and the functions v and J_H entering g_H are defined in the accompanying expressions.

Proof. See Appendix A.
The exact CDF of the LOS channel gain H is also provided in (40) in Appendix A. On the other hand, note that the term h → F_cos(ψ)(cos(Ψ_c))δ(h) expresses the effect of the field of view Ψ_c on the LOS channel gain H.
As can be seen in Theorem 1, the closed-form expression of the exact PDF of the LOS channel gain H in (12) is neither straightforward nor tractable, since it involves some complex and atypical integrals. Due to this, in order to provide simple and tractable channel models for indoor LiFi systems, we propose in the following section some approximations of the PDF of H in (12) for the cases of stationary and mobile LiFi users.
IV. APPROXIMATE PDFS OF LIFI LOS CHANNEL GAIN
In this section, our objective is to derive some approximations for the PDF of H, starting from the results of Theorem 1. The cases of stationary and mobile LiFi users are investigated separately in subsections IV-A and IV-B, respectively.
A. Stationary Users
An approximate expression of the PDF of the LOS channel gain H for the case of stationary LiFi users is given in the following theorem.
Theorem 2. For the case of stationary users, an approximate expression of the PDF of the channel gain H is given by

f_H(h) ≈ F_cos(ψ)(cos(Ψ_c))δ(h) + (1/h^ν) g(h),

where ν > 0 and g is a function with support range [h*_min, h_max].

Proof. See Appendix B.
The approximation of the PDF of the LOS channel gain H provided in Theorem 2 captures two main factors, namely the random location and the random orientation of the UE. The functions h → 1/h^ν and h → g(h) express, respectively, the effects of the random location of the receiver and of the random orientation of the UE on the LOS channel gain H. At this point, the missing part is the function g that provides the best approximation of the PDF f_H of the LOS channel gain. In the following, we provide two approximate expressions for the function g.
1) The Modified Truncated Laplace (MTL) Model: Since the function h → g(h) expresses the effect of the random orientation of the UE on the LOS channel gain H, and motivated by the fact that the elevation angle θ follows a truncated Laplace distribution as shown in (7), one reasonable choice for g is the Laplace distribution. Consequently, an approximate expression of the PDF of the LOS channel gain H is given in (18), where M_1(−ν, µ_H, b_H) is a normalization factor given in (19), in which G_1 is given in (20). Using the exact PDF of H, the non-centered moments of the LOS channel gain H are given in (21), whereas using the approximate PDF of H in (18), the non-centered moments are given in (22). Therefore, since only three parameters need to be determined, namely (ν, µ_H, b_H), they can be obtained by solving the system of equations in (23).

2) The Modified Beta (MB) Model: The exact PDF of the LOS channel gain H involves the integral of a function of the form (x, y) → f_cos(Ω−α)(g(x, y)). Since cos(Ω − α) follows the arcsine distribution, and based on the fact that the arcsine distribution is a special case of the Beta distribution, we approximate the function g with a Beta distribution. Consequently, an approximate expression of the PDF of the LOS channel gain H is given in (24), where α_H > 0, β_H > 0 and M_2(−ν, α_H, β_H) is a normalization factor given in (25), such that G_2 is given in (26), in which ₂F̃₁ denotes the regularized hypergeometric function.
Based on the above, it remains to derive the parameters (ν, α_H, β_H) of f_H. Similar to the case of the MTL model, one approach to do this is through moment matching. Specifically, (ν, α_H, β_H) can be obtained by solving the system of equations in (23), with the moments m^a_i, i = 1, 2, 3, computed in this case from the PDF in (24).
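The moment-matching step can be sketched as follows. Since the closed forms G_1, G_2 and the moment expressions are not reproduced here, the sketch computes the model moments by numerical quadrature of the assumed MTL density and matches them to empirical moments of channel samples; the starting point of the solver is a heuristic assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def fit_mtl(h_samples, h_min, h_max):
    """Fit the MTL parameters (nu, mu_H, b_H) by matching the first three
    non-centered moments, cf. (23). The normalization is computed by numerical
    quadrature instead of the closed form G_1; only the continuous part of the
    density on [h_min, h_max] is used (the delta mass at h = 0 is dropped)."""
    h = np.asarray(h_samples, dtype=float)
    h = h[h > 0.0]
    m_emp = [np.mean(h**i) for i in (1, 2, 3)]

    def model_moments(params):
        nu, mu, b = params
        b = abs(b) + 1e-9                 # guard against nonphysical iterates
        dens = lambda x: x**(-nu) * np.exp(-np.abs(x - mu) / b)
        M = quad(dens, h_min, h_max)[0]
        return [quad(lambda x: x**i * dens(x), h_min, h_max)[0] / M
                for i in (1, 2, 3)]

    residual = lambda params: [mm - me
                               for mm, me in zip(model_moments(params), m_emp)]
    x0 = [1.0, np.mean(h), np.std(h)]     # heuristic starting point (assumption)
    return fsolve(residual, x0)
```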
B. Mobile Users
An approximate expression of the PDF of the LOS channel gain H for the case of mobile LiFi users is given in the following theorem.
Theorem 3. For the case of mobile users, an approximate expression of the PDF of the channel gain H is given by

f_H(h) ≈ F_cos(ψ)(cos(Ψ_c))δ(h) + Σ_(j=1)^(3) (1/h^(ν_j)) g_j(h), (29)

where, for j = 1, 2, 3, ν_j > 0 and g_j is a function with support range [h*_min, h_max].
Proof. See Appendix C.
It is important to highlight here that, for j = 1, 2, 3, the functions h → 1/h^(ν_j) and h → g_j(h) express, respectively, the effects of user mobility and of the random orientation of the UE on the LOS channel gain H. At this point, the missing part is the functions g_j, j = 1, 2, 3, that provide the best approximation of the PDF f_H of the LOS channel gain. In the following, we provide two expressions for each function g_j, j = 1, 2, 3.

1) The Sum of Modified Truncated Gaussian (SMTG) Model:
Since, for j = 1, 2, 3, the functions h → g_j(h) express the effect of the random orientation of the UE on the channel gain H, and motivated by the fact that, for the case of mobile LiFi users, the elevation angle θ follows a truncated Gaussian distribution as shown in (8), one reasonable choice for the functions g_j is the truncated Gaussian distribution. Consequently, an approximate expression of the PDF of the LOS channel gain H is given in (30), where, for j = 1, 2, 3, µ_H,j ∈ [h*_min, h_max], σ_H,j > 0 and M_3(−ν_j, µ_H,j, σ_H,j) is a normalization factor given in (31).
Now, in order to have the complete closed-form expression of f_H, we have to determine the parameters {(ν_j, µ_H,j, σ_H,j), j = 1, 2, 3}. As in the stationary-users case, one approach to determine these parameters is through moment matching. Specifically, since only nine parameters need to be determined, namely {(ν_j, µ_H,j, σ_H,j) | j = 1, 2, 3}, they can be obtained by solving the system of equations in (32), with the moments m^a_i, i = 1, 2, ..., 9, computed in this case from the PDF in (30).

2) The Sum of Modified Beta (SMB) Model: Motivated by the same reasons as for the MB model in Section IV-A2, we approximate each function g_j, for j = 1, 2, 3, with a Beta distribution. Consequently, an approximate expression of the PDF of the LOS channel gain H is given in (34), where α_H,j > 0, β_H,j > 0 and M_2(−ν_j, α_H,j, β_H,j) is given in (25). Finally, it remains to derive the parameters {(ν_j, α_H,j, β_H,j) | j = 1, 2, 3} of f_H. Similar to the SMTG model, these parameters can be obtained by solving the system of equations in (32), with the moments m^a_i, i = 1, 2, ..., 9, computed in this case from the PDF in (34).
C. Summary of the Proposed Models
A detailed algorithm for implementing the proposed statistical channel models for indoor LiFi systems is presented in Algorithm 1.

Algorithm 1:
1. Inputs: i) attocell parameters (e.g., R and h_a); ii) AP's parameters (ρ, φ_1/2); iii) UE's height h_u; iv) UE's parameters (R_p, n_c, A_g, Ψ_c).
2. Calculate H_0 as shown in (3).
3. Calculate F_cos(ψ)(cos(Ψ_c)) as shown in (13).
4. If the LiFi user is stationary:
   i) MTL model: a) estimate the parameters using (23); b) inject the parameters into the PDF in (18).
   ii) MB model: a) estimate the parameters using (23); b) inject the parameters into the PDF in (24).
elseif the LiFi user is mobile:
   i) SMTG model: a) estimate the parameters using (32); b) inject the parameters into the PDF in (30).
   ii) SMB model: a) estimate the parameters using (32); b) inject the parameters into the PDF in (34).
end
V. SIMULATION RESULTS AND DISCUSSIONS
In this paper, we consider a typical indoor LiFi attocell [18], [19]. The parameters used throughout the paper are shown in Table II. In Subsection V-A, we present the PDF and CDF of the LOS channel gain H for the cases of stationary and mobile LiFi users. In Subsection V-B, we investigate the error performance of indoor LiFi systems using the derived statistics of the LOS channel gain H. Finally, based on the error performance presented in V-B, we propose in Subsection V-C an optimized design of the indoor cellular system that can enhance the performance of LiFi systems.
A. Channel Statistics
For stationary LiFi users, Figs. 4 and 5 present the theoretical, simulated and approximated PDF and CDF of the LOS channel gain H for an attocell radius of R = 1 m and R = 2.5 m, respectively. For both cases, two different values of the field of view of the UE were considered, namely Ψ_c = 90° and 60°. These figures show that the proposed MTL and MB models offer a good approximation of the distribution of the LOS channel gain H. Analytically, in order to evaluate the goodness of the proposed MTL and MB models, we use the well-known Kolmogorov-Smirnov distance (KSD) [39]. In fact, the KSD measures the maximum absolute distance between two distinct CDFs F_1 and F_2 [39], i.e., KSD = sup_x |F_1(x) − F_2(x)|. (36) Obviously, smaller values of the KSD correspond to more similarity between the distributions. In our case, the KSDs of the MTL and MB models are shown in Table III, where we can see that the KSD of the MB model is lower. In other words, when (R, Ψ_c) = (1 m, 90°), the MB model offers better accuracy than the MTL model. This is mainly due to the assumptions made for both models. In fact, when the radius of the attocell R is small, and by referring to (5), the random variable cos(Ω − α) is dominant in cos(ψ). Hence, assuming that the distribution of the random orientation of the UE can be approximated by a Beta distribution makes more sense.
B. Error Performance

For mobile users, Fig. 8 shows that the BER results of the SMTG and the SMB models match the simulated BER perfectly, both when (R, Ψ_c) = (2.5 m, 90°) and when Ψ_c = 60°. However, for the case when (R, Ψ_c) = (1 m, 90°), we remark that the BER results of the SMB model match the simulated BER better than those of the SMTG model. Similar to the case of stationary users, the SMB model offers better accuracy than the SMTG model when (R, Ψ_c) = (1 m, 90°) due to the assumptions made for both models. Fig. 8 also shows two important facts about the BER performance of LiFi users. First, it can be seen that the BER performance degrades heavily when either the radius of the attocell R increases or the field of view of the LiFi receiver decreases. Second, the BER saturates as the transmitted optical power increases. These two facts can be explained by the following corollary.
Corollary 1. At high transmitted optical power P_opt, the average probability of error of M-ary pulse amplitude modulation (PAM) for the considered LiFi system is given by

P_e ≈ F_cos(ψ)(cos(Ψ_c))/2. (37)

Proof. See Appendix D.
The result of Corollary 1 shows that, even when the transmitted optical power P_opt is high, the BER saturates at F_cos(ψ)(cos(Ψ_c))/2. This result is directly related to the cases when the AP is out of the FOV of the LiFi receiver. On the other hand, based on its expression in (19), F_cos(ψ)(cos(Ψ_c)) is a function of the attocell radius R and the field of view of the receiver Ψ_c. Therefore, since d_max increases as R increases, F_cos(ψ)(cos(Ψ_c)) is an increasing function of R. In addition, since x → F_cos(ψ)(x) is a CDF, it is an increasing function, and due to the fact that x → cos(x) is a decreasing function within [0, π/2], F_cos(ψ)(cos(Ψ_c)) increases as Ψ_c decreases. The aforementioned reasons explain the bad BER performance of the LiFi system when either the radius of the attocell R increases or the field of view Ψ_c decreases.
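The floor of Corollary 1 can be checked by a Monte Carlo sketch that reuses the sample_stationary helper and the incidence geometry of (5) introduced earlier; the cell and height values below are illustrative assumptions, since Table II is not reproduced here.

```python
import numpy as np

# Estimate the high-power BER floor F_cos(psi)(cos(Psi_c)) / 2 of Corollary 1
# as half the probability that the AP falls outside the receiver's FOV.
rng = np.random.default_rng(2)
R, h_a, h_u = 2.5, 3.0, 1.0               # illustrative values (assumptions)
Psi_c = np.deg2rad(60.0)

# Restrict r to the attocell, cf. the truncation of r to [0, R] above.
r, alpha, Omega, theta = sample_stationary(200_000, R_e=R)
d = np.sqrt(r**2 + (h_a - h_u) ** 2)
cos_psi = (r * np.cos(Omega - alpha) * np.sin(theta)
           + (h_a - h_u) * np.cos(theta)) / d
p_out = np.mean(cos_psi < np.cos(Psi_c))  # outage prob. F_cos(psi)(cos(Psi_c))
print(f"estimated BER floor: {p_out / 2:.4f}")
```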
From a practical point of view, the above performance can be explained as follows. Recall that F_cos(ψ)(cos(Ψ_c)) = P(ψ > Ψ_c), which is literally the outage probability of the LiFi system, i.e., the probability that the UE is not connected to the AP even when it is inside the attocell. Obviously, for large values of R or small values of Ψ_c, the probability that the LiFi receiver is not connected to the AP increases. This is mainly due to the combined effects of the random location of the LiFi user and the random orientation of the UE, and it explains the bad BER performance in this case. The question that may come to mind here is how one can enhance the performance of the LiFi system in such a realistic environment. Recently, some practical solutions have been proposed in the literature to alleviate the effects of the random behaviour of the LiFi channel. These solutions include the use of MIMO LiFi systems along with transceiver designs that have high spatial diversity gains, such as the multidirectional receiver (MDR) [33], [40], the omnidirectional transceiver [41] and the angular diversity transceiver [42]. In the following subsection, we propose a new design of indoor LiFi MIMO systems that can alleviate the effects of the random location of the LiFi user along with the random orientation of the UE.
C. Design Consideration of Indoor LiFi Systems
The concept of optical MIMO systems has been introduced in practical LiFi systems, where multiple LiFi APs cooperate to serve multiple users within the resulting illuminated area [11], [34], [43], [44]. Each LiFi AP creates an optical attocell, and the illumination areas of adjacent attocells overlap with each other. Consider the indoor LiFi MIMO system shown in Fig. 9, which consists of five APs that correspond to small, adjacent attocells, each of radius R_c. The distance between the AP of the attocell in the middle (green attocell), which we refer to as the reference attocell, and the APs of the remaining adjacent attocells is D_c.
Let us assume that a LiFi user is located within the reference attocell, where all five APs serve this user by transmitting the same signal. One way to reduce the outage probability of the LiFi user, i.e., the probability that it is not connected to any of the APs, is through a well-designed attocell radius R_c and AP spacing D_c that guarantee a maximum target probability of error P_e^th, without any handover protocol or coordination scheme between the different APs. Fig. 10 presents the BER performance of a LiFi user located within the reference attocell, where the field of view of the UE is Ψ_c = 60°. Both the stationary and mobile cases are considered, and different values of R_c and D_c are evaluated. By comparing the results of this figure with those of Fig. 8, for the case when R = 1 m for example, we can see how the coexisting APs can significantly improve the BER performance of the system. In addition, we remark from Fig. 10 that the choice of (R_c, D_c) also has a big impact on the BER performance: for example, for the case of a stationary user, the best choice among the considered values is (R_c, D_c) = (1 m, 1.5 m), whereas for the case of a mobile user, the best choice is (R_c, D_c) = (1 m, 1 m). Overall, for a target probability of error P_e^th = 3.8 × 10^−3, we conclude that the choice (R_c, D_c) = (1 m, 1 m) is the best choice that guarantees the target performance jointly for both stationary and mobile users. Obviously, the optimal (R_c, D_c) depends on the geometry of the attocells as well as on the system parameters, such as the height of the AP h_a and the height of the UE h_u. This problem will be investigated in future works.
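To explore such choices of (R_c, D_c), the following sketch computes the aggregate LOS gain seen by a user served simultaneously by the five APs. The placement of the four neighbouring APs on the ±x and ±y axes at distance D_c is an assumption about the layout of Fig. 9, and the Lambertian form is the same assumed gain model as in the earlier sketch.

```python
import numpy as np

def aggregate_gain(x_u, y_u, theta, Omega, h_a, h_u, m, Psi_c, D_c, H0=1.0):
    """Aggregate LOS gain of the five-AP layout of Fig. 9.

    The reference AP sits above the cell centre; four neighbours sit at
    horizontal distance D_c on the +/-x and +/-y axes (assumed placement).
    All APs transmit the same signal, so the individual LOS gains add.
    """
    ap_xy = [(0.0, 0.0), (D_c, 0.0), (-D_c, 0.0), (0.0, D_c), (0.0, -D_c)]
    dz = h_a - h_u
    total = 0.0
    for x_a, y_a in ap_xy:
        dx, dy = x_u - x_a, y_u - y_a
        d = np.sqrt(dx**2 + dy**2 + dz**2)
        cos_phi = dz / d
        # incidence cosine, generalizing (5) to an AP at (x_a, y_a)
        cos_psi = ((dx * np.cos(Omega) + dy * np.sin(Omega)) * np.sin(theta)
                   + dz * np.cos(theta)) / d
        if cos_psi >= np.cos(Psi_c):
            total += H0 * (m + 1) / (2.0 * np.pi * d**2) * cos_phi**m * cos_psi
    return total
```

Sweeping (x_u, y_u, θ, Ω) over their distributions and thresholding the resulting SNR then allows a search for the smallest (R_c, D_c) that meets a target P_e^th.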
VI. CONCLUSIONS AND FUTURE WORKS
In this paper, novel, realistic, and measurement-based channel models for indoor LiFi systems have been proposed. The statistics of the LOS channel gain were derived for both stationary and mobile LiFi users, where the LiFi receiver is assumed to be randomly oriented. For stationary LiFi users, the MTL and MB models were proposed, whereas for mobile users, the SMTG and SMB models were proposed. The accuracy of each model was evaluated using the KSD. In addition, the effect of the random orientation and spatial distribution of LiFi users on their error performance was investigated based on the derived models. Our results showed that the random behaviour and motion of LiFi users have a strong effect on the LOS channel gain. Therefore, we proposed a novel design of indoor LiFi MIMO systems in order to guarantee the reliability required for dependable communication links.
The channel models proposed in this paper, albeit fundamental and original, serve as a starting point for developing realistic transmission techniques and transceiver designs tailored to real-world set-ups, in an effort to bring the deployment of LiFi systems closer than ever. Thus, investigating optimal transceiver designs and cellular architectures, based on the derived channel models, that can meet the high demands of 5G and beyond in realistic communication environments is a natural future research direction. In addition, the derived channel models are intended for downlink communication in indoor LiFi environments. Therefore, deriving similar models for uplink transmission and for outdoor environments should also be considered in future work.
where cos(ψ) = [r cos(Ω − α) sin(θ) + (h_a − h_u) cos(θ)]/d. Equality (40g) follows from the fact that the conditional probability under the integral in (40f) is not null if and only if cos(ψ) ≥ cos(Ψ_c). Furthermore, F_{cos(ψ)}(cos(Ψ_c)) in (40) is given in (41). Based on this, the corresponding PDF of the LOS channel gain H is obtained by differentiating equality (40g) with respect to h. Using the Leibniz integral rule for differentiation, the PDF of the LOS channel gain H is expressed as in (46), where f_Z is a PDF with support range [h*_min, h_max] associated to a random variable Z expressed as the product Z = XY. Note that X and Y are two random variables that reflect the effects of the random spatial distribution of the LiFi user and of the random orientation of the UE on the LOS channel gain, respectively. For the case of stationary users, the PDF of the random variable X follows from the standard PDF transformation of random variables. Obviously, X and Y are correlated, since both are functions of the distance d, which is itself a random variable. However, this correlation is weak in most cases. In fact, for high values of d, the effect of random orientation on the LOS channel gain H is negligible compared to that of the distance, whereas for low values of d, the effect of distance is negligible compared to that of random orientation. Due to this, as an approximation, we assume that the random variables X and Y are uncorrelated. Based on this, using the theorem on the PDF of the product of random variables [45], the PDF of the random variable Z can be approximated as f_Z(h) ≈ ∫ f_X(h/y) f_Y(y) |y|^{-1} dy, where f_Y denotes the PDF of the random variable Y and is given in (19) and (22) of [28]. Consequently, f_Z has the form f_Z(h) = ν f̃(h), where ν > 0 and f̃ is a function with support range [h*_min, h_max]. By substituting f_Z(h) in (47) by its expression and defining the function g, for h ∈ [h*_min, h_max], as g(h) = [1 − F_{cos(ψ)}(cos(Ψ_c))] f̃(h), we obtain the result of Theorem 2, which completes the proof.
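The uncorrelated-product approximation used above rests on the standard product-of-random-variables formula cited from [45]. The following sketch numerically checks that formula for toy lognormal choices of f_X and f_Y; these are placeholders for illustration, not the actual distance and orientation distributions of the proof.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(1)

# Toy stand-ins for f_X and f_Y (both supported on y > 0).
fX = stats.lognorm(s=0.4)
fY = stats.lognorm(s=0.25)

def f_Z(z):
    """Product density of independent RVs:
    f_Z(z) = integral of f_X(z/y) f_Y(y) / |y| dy."""
    val, _ = integrate.quad(lambda y: fX.pdf(z / y) * fY.pdf(y) / y,
                            1e-6, 20.0)
    return val

# Monte Carlo histogram of Z = X * Y versus the formula.
n = 500_000
z_samples = fX.rvs(n, random_state=rng) * fY.rvs(n, random_state=rng)
hist, edges = np.histogram(z_samples, bins=50, range=(0.1, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
err = max(abs(hist[i] - f_Z(c)) for i, c in enumerate(centers))
print(f"max |MC - formula| over the grid: {err:.3e}")  # small; MC/binning noise
```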
APPENDIX C PROOF OF THEOREM 3
Using the same notation adopted in Appendix B, the PDF of the random variable X for the case of mobile users follows from the same PDF transformation, as in (49b). Therefore, the PDF of the random variable Z can be approximated by the same product formula, which here takes the form f_Z(h) = Σ_{j=1}^{3} ν_j f̃_j(h), where, for j = 1, 2, 3, ν_j > 0 and f̃_j is a function with support range [h*_min, h_max] that is expressed, up to constants involving (m + 2) and R^{b_j+1}, through an integral of the form ∫ y^{b_j + (m+4)/(m+2)} f_Y(y) dy.
APPENDIX D PROOF OF COROLLARY 1
Based on the PDF of the LOS channel gain H provided in Theorem 1 and the expression for the probability of error of M-pulse amplitude modulation [46]–[48], the average probability of error of the considered LiFi system is expressed as the average of the instantaneous probability of error P_{e,h}(P_opt, h) over the channel gain h, where γ_TX = P_elec/σ² = P_opt²/σ² is the transmitted signal-to-noise ratio, such that P_elec is the transmitted electrical power and σ² is the average noise power at the receiver. Now, since the integrand h ↦ g_H(h) P_{e,h}(P_opt, h) is a smooth function within [h*_min, h_max], and using Lebesgue's dominated convergence theorem, we conclude that lim_{P_opt→∞} P_e(P_opt) = F_{cos(ψ)}(cos(Ψ_c))/2, which completes the proof.
Next-to-eikonal corrected double graviton dressing and gravitational wave observables at O(G²)
Following a recent proposal to describe inelastic eikonal scattering processes in terms of gravitationally dressed elastic eikonal amplitudes, we motivate a collinear double graviton dressing and investigate its properties. This is derived from a generalized Wilson line operator in the worldline formalism by integrating over fluctuations of the eikonal trajectories of external particles in gravitationally interacting theories. The dressing can be expressed as a product of exponential terms: a coherent piece with contributions to all odd orders in the gravitational coupling constant and a term quadratic in graviton modes, with the former providing classical gravitational wave observables. In particular, the coherent dressing involves O(κ³) subleading double graviton corrections to the Weinberg soft factor. We use this dressing to derive expressions for the waveform, radiative momentum spectrum and angular momentum. In a limiting case of the waveform, we derive the nonlinear memory effect resulting from the emission of nearly soft gravitons from a scattering process.
Gravitational wave observables follow from classical limits of scattering amplitudes. In this regard, there are on-shell amplitude [15] and effective field theory [36] formalisms, the worldline formalism [11], the KMOC formalism [8] and the eikonal approximation [82], to name a few of the consistent approaches for deriving these observables. The eikonal approximation is based on a suitable Fourier transform of scattering amplitudes to impact parameter space and involves a resummation over graviton exchanges. This manifests in an eikonal phase with an all-loop order expansion, from which classical PM observables can be derived. While the eikonal phase is real up to tree and one-loop amplitudes, it has a pure imaginary contribution beginning at two loops, a consequence of radiative (and, in general, inelastic) effects. This has motivated a generalized eikonal operator ansatz wherein eikonal amplitudes involving inelastic exchanges can be described as elastic eikonal amplitudes with a coherent graviton dressing operator [26,29,83,84]. The infrared divergent contribution in the imaginary part of the eikonal is relevant for the 3PM result, in that it is equivalent to the 3PM radiation reaction contribution from the real part of the eikonal and the scattering angle [20,23,24,81,82,85]. Additionally, the infrared divergent contribution is simply the Weinberg soft graviton factor and has led to a conjectured relationship between soft factors and radiation reaction [81]. A coherent dressing constructed from the Weinberg soft graviton factor was considered in [26,29] and recovers the O(G) expression for the leading memory effect, as well as contributions to the 3PM energy spectrum and angular momentum in the ω → 0 limit. The 3PM static contributions result from the O(G) expressions by carrying out the sum over external particles in terms of the impulse up to 2PM order [30].
Inelastic eikonal amplitudes at three loops and higher have thus far not been derived. The gravitational dressing at higher loops can be expected to involve generalizations of the coherent dressing to multiple graviton modes and would be required for 4PM and higher gravitational wave observables. In this paper, we explore the eikonal operator ansatz up to double graviton corrections of the Weinberg soft graviton factor by utilizing the soft factorization of eikonal amplitudes. The Weinberg soft graviton factor is universal for all Lorentz invariant amplitudes and thereby provides the unique leading soft contribution from a single graviton dressing [86]. However, apart from the first subleading single soft graviton factor [87], more subleading soft factors and multiple soft particle contributions are theory-specific and involve loop corrections. Hence, the generalization of the single graviton Weinberg dressing in [26,29] to multiple gravitons must be specifically derived from the amplitudes under consideration, which in our case are eikonal amplitudes. In this context, we note that the generalized Wilson line (GWL) approach, based on the Schwinger formalism for propagators, derives soft factors for eikonal amplitudes by taking into account corrections to the eikonal trajectories of hard external particles [88,89]. Besides the eikonal Weinberg single soft graviton factor, the GWL also involves 'next-to-eikonal' (NE) corrections from subleading multiple soft graviton contributions, which result from integrating over fluctuations about the straight-line trajectories of external particles in the amplitude. The classical universal terms in the corrected soft factor can be identified in the ℏ → 0 limit. As such, the leading NE correction provides the classical limit of soft factors involving a double graviton vertex for eikonal amplitudes, which can be considered as a subleading correction to the single vertex Weinberg soft graviton factor. We also note that worldline techniques have recently led to a worldline quantum field theory approach to derive gravitational wave observables from gravitationally interacting binary systems [11,64,67]. This approach also involves a similar integration over fluctuations about the asymptotic straight-line trajectories of the external particles. The worldline quantum field theory and GWL approaches agree in their classical limits for spinless external states [89].
We consider the GWL by including the leading NE double graviton correction, to find a real-graviton dressing operator similar to squeezed coherent states [90], but with continuous frequency modes. In addition, the dressing a priori is not manifestly gauge invariant, but can be made so by requiring the two gravitons therein to be collinear. We accordingly find a collinear double graviton dressing that can be expressed as a product of two exponential operators. One is quadratic in graviton modes, while the other is a coherent dressing operator containing corrections to all odd powers in κ = √(8πG), with G Newton's constant. We use this dressing to define eikonal amplitudes involving inelastic exchanges up to NE collinear double graviton emissions, and establish that only the coherent term contributes to classical gravitational wave observables. Due to involving corrections to all odd powers in κ, the coherent term in the dressing can be used to derive O(G²) (2PM and higher PM) gravitational wave observables.
We assume this dressing provides the imaginary part of the eikonal phase in inelastic eikonal amplitudes. This allows us to derive O(G²) observables from the κ³ collinear double graviton correction in the coherent dressing. The observables we consider are the waveform, emitted momentum spectrum, and angular momentum, which are derived from expectation values with respect to the corresponding soft graviton dressing states. As in the case of observables derived using the Weinberg soft factor dressing in [26,29], our results are sensitive to the i0 prescription in the soft factor poles for each graviton. We find O(G²) results for the memory, emitted momentum spectrum and angular momentum that depend linearly on an upper cut-off ω on the sum over the frequencies (energies) of the two gravitons. While these results correspond to a leading soft expansion, they are not 'static', in that they vanish in the strict ω → 0 limit. We also demonstrate that in an appropriate ultrarelativistic limit of the external particles, the O(G²) memory approximates the nonlinear memory effect following the emission of a nearly soft graviton with frequency ω. This provides a consistency check on the double graviton vertex representing the emission of a graviton from a particle following its recoil from a preceding emission.
The organization of our paper is as follows. In Sec. 2, we review the GWL formalism to derive a double graviton dressing from NE corrections of external particles in eikonal amplitudes. We substitute real graviton modes and define a manifestly gauge invariant collinear double graviton dressing. Using the Baker-Campbell-Hausdorff (BCH) formula, the dressing is expressed as a product of exponential operators that are linear and quadratic in graviton modes, respectively. The coherent contribution, i.e., the exponential operator involving one graviton mode, is shown to involve corrections of the Weinberg soft factor at all odd powers in κ. In Sec. 3, we consider the dressing as the imaginary contribution in the eikonal phase of inelastic eikonal amplitudes and discuss the expectation values of graviton mode operators with respect to the corresponding dressing states. Non-vanishing classical limits of expectation values are shown to result only from the coherent operator, with O(G²) gravitational wave observables resulting from O(κ³) contributions in the coherent term. In Sec. 4, we derive results for the waveform, emitted momentum spectrum, and angular momentum. We conclude with a discussion on future directions. Some technical details about the general form of the NE gravitational dressing factor and the kinematic form factors for the gravitational wave observables are provided in the Appendices.
Double graviton dressing
We are interested in double graviton contributions to soft factors consistent with the eikonal approach in a gravitationally interacting scalar field theory. Based on the worldline approach for propagators of external particles, we can derive the soft graviton dressing factor as the GWL from a soft expansion of the background gravitational field. The formalism generally provides a dressed propagator with subleading corrections of double and higher graviton vertices to the leading, universal Weinberg soft factor.
In this section, we first review the GWL derivation and the essential properties of the dressed propagator. We then substitute real graviton modes in the dressing operator to derive a subleading double graviton dressing for eikonal scattering processes. Unlike the single graviton Weinberg soft factor, we demonstrate that the double graviton vertex contribution in the dressing is not gauge invariant. We remedy this by considering a collinear limit to construct a gauge invariant double graviton dressing relevant for gravitational wave observables.
Eikonal dressing from generalized Wilson line
In this subsection, we review the soft dressing for propagators arising from a worldline approach to field theories minimally coupled to gravity, following [88,89], to which we refer for further details. The approach is based on the Schwinger proper time formalism, which expresses propagators in quantum field theory as path integrals in quantum mechanics. More specifically, the propagator can be expressed in terms of integrals over the proper time of the trajectory of the external particle. The background is considered in a PM expansion, while fluctuations about the particle trajectories are considered about asymptotic eikonal trajectories. This leads to the derivation of a dressed propagator, with the dressing factor containing multiple vertex contributions from the emission of soft gravitons.
As in [88,89], we consider a minimally coupled massive scalar field coupled to gravity through the action in Eq. (2.1). Introducing a weak-field PM expansion of the background metric about flat spacetime, as in Eq. (2.2), with κ² = 8πG, and expanding Eq. (2.1) up to second order in h_µν and its inverse, we find an action whose Hamiltonian H is given in Eq. (2.3), where all indices are contracted with the flat spacetime metric. The expression in Eq. (2.3) results from replacing ∂_µ on the scalar field with ip_µ, with the px-ordering keeping all momenta to the left¹. Hence, turning H into the Hamiltonian operator Ĥ, we note that it governs the evolution of a scalar particle in the background field. This interpretation is made manifest by expressing the dressed scalar propagator (Ĥ − i0)^{−1} (with the i0 prescription chosen for causality) in the Schwinger representation of Eq. (2.5), as a double path integral over position and momentum space with initial position x_i and final momentum p_f, where T is the Schwinger parameter.
To derive the soft graviton dressing within the eikonal approximation, one considers the worldline of a freely propagating scalar particle subject to small fluctuations arising from background soft gravitons, i.e., the perturbed trajectory of Eq. (2.6),
with x(t) and p(t) representing perturbations about the straight-line classical trajectory of the particle, which are subject to the boundary conditions p(T) = 0 = x(0). The case with x_i = 0 in Eq. (2.6) provides a convenient choice to evaluate Eq. (2.5), while x_i ≠ 0 would provide orbital angular momentum contributions of the external particle [88,89,91]. In the following, we adopt the x_i = 0 choice for the external particle trajectories. As the unperturbed trajectory results in a strict soft graviton exchange limit, the perturbations allow for a systematic derivation of the eikonal limit and sub-eikonal corrections. The latter result from the external particle recoil due to emitted gravitons and, in general, produce contributions corresponding to multiple graviton vertices. To quadratic order in h_µν, we have the leading next-to-eikonal (NE) correction, which provides a double graviton emission vertex contribution to the soft graviton dressing. This is derived as a generalized Wilson line (GWL) from the amputated dressed scalar propagator [89,92], whose limit in Eq. (2.7) is the soft dressing of an asymptotic state. The expression in Eq. (2.7) requires integrating over p and x. The p integral is Gaussian, while the integral over x requires the use of correlators which follow from the Green's function for x^µ(t). The evaluation of Eq. (2.7) provides the dressing in Eq. (2.10). The above discussion for deriving the GWL of a scalar particle is subtly different from the worldline quantum field theory formalism [11,64,67], in which the dressed propagator is utilized as an efficient tool in reorganizing the Feynman rules for evaluating scattering amplitudes. On the other hand, the GWL of Eq. (2.10) encodes the collective effect of the background field as a soft dressing associated with the external particles in an amplitude. Consider the scattering of N particles with asymptotic momenta p^µ_n = η_n(E_n, p⃗_n), where η = +1 for outgoing particles and η = −1 for incoming particles in the all-outgoing convention. Then the overall soft graviton dressing operator, denoted by e^{−∆}, is simply the product of GWL dressings on each external particle and is given by Eq. (2.11). We will next consider real graviton modes in the eikonal dressing operator and study its properties.
Eikonal dressing operator with real graviton modes
We consider two graviton modes with their wavenumbers parametrized as in Eq. (2.12), k^µ = ω_k q^µ_k and l^µ = ω_l q^µ_l with q^µ_k = (1, k̂) and q^µ_l = (1, l̂), where ω_k and ω_l denote their frequencies, k̂ and l̂ their spatial unit vectors indicating orientations, and q²_k = 0 = q²_l. The graviton polarization tensors will be denoted by ε_{i,µν}(k) and ε_{j,ρσ}(l). We then have the conventional mode expansions of Eq. (2.13) and Eq. (2.14) for two real graviton modes, where repeated Latin indices without an explicit summation symbol denote a sum over polarizations. The on-shell integration measure has the explicit form of Eq. (2.15). The indicated ω_k integral in Eq. (2.15) is over a soft region with an upper cut-off frequency ω, interpreted as the maximal resolution of the detector in the sense of Bloch and Nordsieck for dealing with IR divergences [93]. When we integrate over several real gravitons, the Heaviside step function argument gets replaced by one with the sum over all ω_k being less than ω.
Following the conventions of [26,29,30]², the graviton annihilation and creation operators, respectively a_i and a†_i in Eq. (2.13) and Eq. (2.14), satisfy the commutation relation of Eq. (2.16), with δ_ij the Kronecker delta for the polarization indices and δ(k⃗, k⃗′) the Dirac delta function defined with respect to the on-shell measure for real gravitons, Eq. (2.17). To evaluate Eq. (2.11), we consider two soft graviton dressing modes by setting x^µ = p^µ_n t and x′^µ = p^µ_n s in Eq. (2.13) and Eq. (2.14), respectively. This gives Eq. (2.18), relevant in this paper for the derivation of classical expectation values from the gravitational dressing. We can go to a momentum description to find expressions with all ℏ dependencies manifest. In the adopted conventions, by replacing the wavenumber ω_k q^µ_k with k^µ/ℏ, where k^µ is the corresponding graviton momentum (and similarly l^µ for the other graviton), we recover momentum expressions consistent with [8,65,83]. The graviton polarization tensors are the same in the wavenumber and momentum descriptions.
Substituting Eq. (2.18) in Eq. (2.11), the integrals over time are evaluated using standard relations. This then yields Eq. (2.23), with the factors f_i(k), Ã_ij(k,l) and B̃_ij(k,l) defined in Eq. (2.26), Eq. (2.27) and Eq. (2.28), respectively. The leading contribution to the dressing, from Eq. (2.24), is the Weinberg soft graviton dressing considered in [26,30,86]. The contribution from Eq. (2.25) is the double graviton contribution to the soft factor derived from the GWL approach to eikonal scattering.
In arriving at Eq. (2.23), we have omitted a vanishing exponential factor denoted as ∆_Remainder, which is formally given by Eq. (2.29), with m_ij(k,l) and n_ij(k,l) defined in Eq. (2.30). Upon evaluating the integral in Eq. (2.29) by utilizing the delta function δ(k⃗, l⃗), the contributions from m_ij(k,l) and n_ij(k,l) cancel out, leading to a vanishing result.
Hence, the dressing derived from the worldline approach is simply given by Eq. (2.23). It is a unitary operator, in that [exp(−∆̃_1 − ∆̃_2)]† = exp(∆̃_1 + ∆̃_2). (2.32) However, unlike the Weinberg dressing factor ∆̃_1 of Eq. (2.24), we will see that the double graviton dressing factor ∆̃_2 of Eq. (2.25) is not gauge invariant. Next, we address this issue to derive the relevant double graviton dressing of the eikonal process.
Gauge invariant dressing from collinear limit
An important property of the Weinberg soft factor is its gauge invariance where physical observables are concerned. In the following, we discuss this property and consider it for the double graviton contribution. To this end, we implement the gauge transformation of the graviton modes of Eq. (2.13) and Eq. (2.14) through the shifts ε_{i,µν}(k) → ε_{i,µν}(k) + q_{k,(µ} ξ_{ν)} and ε_{j,ρσ}(l) → ε_{j,ρσ}(l) + q_{l,(ρ} ζ_{σ)} of their polarization tensors, respectively, where k^µ and l^σ take the form of Eq. (2.12), and ξ^µ and ζ^σ are reference vectors required to satisfy ξ · q_k = 0 and ζ · q_l = 0. This condition ensures that the transverse and traceless properties of the polarization tensors are respected.
It is straightforward to see that the gauge invariance of the Weinberg soft dressing factor Eq. (2.26) follows from the momentum conservation of the external hard particles, Σ_n p^µ_n = 0, in the all-outgoing convention. Hence, the single graviton contribution to the dressing from Eq. (2.23) is gauge invariant.
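The one-line check behind this statement is standard; the display below is a schematic version written for the eikonal Weinberg factor in the notation used here.

```latex
% Under the polarization shift
% \varepsilon_{\mu\nu} \to \varepsilon_{\mu\nu} + q_{k(\mu}\xi_{\nu)},
% the Weinberg soft factor
% \sum_n \kappa\, \varepsilon_{\mu\nu} p_n^\mu p_n^\nu / (p_n \cdot q_k)
% shifts by
\begin{equation*}
\sum_n \kappa\,\frac{(q_k \cdot p_n)(\xi \cdot p_n)}{p_n \cdot q_k}
 \;=\; \kappa\,\xi_\mu \sum_n p_n^\mu \;=\; 0 ,
\end{equation*}
% which vanishes by momentum conservation in the all-outgoing
% convention, independently of the reference vector xi.
```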
We may similarly consider gauge transformations of the double graviton contributions from Eq. (2.27) and Eq. (2.28). The resulting gauge transformations of Ã_ij(k,l) and B̃_ij(k,l) can be investigated by contracting Ã_µνρσ(k,l) and B̃_µνρσ(k,l) with k^µ ξ^ν in the first two indices and with l^ρ ζ^σ in the last two indices. We find the expressions in Eq. (2.35). As the terms in Eq. (2.35) do not vanish, the double graviton terms in Eq. (2.27) and Eq. (2.28) are not gauge invariant. One way this can be remedied, which is particularly appropriate in the context of asymptotic gravitational wave observables, is to consider a collinear limit wherein k̂ = l̂. This can be formally implemented through the use of a delta function over the angular variables, δ(Ω_k, Ω_l), which satisfies Eq. (2.36). We accordingly define the collinear quantities A_ij(k,l) and B_ij(k,l) in Eq. (2.37), where Ã_µνρσ(k,l) and B̃_µνρσ(k,l) are as previously defined in Eq. (2.27) and Eq. (2.28), and the factor of (2π)² has been included to account for the corresponding inverse factor present in the on-shell graviton measure. The gauge transformations of A_ij(k,l) and B_ij(k,l) in Eq. (2.37) can be investigated through contractions with A_µνρσ(k,l) and B_µνρσ(k,l), as in Eq. (2.35). For the first equation in Eq. (2.35) we now find Eq. (2.38), with similar transformations for the other equations in Eq. (2.35). The collinear limit ensured by δ(Ω_k, Ω_l) allows us to reduce p_n·k / p_n·(k ± l) to the n-independent form ω_k/(ω_k ± ω_l), and to interchange q_k and q_l freely in Eq. (2.35), thus resulting in Eq. (2.38). Note that the first term in the parenthesis of Eq. (2.38) vanishes due to ξ·q_k = 0 = ζ·q_l, while the last two terms vanish by momentum conservation. Hence, the collinear modification results in the gauge invariance of the double graviton soft factor under shifts of the polarization tensor.
The definition in Eq. (2.37) is thus gauge invariant, and we can now change the dressing in Eq. (2.23) to the form of Eq. (2.39), with no modification to ∆̃_1 in Eq. (2.24), and with ∆_2 being the collinear limit of ∆̃_2 of Eq. (2.25). Hence ∆̃_1 is the Weinberg soft graviton factor, while ∆_2 provides a κ² correction due to two collinear gravitons. We note that Eq. (2.39) can also be derived from the analysis in the previous subsection by considering Eq. (2.14) with a δ(Ω_k, Ω_l) in the integrand to ensure collinearity.
We make some general remarks on the above in light of soft factor results in the literature.
It is known that the double soft graviton factor for field theories is manifestly gauge invariant [94–97]³. From a perturbative field theory perspective, we expect gauge invariance if all terms at the same order in the coupling constant are included. The terms appearing in Eq. (2.27) and Eq. (2.28) capture recoil effects on the external particles due to two soft gravitons. More specifically, these represent the seagull and Born contributions noted in [96]. In addition to these terms recovered by the GWL approach, there are contributions from two successive single graviton emissions and a double graviton pole, and these terms are required for the gauge invariance of the complete double soft graviton factor. As the GWL soft factor derivation only made use of external particle trajectories in a background gravitational field, and not the dynamics of gravitons, these additional pieces are absent in the GWL result. Thus collinearity, and more generally the relative orientations of the gravitons, do not appear to have their dynamical origin in the GWL approach. We find that the contributions in Eq. (2.27) and Eq. (2.28) can be rendered gauge invariant by imposing collinearity and hence restricting their relative orientations. This reduces the double soft graviton factor contribution to that in Eq. (2.39), which only involves a double graviton vertex or, equivalently, a collinear seagull contribution. The substitution of collinear gravitons in the GWL soft factor may thus be considered a natural requirement to derive a manifestly gauge invariant dressing to NE orders.
We have also not accounted for orbital angular momentum contributions, as noted through our choice of x_i = 0 in the parametrized eikonal trajectory of Eq. (2.6). In [91], the GWL formalism was applied to eikonal scattering amplitudes in QCD and gravity, with the Wilson line dressing considered up to the first single soft subleading correction with a non-vanishing initial offset x_i ≠ 0. This results in a next-to-soft corrected eikonal amplitude, resulting from internal hard contributions as well as external soft emissions. The latter exponentiate and involve single soft graviton corrections (still at order κ) of the known Weinberg soft graviton dressing. The internal contributions, however, do not exponentiate and can be determined by the action of the angular momentum operator on the leading eikonal amplitude. The complete next-to-leading order correction of the amplitude in the Regge limit follows from the combined contributions of internal and external emissions, and establishes the angular momentum conservation associated with the subleading single soft factor. Interestingly, the internal emission graphs include those with seagull and triple graviton vertices, along with contributions from momentum shifts of the external particles in the amplitude. However, the seagull and other internal emission graphs in [91] are distinct from those considered in this section in two ways. First, the next-to-leading order corrected Born amplitude in [91] involves a single external graviton, while Eq. (2.39) involves two external gravitons, with the corresponding double soft factor sensitive to the sum over their momenta. Secondly, the internal emission contributions in the GWL approach do not exponentiate, while Eq. (2.39) is nothing but the GWL dressing to κ² order in a collinear limit due to external graviton emissions. Our expression in Eq. (2.39) follows from Eq. (2.10), whose κ² double graviton corrections are precisely the quadratic-in-coupling-constant terms not considered in the external emission contributions of [91] (as their analysis was carried out to one-loop order).
Expectation values of radiative observables
Following the eikonal operator approach in [26,29,30,82], we now identify the dressing of the previous section as that for the full S-matrix of the hard elastic eikonal scattering process, S ≈ e^{iRe2δ} e^{−∆} (3.1), where the real part of the eikonal phase δ is included, in addition to the GWL soft factor ∆ as its imaginary part. However, the exact form of Re2δ will not be relevant for our consideration of gravitational wave observables. If we consider Eq. (3.1) in impact parameter space, the external particle momenta in the soft factor should be appropriately identified with derivatives with respect to the impact parameter [26,29,30,82]. We briefly explain the '≈' symbol in Eq. (3.1). The complete eikonal amplitude involves an overall factor of (1 + 2i∆(σ, b)), where ∆(σ, b) is a quantum remainder. However, the quantum remainder does not contribute to classical expectation values and thus has not been explicitly considered in the above expression. Additionally, the general result follows from a Fourier transform of the eikonal amplitude from momentum space to impact parameter space. The eikonal operator can involve both soft and non-soft graviton modes, with the latter sensitive to the integration over the conjugate distance x of the exchanged momentum Q in the Fourier transform [30]. Hence, while there can exist finite frequency graviton contributions to the dressing, in Eq. (3.1) we consider only contributions from low-frequency gravitons resulting from the soft factor for all eikonal amplitudes.
The leading order κ coherent dressing results from an imaginary contribution to the eikonal phase at two loops. Given the κ² dependence in ∆, we expect the above dressing to correspond to three-loop (and higher) eikonal amplitudes. However, inelastic eikonal amplitudes at these orders have thus far not been derived. We may nevertheless use Eq. (3.1) to derive radiative gravitational wave observables following the prescription for soft dressings in [26,29]. In this section, we argue that classical radiative observables can be expressed in terms of the expectation values of the corresponding operators in a coherent state. In particular, this coherent state contains the O(κ³) corrections in its exponential factor, which can be used to derive higher PM gravitational wave observables.
We begin by applying the BCH formula on Eq. (2.39) to factorize it into a product of exponential dressings involving single and double graviton modes⁴, e^{−∆} = e^{−∆_1} e^{−∆_2} (3.2), where the subscript k of ∆_k denotes the number of dressing gravitons. ∆_2 is the 2-mode κ² operator in Eq. (3.3). The ∆_1 term, on the other hand, is a 1-mode coherent state exponent containing terms at all odd orders in κ, and its explicit form is provided in Appendix A. The 1-mode coherent term ∆_1, up to its κ³ correction, is ∆_1 = ∆̃_1 + ∆_1^{κ³} + O(κ⁵), with ∆̃_1 as in Eq. (2.24) and ∆_1^{κ³} being the κ³ correction to the coherent dressing.
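The mechanism behind this factorization can be sketched with elementary operator algebra. The display below is a schematic disentangling identity consistent with the structure quoted here; the paper's Appendix A fixes the precise form, so this is an illustration rather than the derivation itself.

```latex
% Disentangling a linear (coherent) generator A from a quadratic
% (squeeze-type) generator B. Since ad_B maps linear operators to
% linear operators and [A, ad_B^n(A)] is a c-number, the Magnus
% series terminates and
\begin{equation*}
e^{A+B} \;=\;
\exp\!\Big(\sum_{n\ge 0}\tfrac{1}{(n+1)!}\,\mathrm{ad}_B^{\,n}(A)\Big)\,
e^{B} \times (\text{c-number phase}),
\qquad \mathrm{ad}_B(X) := [B, X].
\end{equation*}
% With A = -\tilde{\Delta}_1 = O(kappa) and B = -\Delta_2 = O(kappa^2),
% each application of ad_B adds two powers of kappa, so the coherent
% exponent collects contributions at kappa, kappa^3, kappa^5, ...
```

This makes manifest why the coherent exponent ∆_1 receives corrections only at odd powers of κ, the first of which is the ∆_1^{κ³} term above.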
The factorized dressing Eq. (3.2) allows us to identify the relevant contributions for classical observables. The dressing is similar to squeezed coherent states [90], generalized to continuous frequency modes and a 2-mode squeezing operator. One might thus expect a contribution from ∆_2 for certain classical observables. However, for the radiative observables considered in this paper, we establish that contributions from the 2-mode dressing ∆_2 are O(ℏ) due to the normal ordering, and hence can be disregarded where only classical observables are concerned. Therefore, while Eq. (2.39) and Eq. (3.2) are equivalent, the latter manifests the NE double graviton contributions in the gravitational dressing in a 1-mode coherent form relevant for classical observables at all odd powers in κ.
For an operator Q comprising only graviton modes, we have the identity of Eq. (3.4). Considering Eq. (3.4) with Eq. (2.39) on the graviton creation and annihilation operators, we obtain Eq. (3.5) and Eq. (3.6), in which the terms involving combinations of f_i with A_ij and B_ij result from the successive action of the single graviton mode over the double graviton mode operator from the unfactorized dressing Eq. (2.39). This is equivalent to the result from the factorized dressing Eq. (3.2): the first lines of Eq. (3.5) and Eq. (3.6) come from the 1-mode coherent dressing ∆_1 in Eq. (3.2), while the second lines are from the two-mode dressing ∆_2. A general expression of the above transformations on graviton creation and annihilation modes to all orders in κ is provided in Appendix A.
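The identity referenced as Eq. (3.4) is plausibly of the standard operator conjugation (Hadamard) type; a schematic statement in the notation of the surrounding text reads as follows.

```latex
\begin{equation*}
e^{\Delta}\, Q\, e^{-\Delta}
  \;=\; Q + [\Delta, Q] + \tfrac{1}{2!}\,[\Delta, [\Delta, Q]] + \dots
  \;=\; \sum_{n \ge 0} \frac{1}{n!}\,\mathrm{ad}_{\Delta}^{\,n}(Q).
\end{equation*}
% For Q = a_i(k), the commutator with the linear (coherent) part of
% Delta produces a c-number shift, while the commutator with the
% quadratic part returns an operator linear in the modes, generating
% the f_i, A_ij and B_ij structures in Eq. (3.5) and Eq. (3.6).
```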
We denote the graviton vacuum by |0⟩, and the dressed graviton vacuum by e^{−∆}|0⟩. Classical gravitational wave observables in the known approaches are associated with the expectation value of Q in the out-state S|0⟩. Due to our consideration of graviton mode operators Q and the fact that Re2δ is a c-number phase, this reduces to the expectation value with respect to the dressed graviton vacuum, which we denote by ⟨Q⟩_∆, as in Eq. (3.7). We also define expectation values with respect to the single graviton dressed state, i.e., the coherent state, as in Eq. (3.8). In the following, we establish that all classical observables associated with graviton mode operators Q satisfy ⟨Q⟩_∆ = ⟨Q⟩_{∆_1} + O(ℏ). As we do not consider hard particle operators, the expectation values of Q will be taken to be the corresponding radiative observable. Let us now consider specific cases of radiative observables, such as the waveform, represented by linear combinations of single graviton modes. For such observables, we can directly consider the expectation values of Eq. (3.5) and Eq. (3.6) to find Eq. (3.9) and Eq. (3.10), which are entirely determined by the coherent contributions in the first lines of Eq. (3.5) and Eq. (3.6), with the double graviton dressing corrections included, as expected. Later, these expectation values will be adopted to evaluate the waveforms of the emitted soft gravitons.
We next consider the operator Q_2 of Eq. (3.11), involving two graviton modes, with C(k) a function of the wavevector k. The graviton number operator is realized with C(k) = ℏ^{−1}, while the radiated momentum for a single graviton results from C(k) = k^µ in our conventions.
The expectation value of Eq. (3.11) can now be determined using Eq. (3.5) and Eq. (3.6). We find the decomposition into Eq. (3.13) and Eq. (3.14). The (quantum) remainder ⟨Q_2⟩^Remainder_∆ of Eq. (3.14) is non-vanishing; however, it is subleading in O(ℏ) when compared to ⟨Q_2⟩_{∆_1} of Eq. (3.13). The additional ℏ factor arises from normal ordering when evaluating Eq. (3.14) through the commutation relations for the mode operators in Eq. (2.16) and Eq. (2.17). Hence, in the classical limit ℏ → 0, only ⟨Q_2⟩_{∆_1} contributes, which comes from the 1-mode dressing ∆_1. For the examples mentioned previously, when Q_2 is the graviton number operator, ⟨Q_2⟩_{∆_1} ∼ ℏ^{−1} diverges in the classical limit, as expected for coherent graviton emissions. For C(k) = k^µ, ⟨Q_2⟩_{∆_1} gives the radiated momentum of the soft gravitons. In both cases, we see that the double graviton dressing functions A_ij, B_ij and their complex conjugates provide κ⁴ corrections through their contractions with the leading Weinberg soft factor f_i.
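For orientation, the coherent-state algebra behind Eq. (3.13) is elementary; a schematic version for a single continuum mode family, with the shift written loosely as e^{∆_1} a_i(k) e^{−∆_1} = a_i(k) + f_i(k), reads as follows (dμ(k) stands for the on-shell measure).

```latex
\begin{equation*}
\big\langle a_i^{\dagger}(k)\, a_j(k') \big\rangle_{\Delta_1}
  = f_i^{*}(k)\, f_j(k'),
\qquad
\big\langle Q_2 \big\rangle_{\Delta_1}
  = \sum_i \int \mathrm{d}\mu(k)\, C(k)\, |f_i(k)|^{2}.
\end{equation*}
% The normal-ordered quadratic operator acquires no extra power of
% hbar in the coherent state, whereas the normal-ordering commutators
% in the 2-mode piece carry the additional hbar that suppresses the
% remainder of Eq. (3.14) in the classical limit.
```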
We lastly address the expectation value of Eq. (3.1) with respect to the graviton vacuum, Eq. (3.15). The expectation value of the dressing S-matrix is non-vanishing and can be interpreted through the imaginary contribution to the eikonal phase, i.e., ⟨0|S|0⟩ = e^{i2δ} with 2δ = Re2δ + iIm2δ [26]. The result for Im2δ follows from the normal ordering of the operators in ∆_1 and ∆_2, and along with the BCH formula we find Eq. (3.16), with Im2δ_{κ²} in Eq. (3.17) the κ² contribution from two Weinberg soft gravitons [26], while Im2δ_{κ⁴} of Eq. (3.18) is a κ⁴ contribution from the coherent dressing due to the contraction of the double graviton with two Weinberg soft gravitons. In Eq. (3.16), the O(κ⁵) terms are subleading κ contributions from the coherent dressing, while the O(ℏ⁰) terms are the superclassical contributions from e^{−∆_2}. The O(ℏ⁰) contributions can likely be identified with super-Poissonian statistics. On the other hand, the ℏ^{−1} terms in Eq. (3.16) contribute to the usual Poissonian statistics associated with a coherent dressing, along the lines discussed in [26,84]. Due to the presence of κ⁴ (and O(κ⁵)) corrections in Im2δ, we now expect a relationship with the O(G²) emitted momentum and energy spectrum. We return to this in the following section.
Gravitational wave observables
We now address and evaluate specific gravitational wave observables resulting from Eq. (3.2) for scattering events of scalar particles with a soft graviton dressing. For the external massive particles, we adopt an all-outgoing convention with the parametrization p^µ_a = η_a(E_a, p⃗_a) of Eq. (4.1), with η_a = +1 (η_a = −1) for outgoing (ingoing) massive particles and p²_a = −m²_a. The graviton momentum will be parametrized as k^µ = ω_k q^µ_k, as given in Eq. (2.12). This results in the expressions of Eq. (4.2). For two external particles, we can construct the Lorentz invariant quantities of Eq. (4.3), and our results will be expressed in terms of them. In evaluating the classical gravitational-wave observables from expectation values, one also deals with contractions over graviton polarization tensors that form the transverse and traceless projection operator defined in Eq. (4.5), with λ^µ a reference vector. This projection operator, by definition, depends on the orientation k̂ of the on-shell gravitons, which obeys Eq. (4.7). Moreover, it is transverse to the momentum and reference vector, as expressed in Eq. (4.8). Due to the collinear limit in the double graviton dressing, all projection operators in the final result are along a single orientation, which we take to be k̂. Thus, we can express F_µν in Eq. (2.26), and A_µνρσ and B_µνρσ in Eq. (2.37), as dependent on the graviton orientation k̂ and the energies ω_k and ω_l. To evaluate observables constructed out of these quantities, we integrate over these kinematic variables of the dressing gravitons to find results that depend only on the external particle momenta of Eq. (4.1), their relativistically invariant combinations of Eq. (4.3), and the total graviton cut-off frequency ω.
Some radiative observables, such as the waveform and angular momentum, are sensitive to the integration around the poles of ω_{k,l}. To fix the related causality issue, as suggested in [29], we adopt the Feynman i0 prescription by adding −i0_{k,l} (+i0_{k,l}) to the ω_{k,l} appearing with the polarization ε* (ε). From the definitions of F_µν, A_µναβ and B_µναβ as given in Eq. (2.26) and Eq. (2.37), we define the associated polarization-projected soft dressing quantities f_µν, α^{αβ}_{µν} and β^{αβ}_{µν} in Eq. (4.9), Eq. (4.10) and Eq. (4.11), respectively. The collinear δ(Ω_k, Ω_l) appearing in these definitions reduces the double angular integrals to the single integral over Ω_k when evaluating the observables. Moreover, these quantities satisfy the relations of Eq. (4.12). In addition, we define in Eq. (4.13) the quantities l^±_{µν}, which appear due to ∆_1^{κ³} and thus on many occasions in later discussions, together with the decomposition of Eq. (4.14)⁵, written in terms of the λ^µ-independent part P_µνρσ of the projection operator. Hence, the terms appearing in the sum of Eq. (4.14) are effectively fixed with respect to the de Donder gauge.
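For reference, a standard realization of such a transverse-traceless projector, consistent with the transversality properties quoted around Eq. (4.5)-(4.8), is displayed below; the precise λ-dependent form used in the paper may differ, so this is a conventional example rather than the paper's definition.

```latex
% Null momentum k and a null reference vector lambda with
% k.lambda != 0:
\begin{equation*}
\pi_{\mu\nu} = \eta_{\mu\nu}
  - \frac{k_\mu \lambda_\nu + \lambda_\mu k_\nu}{k \cdot \lambda},
\qquad
P_{\mu\nu\rho\sigma}
  = \tfrac{1}{2}\big(\pi_{\mu\rho}\pi_{\nu\sigma}
      + \pi_{\mu\sigma}\pi_{\nu\rho}
      - \pi_{\mu\nu}\pi_{\rho\sigma}\big),
\end{equation*}
% which satisfies k^mu P_{mu nu rho sigma} = 0 =
% lambda^mu P_{mu nu rho sigma} and is traceless in each index pair.
```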
We highlight a notational choice introduced in Eq. (4.14), which we will implement for the remainder of the paper. The label n will denote the external particle with the double graviton vertex, while the labels i and j will be reserved for particles on which Weinberg soft gravitons are attached. In the following, we carry out the sum over all external particles with each label considered generally distinct. However, the collinear condition on the double graviton vertex will restrict the sum to specific limits of the kinematic variables of the external particles they attach to. We will return to this limiting procedure in the discussion section of this paper. In the following subsections, we derive general results for the waveform, emitted momentum spectrum and angular momentum.
Waveform of soft bremsstrahlung and memory effect
From the metric in Eq. (2.2), the waveform of soft gravitons is given by the expectation value W_µν(x) := 2κ⟨h_µν(x)⟩_∆, with the graviton mode given by Eq. (2.13). Being interested in the waveform observed at asymptotic infinity, we consider the spacetime coordinates x = (u + r, r x̂), with u the retarded time, r the radial distance and x̂ the angular orientation of the detector placed at large r. Using Eq. (3.9) and Eq. (3.10) in Eq. (4.17), and subsequently using the definitions given in Eq. (4.9), Eq. (4.10) and Eq. (4.11), we find Eq. (4.18) and Eq. (4.19). The Heaviside step functions in Eq. (4.18) and Eq. (4.19) restrict the ω_{k,l} integrals to the soft region (for the two graviton emission case, the step function Θ(ω − ω_k − ω_l) restricts the integration to the soft region; in later manipulations, we further use Θ(ω_k) to rewrite the ω_k integral in a symmetric form, and likewise for the ω_l integral). The explicit use of the step function is not necessary in the case of Eq. (4.18),
since the integral picks up the contribution from ω_k → 0 and was evaluated in [26,29]. This provides the leading contribution to the linear memory effect. The evaluation of Eq. (4.19) will provide the next-to-eikonal correction of Eq. (4.18) at O(G²) and its corresponding memory effect. We note that the indicated counting in powers of G follows the conventions for PM waveforms from amplitudes. More specifically, Eq. (4.18) and Eq. (4.19) would respectively correspond to soft contributions to the post-linear (order G²) and post-post-linear (order G³) terms of the classical PM waveform⁷. As the soft graviton dressing is an IR effect, we measure the corresponding waveform in the large r limit, for which the angular integration can be carried out by saddle point approximation, e.g., see [29]. Denoting the angular position of the detector relative to the source by x̂, the waveform Eq. (4.19) at O(G²) can be reduced to Eq. (4.21), with the graviton now oriented along the detector, k^µ = ω_k(1, x̂), as required by the angular saddle. From the transformations noted in Eq. (4.12) and the definitions in Eq. (4.13), we obtain Eq. (4.22). Hence, considering ω_k → −ω_k and ω_l → −ω_l in the last term of the last line of Eq. (4.21), we get Eq. (4.23). From the expressions for l^±_{µν} in Eq. (4.13) and using Eq. (4.2), we find Eq. (4.24), which follows from using the transversality property of the Weinberg soft factor and of the projection operator in Eq. (2.34) and Eq. (4.8), respectively. We can further simplify the ω_k and ω_l independent pieces in Eq. (4.24)⁸, so that the integrand of W^{G²}_{µν}(x) can be reduced to the form of Eq. (4.26). We now perform the ω_k and ω_l integrals in the last two lines of Eq. (4.26). The ω_k integral can be evaluated by first rescaling −η_n ω_k → ω_k, so that the integrand has simple poles in the upper half-plane. The residue theorem applied to the resulting ω_k integral then provides Eq. (4.27), where in the last equality of Eq. (4.27) we used the property Θ(ω_l)Θ(−ω_l) = 0, and used the step functions to express the ω_l integral symmetrically. The overall factor of Θ(η_n u) comes from the requirement that η_n u be positive in e^{−η_n u 0_k}. The ω_l integral in the last equality of Eq. (4.27) can be evaluated using the identity of Eq. (4.28), where PV denotes the principal part of the integral and c a constant. We thus arrive at Eq. (4.30), with W^{G²;αβ}_{TT}(x) the transverse-traceless part of W^{G²}_{µν}(x), and W^{αβ}_{n,i}(p_n, p_i; x̂) a relativistically invariant combination of the external momenta and of the orientation from the source to the detector, given in Eq. (4.31). It is instructive to consider Eq. (4.30) in the small ωu limit, Eq. (4.32). The second term in the parenthesis can be kept small for all u by an asymptotic double scaling limit: ω → 0 and u → ±∞, keeping ωu := ±ϕ_0 finite and small. A frame-independent definition of the memory effect follows from considering the difference ∆W^{G²;αβ}_{TT} of Eq. (4.32) between the asymptotic future and the asymptotic past [98], Eq. (4.33). We consider the O(G²) gravitational memory effect Eq. (4.33) in the simplified case of n = i, where the kinematic factor simplifies as in Eq. (4.34); hence, apart from the O(ϕ_0^{k≥2}) corrections, the leading contribution to the O(G²) gravitational memory effect from Eq. (4.32) is given by Eq. (4.35).
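The large-r saddle-point step invoked above is the standard stationary-phase evaluation of the angular mode integral; schematically, and as an illustration rather than the paper's exact intermediate expression, it reads as follows.

```latex
% Stationary phase at large r: the phase omega_k r (hat k . hat x)
% is stationary at hat k = +/- hat x, giving
\begin{equation*}
\int \mathrm{d}\Omega_k\; g(\hat k)\,
   e^{\, i\,\omega_k r\, \hat k \cdot \hat x}
\;\xrightarrow[\;r \to \infty\;]{}\;
\frac{2\pi}{i\,\omega_k r}
\left[\, g(\hat x)\, e^{\, i\,\omega_k r}
       - g(-\hat x)\, e^{-\, i\,\omega_k r} \right],
\end{equation*}
% so only gravitons oriented along the detector,
% k^mu = omega_k (1, hat x), survive in the radiative 1/r waveform.
```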
Before proceeding, we briefly discuss the difficulty in comparing Eq. (4.32) and Eq. (4.35) with the current literature. Both results have an overall dependence on ω (the cut-off on the sum over graviton energies) as a consequence of integrating over the two frequencies in the double graviton contribution of the dressing. At present, the PM waveform is known up to one loop [69-72, 74, 77-80], which provides the leading correction to the previously known tree-level result [22,64,73,99]. We recall that the tree-level and one-loop waveform results respectively correspond to the post-linear (order G²) and post-post-linear (order G³) terms in the classical PM waveform [20,74]. An analytic expression for the waveform W^{αβ} can be determined from a soft expansion [74,77,78,80]⁹, as in Eq. (4.36). The pieces non-analytic in ω are constrained by classical single soft theorems [100,101], while the ω ln ω term additionally receives a one-loop contribution that provides a check on the one-loop corrected waveform. Our result Eq. (4.32) could, in principle, be compared with the subleading finite frequency contribution in the soft expansion. However, this contribution (the coefficient of ω, the ellipsis of Eq. (4.36)) is currently absent in the literature and involves a non-trivial computation that lies beyond the scope of our paper.
There are known results for the linear memory effect derived from tree-level and one-loop corrected waveforms. The leading single soft contribution to the linear memory effect from the Weinberg dressing [26] agrees with the tree-level contribution in [64]. Likewise, the leading soft limit of the one-loop corrected waveform provides the O(G²) contribution to the linear memory [71,77,80]. These results are independent of the graviton frequency as a consequence of the leading single soft graviton factor. However, our result notably involves an overall dependence on the cut-off frequency ω for the two soft gravitons, while also providing a non-vanishing time-independent contribution to Eq. (4.33), which is nothing but the standard definition of the memory effect.
Hence Eq. (4.32) in complete generality would provide a subleading contribution to the gravitational wave memory. The more general procedure involves contracting Eq. (4.32) with polarization tensors to find a dependence on the impact parameter and the impulse. As a consequence, we would find an O(G²) result that remains difficult to compare with the known literature. In the following, we proceed differently and consider the simplified expression in Eq. (4.35). The transverse-traceless components will be spatial. Using p_n = η_n(E_n, p⃗_n) and defining v⃗_n = p⃗_n/E_n, we may then express Eq. (4.35) in the form of Eq. (4.37), with I, J denoting spatial indices. We now show that this expression can be related to the nonlinear gravitational memory effect.
We recall that the nonlinear gravitational memory is a hereditary effect resulting purely from the gravitational emission of a scattering event, with the expression given in Eq. (4.38) [102-105], where dE_GW/dΩ is the differential energy distribution of the gravitational waves over the celestial sphere coordinatized by n̂ (with components n̂^I), the unit spatial vector of the emitted gravitational waves, and x̂, as before, specifying the location of the detector on the celestial sphere. To relate Eq. (4.38) to Eq. (4.37), we follow the spirit of [104], which relates the linear gravitational memory effect to the nonlinear one. First, we need to replace dE_GW/dΩ in Eq. (4.38) by the O(G) result for (nearly) soft gravitons, described in terms of the momenta of the external particles and the graviton. Next, we need to take the ultrarelativistic limit of the massive particles so that they mimic a massless source, similar to the gravitons that source the nonlinear gravitational memory effect. This can be done by replacing the velocity vectors v^I_n of the ultrarelativistic external particles with n̂^I, the unit spatial vector of the emitted graviton. By the above procedure, we will show that an ultrarelativistic approximation of Eq. (4.38) can be used to arrive at Eq. (4.37).
Before proceeding, we elaborate on why Eq. (4.37) might be expected to produce a nonlinear memory effect. First, recall that the O(G²) waveform Eq. (4.30) is obtained by taking the expectation value of a single graviton, as in the O(G) case. The contribution from the n and i vertices can be traced back to the process with one of the emitted gravitons from the double graviton vertex n being absorbed as a Weinberg soft graviton at vertex i, and with the other graviton from vertex n as the emitted gravitational radiation¹⁰. Thus, the case with n = i can be interpreted as the waveform emitted from a particle, following its recoil due to a soft graviton. Besides, while we cannot strictly take the massless limit of an external particle to exactly mimic a massless graviton, we may consider the ultrarelativistic limit. In this limit, the external particle can be nearly collinear with the emitted graviton, so that we expect the resulting memory expression in Eq. (4.37) to be proportional to the nonlinear memory effect.
To demonstrate this, we use the 1PM expression for E^GW_soft from [29], Eq. (4.39), with ω the cut-off frequency, which in the following we do not take to vanish. Hence, the result will hold, more strictly, for nearly soft gravitons, which are required to produce a recoil of the external particles. We accordingly consider the i = j contribution in Eq. (4.39) and take the ultrarelativistic limit by requiring |v⃗_i| = 1 − λ. Here λ represents a cut-off on the vanishing mass of the external particles. In this limit, we find the result of Eq. (4.40) for dE^GW_soft/dΩ_k from Eq. (4.39). We note that dE^GW_soft/dΩ_k is a local observable determined from the corresponding global observable in Eq. (4.39) [106]. While classical global observables for gauge and gravitational theories have certain mass singularities and divergences in the ultrarelativistic limit, their corresponding local differential expressions are smooth in this limit. The absence of any divergences in Eq. (4.40) as λ → 0 is consistent with this property and allows for our analysis in this limit. We refer the interested reader to [106] for further details, and to [81] for the high energy limit applied to the energy spectrum and waveform derived from the coherent dressing involving Weinberg soft gravitons. For our present purpose, we note that Eq. (4.40) provides the nearly soft energy distribution from a process involving ultrarelativistic particles with |v⃗_i| = 1 − λ. As these velocities are close to the speed of light, we can approximate the nonlinear memory effect in Eq. (4.38) by Eq. (4.41). The expression in Eq. (4.41) results from Eq. (4.38) by replacing the energy distribution with that from Eq. (4.40) and taking n̂^I ≈ v^I_n, and then by summing over all external particles to obtain the total contribution. Hence ∆W^{IJ}_{G;nonlinear}(x̂) in Eq. (4.41) represents an ultrarelativistic particle approximation of the nonlinear memory effect expression W^{IJ}_{G;nonlinear}(x̂) in Eq. (4.38), sourced by nearly soft gravitons. Since we assume v⃗_n is nearly collinear with the graviton, we can explicitly perform the integral over Ω_k by the saddle approximation, which introduces a factor of 2π from the azimuthal integral and the condition k̂ = x̂ from the polar integral, with x̂, as before, the angular position of the detector. Hence, to leading order in λ, Eq. (4.41) evaluates to Eq. (4.42). Since the above followed from Eq. (4.41), its massless limit will agree with the nonlinear memory effect in Eq. (4.38). However, we can also establish that in the vanishing mass limit λ → 0 of the external particles in Eq. (4.42), this nonlinear gravitational memory effect sourced by the ultrarelativistic particles can be identified with the O(G²) gravitational memory effect Eq. (4.37), due to the recoil of external particles by a nearly soft graviton, through the relation of Eq. (4.43). The overall factor of 4π is on account of a specific angle towards the detector, as opposed to an isotropic integration over angles. While this correspondence is established in a very restricted setting, it shows that the memory effect in Eq. (4.37) can be associated with emissions from a particle following its recoil due to soft gravitons.
Radiative momentum spectrum
We now consider the radiative momentum spectrum of soft gravitons. This starts with the classical 4-momentum P^α of emitted soft gravitons up to O(G²); the soft radiative momentum spectrum is then defined by dP^α/dω, where ω is the cut-off frequency, or resolution power. The formal expression of P^α has been given in Eq. (3.13), with C(k) = k^α := ω_k q^α_k and q^α_k = (1, k̂). We first simplify Eq. (3.13) by using Eq. (4.9), Eq. (4.10), Eq. (4.11) and Eq. (4.13) to express P^α in terms of f_µν(ω_k, k̂) and l^±_µν(ω_k, ω_l, k̂). Using the fact that the ω dependence in P^α only appears in the form of step functions such as Θ(ω − ω_k) or Θ(ω − ω_k − ω_l), it follows that the derivative of P^α with respect to ω replaces these step functions by the corresponding delta functions. After these manipulations, we find Eq. (4.46) and Eq. (4.47). The delta functions eliminate the subtlety of vanishing frequency; thus the integrals over ω_{k,l} can be carried out without needing an i0 prescription. This is different from the evaluations of the waveforms and the angular momentum. The integrals over ω_k and Ω_k of Eq. (4.46) have been carried out in [26] to express dP^{α;G}/dω in terms of the kinematic variables of the external particles. Here, we cite the resulting radiative energy spectrum for later comparison with our result at O(G²), with ∆_ij as defined in Eq. (4.3). In the following, we focus on carrying out the integrals of dP^{α;G²}/dω to express it in terms of the external particle kinematic variables. First, the terms appearing in the parenthesis of Eq. (4.47) can be shown to provide Eq. (4.49), with H_nij given in Eq. (4.50). We note that the function H_nij of the relative velocities of the external particles is symmetric in i and j, i.e., H_nij = H_nji. The i and j indices are particle labels on which the Weinberg soft gravitons are attached. However, there is no symmetry of H_nij associated with the particle label n of the double graviton vertex. This reflects an underlying feature in the kinematic factors for the momentum and angular momentum results.
Substituting Eq. (4.49) in Eq. (4.47), we find a frequency integral which can be evaluated to yield the simple result of Eq. (4.51). Combining the results of Eq. (4.49) and Eq. (4.51), we can reduce Eq. (4.47) to Eq. (4.52), where the 4-vector notation (F_nij, G⃗_nij) indicates that the temporal component F_nij determines dP^{0;G²}/dω, and hence the radiated energy spectrum, while the spatial 3-vector G⃗_nij determines the radiated spatial momentum; these quantities are defined in Eq. (4.53) and Eq. (4.54). The angular integrals of Eq. (4.53) and Eq. (4.54) have been carried out in Appendix B. Both F_nij and G⃗_nij are completely symmetric under the interchange of the external particle labels n, i and j. While the expression for G⃗_nij is quite involved, F_nij is considerably simpler, with the result given in Eq. (4.55). Thus, using Eq. (4.50) and Eq. (4.55), we obtain the explicit expression for the soft radiative energy spectrum at O(G²), Eq. (4.56). This result can be directly related to the κ⁴ correction of Im2δ. Using the definitions in Eq. (4.9), Eq. (4.10), Eq. (4.11), and subsequently Eq. (4.50) and Eq. (4.53), in Eq. (3.18), we find Eq. (4.57), where we made use of the step function in the limits of the ω_k and ω_l integrals, and the integrand follows from the absence of any i0 poles in the derivation (as with the energy spectrum). We now note that the ω_k and ω_l integrals do not involve any divergences and can be evaluated without dimensional regularization. Evaluating them as in Eq. (4.58), and using Eq. (4.58) in Eq. (4.57), gives the result of Eq. (4.59). Comparing Eq. (4.59) with Eq. (4.56) gives the relation of Eq. (4.60). This provides a generalization to O(G²) of the relationship between the imaginary part of the eikonal and the emitted energy spectrum in [26,81] in the infrared limit, which we recall in Eq. (4.61) for comparison, with ϵ = (4 − d)/2 the dimensional regularization parameter. While the infrared divergences in Im2δ_{κ²} are related to the zero frequency limit of dE^G/dω, as shown in Eq. (4.61), we note that Eq. (4.60) holds for all subleading ω. More specifically, Eq. (4.60) provides an O(G²) and higher relation between a subleading soft radiation energy spectrum and the imaginary part of the eikonal phase from next-to-eikonal corrected soft factors.
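Frequency integrals of this triangle type, with the sum of the two energies bounded by ω, are elementary. As an illustration, the sketch below evaluates such an integral for a representative energy-fraction integrand of the kind generated by the ω_k/(ω_k + ω_l) structures above; this is not the paper's exact Eq. (4.51) integrand, only a demonstration of the technique and of the resulting scaling in ω.

```python
import sympy as sp

wk, wl, w = sp.symbols('omega_k omega_l omega', positive=True)

# Representative triangle integral over {omega_k, omega_l >= 0,
# omega_k + omega_l <= omega} with an energy-fraction integrand.
inner = sp.integrate(wk / (wk + wl), (wl, 0, w - wk))
result = sp.simplify(sp.integrate(inner, (wk, 0, w)))
print(result)            # omega**2/4

# A degree-0 homogeneous integrand over this triangle always scales
# as omega**2, so its contribution to dP/d(omega) is linear in omega.
print(sp.diff(result, w))  # omega/2
```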
Radiative angular momentum
We lastly consider the case of angular momentum J^αβ = L^αβ + S^αβ, where L^αβ and S^αβ respectively denote the orbital and spin angular momentum and have operator expressions in the de Donder gauge, with P^{µνρσ} given in Eq. (4.16); our symmetrization notation and conventions for the left-right derivative are fixed alongside these expressions. In the following, we consider the orbital and spin contributions separately to briefly discuss a gauge invariance issue. We define the expectation values with respect to the dressed states as L^αβ := ⟨L^αβ⟩_∆ and S^αβ := ⟨S^αβ⟩_∆. We further expand the expectation values in terms of their G and G^2 contributions; after straightforward manipulations, we find Eq. (4.65) and Eq. (4.66) for the orbital angular momentum and Eq. (4.67) and Eq. (4.68) for the spin contribution. The step functions restricting the sum of ω_k and ω_l to be bounded by ω have been made explicit in the integration limits of Eq. (4.66) and Eq. (4.68). This will be particularly useful in considering the action of the operator k^[α in Eq. (4.66).
The O(G) terms Eq. (4.65) and Eq. (4.67) have been considered and evaluated in [29]. These two terms are separately gauge non-invariant due to contributions from the projection operator of Eq. (4.5) hidden in f_µν. However, the gauge non-invariant pieces cancel out and the resultant gauge-invariant sum reproduces the result of [29]. To discuss the gauge issue further for the O(G^2) terms Eq. (4.66) and Eq. (4.68), we make explicit the gauge-vector λ^µ terms from the projection operators of Eq. (4.5) and Eq. (4.6) in the expressions of f_ρσ(ω_k, k) and l^±_ρσ(ω_k, ω_l, k). Plugging these decompositions into Eq. (4.66) and Eq. (4.68), we can single out the λ^µ-dependent and hence gauge non-invariant pieces, finding Eq. (4.72) and Eq. (4.73), where the gauge-invariant (λ^µ-independent) pieces are given by Eq. (4.74) and Eq. (4.75) and the two expressions share a common gauge non-invariant (λ^µ-dependent) piece. We now see that the gauge non-invariant pieces in Eq. (4.72) and Eq. (4.73) cancel out, and the total angular momentum is gauge invariant, in accordance with the general form in [28].
Making the transformations ω_k → −ω_k and ω_l → −ω_l in the integrands of Eq. (4.74) and Eq. (4.75), and using the properties in Eq. (4.12) together with an identity that makes the integral over ω_l symmetric, we obtain Eq. (4.80). Using Eq. (4.14), Eq. (4.28) and Eq. (4.53), we can carry out the integrations of Eq. (4.80) and obtain the final expression, Eq. (4.81), for the gauge-invariant contribution to the spin angular momentum, where F_nij is given by Eq. (4.55). In deriving Eq. (4.81), we made use of an antisymmetry in the particle labels within the sum. For the orbital angular momentum, we first note from the relation k^µ = ω_k q_k^µ in Eq. (2.12) that Eq. (4.82) holds, and hence Eq. (4.82) can be expressed as Eq. (4.84). The second term on the R.H.S. of Eq. (4.84) satisfies an identity which can be used to simplify the momentum derivative in Eq. (4.79). To arrive at the resulting expression, we again used Eq. (4.9), Eq. (4.14) and Eq. (4.50) for simplification, with G^I_nij := (G⃗_nij)^I of Eq. (4.54) and its expression given in Appendix B.2. We note that Eq. (4.90) represents a mass-dipole contribution to the angular momentum [28,29]. From the derivation, Eq. (4.90) is due to the different energies of the two collinear gravitons from the double graviton vertex.
We finally come to the second piece of the orbital angular momentum, Eq. (4.89). Using Eq. (4.9), Eq. (4.14) and Eq. (4.50), and carrying out the integrals over ω_k and ω_l as above, we can simplify it to the form of Eq. (4.91), with the remaining integral defined in Eq. (4.92). We can carry out the integral Eq. (4.92) along similar lines to [29]. This follows from Eq. (4.92) being a Lorentz vector and hence having the general solution Eq. (4.93). The expressions for the coefficients in Eq. (4.91) can be readily derived from Eq. (4.93), as in Eq. (4.94). Each K · p_a is a special case of the integral in Eq. (4.53). From Eq. (4.92), contracting with the external momenta provides the relations Eq. (4.96). Lastly, by substituting Eq. (4.93) in Eq. (4.91), we obtain Eq. (4.97). All coefficients can be determined by using Eq. (4.96) in Eq. (4.94), and we find the final expression, Eq. (4.98); in its derivation, we used a symmetry property of the external momenta p.
Summary and Discussion
In this paper, we considered the eikonal operator formalism in a soft expansion up to double graviton contributions. This provides a generalization of the leading soft limit from known single-mode gravitational dressings to non-linear graviton modes. Our result followed from the generalized Wilson line formalism for all eikonal amplitudes from dressed Schwinger propagators, which we reviewed in Sec. 2. On substituting two real graviton modes, we find the Weinberg soft graviton factor as the leading term and a subleading double graviton vertex contribution that accounts for the recoil of external particles due to radiative exchanges. We subsequently demonstrated that collinear gravitons provide a gauge invariant dressing, which we consider to be those for real elastic eikonal amplitudes. In Sec. 3, we considered the expectation values of the dressing and their relationship with radiative observables in further detail. The dressing can be expressed as the product of two exponential operators, whose exponents involve a single graviton mode and two graviton modes. The former describes an effective coherent contribution to the dressing, with corrections to all odd powers in the gravitational coupling κ. All classical observables were shown to result from expectation values with respect to this coherent contribution to the gravitational dressing. In Sec. 4, we derived the waveform, radiative momentum spectrum and angular momentum resulting from the κ^3 correction in the coherent dressing of the Weinberg soft factor. These observables are linear in ω, a cut-off on the combined frequencies of the two gravitons. While the observables we derive are soft, they are not considered in the strictly soft limit ω → 0 and hence do not correspond to 'static' observables as with the Weinberg soft factor. Nevertheless, the results we derive for the waveform and angular momentum are sensitive to the i0 prescription for the two gravitons in the double graviton factor.
The waveform derived in Sec. 4.1 results from the expectation value of a single graviton mode operator, and its asymptotic expression for a distant observer was evaluated in a saddle approximation along the direction from the source to the detector. We find a general result in Eq. (4.30) with a retarded-time u dependence. Expanding the waveform in the low-frequency limit and considering ωu a small constant, we then found a soft waveform in Eq. (4.32) linear in ω. The asymptotic difference of this waveform from u → ∞ to u → −∞ provides an O(G^2) memory effect, which we considered in a collinear limit of the external particles in Eq. (4.35). We further established that this result agrees with an ultrarelativistic particle approximation of the nonlinear memory effect, in which the differential radiative energy distribution over angles in the usual nonlinear memory effect formula, Eq. (4.38), is taken to be that for soft gravitons sourced by ultrarelativistic particles. Apart from realizing the nonlinear memory effect associated with soft gravitons, this demonstrates that the O(G^2) memory due to the double graviton vertex is associated with the recoil of external particles following the prior emission/absorption of gravitons. It will be interesting to understand the relationship of our O(G^2) waveform contribution with the recently derived one-loop corrected waveform [69-72,74] and its analytic expression using the subleading single soft graviton expansion [77,78,80]. We also derived double graviton corrections of the Weinberg soft factor contributions to the radiative momentum spectrum and angular momentum. These observables formally involve a sum over three particles with a double graviton vertex and two single Weinberg soft graviton vertices. The O(G^2) radiative momentum spectrum was derived in Eq. (4.52). The O(G^2) correction to the radiative energy spectrum was shown to be related to a corresponding O(G^2) correction to the imaginary part of the eikonal phase in Eq. (4.60). This provides an extension of its O(G) version discussed in [26,81]. However, as Eq. (4.60) holds outside the strict soft limit, this can also be considered an extension to subleading soft order. We also showed that the radiative angular momentum at O(G^2) involving the double graviton vertex is given by the gauge invariant combination of the spin angular momentum, Eq. (4.81), and the orbital angular momentum contributions Eq. (4.90) and Eq. (4.98), of which Eq. (4.90) is a mass-dipole orbital angular momentum term. It will be interesting to explore the relationship between the radiative angular momentum at O(G^2) found in this work and results obtained in the literature by different approaches [20,28,55,58,107,108].
The κ^3 correction in the coherent dressing results from contracting one of the two gravitons in the double graviton vertex with a single Weinberg soft graviton. Consequently, all O(G^2) observables considered in Sec. 4 are derived from a coherent operator involving a single graviton. Our results are consistent with the observations in [30,83,84] that gravitational waveforms result from coherent states and that the number operator expectation value, Eq. (3.19), has a leading Poissonian distribution (O(ℏ^−1)), with super-Poissonian contributions classically suppressed (O(ℏ^0)). As a consequence, the radiated momentum P^α is a classical observable, since it is related to the number operator by ℏk^α. More specific to the eikonal operator formalism, we find that the number operator expectation value, Eq. (3.19), and the radiated energy spectrum, Eq. (4.60), are related to the imaginary part of the eikonal in a similar fashion to [26,81]. We lastly note that the operator in Eq. (3.2) can be considered a continuous-frequency version of squeezed coherent states, with exp[−∆_2] as the squeezing operator. Notably, squeezed coherent states are minimum uncertainty states with the squeezing operator responsible for super-Poissonian statistics [90], and it would be interesting to consider the implications of such states for the analysis of [83].
Another aspect of our double soft graviton consideration was the collinear condition, which restricted the orientations of gravitons in the double graviton vertex to be the same. By following the conventional prescription of summing soft factors over all external legs, our results in general contain an additional sum over all external particles when compared with the corresponding observables in [26,29]. Below, we argue that the general sum over external legs for the radiative momentum spectrum and angular momentum in Sec. 4 must be further restricted to be consistent with the collinear emissions from a double graviton vertex. In Fig. 1, we consider an example of the general sum over the indices of the external legs in a 2-2 scattering process. As the two gravitons from the double graviton vertex should be collinear to ensure gauge invariance, they should also end on the same external leg, which is equivalent to imposing i = j. Thus, we shall not treat the indices n, i and j as distinct in the sum, since distinct indices correspond to non-collinear gravitons connected to the double vertex, as illustrated in the left diagram of Fig. 1. Instead, we shall impose i = j, which leads to a single vertex for two gravitons on both particles, as illustrated in the right diagram of Fig. 1. Hence, it is more reasonable to require the two soft Weinberg graviton vertices to have a one-vertex limit, which can be implemented by including η_i δ_ij in the sums over the three particle labels. We will explore classical observables in this limit in future work.
There are several avenues for future research. One involves performing the sums described above to derive 4PM and higher gravitational wave observables. The analysis can be carried out along similar lines as those for the Weinberg soft factor dressing for eikonal amplitudes in [30]. This, in particular, involves substituting expressions for the relative velocities σ_ab in terms of the PM expansion for the impulse following eikonal kinematics. It will also be interesting to consider the implications of the quadratic graviton mode operator e^{−∆_2} in Eq. (3.2) for gravitationally interacting bodies more generally. While this operator will not contribute to classical radiative observables, it may be interesting to consider in other areas. In this regard, we note that entanglement entropy can be used to constrain scattering amplitudes [109-113] and, more specifically, amplitudes with a Weinberg soft graviton dressing [114-116]. It will be interesting to investigate dressings with non-linear graviton modes in these results.

In terms of graviton creation and annihilation modes, the only contributions involving more than one ∆_1 in the nested commutators are of the form ∆_1 ∆_2^n ∆_1, which involve no graviton modes. We find that Eq. (A.4) can be expressed as Eq. (A.5), where we have explicitly separated the n = 0 contribution in ∆_1, which is the single graviton term of the unfactorized gravitational dressing. We hence find that −∆ can be written as a sum over a single graviton mode and a zero-graviton factor known up to certain constants c_n. However, in the following we show that ∆_0 vanishes, and hence the dressing exp[−∆] can be written as a normalized single graviton dressing. We also note that since ∆_1 and ∆_2 are respectively O(κ) and O(κ^2), Eq. (A.5) provides a single graviton (coherent) dressing to O(κ^{2n+1}) for all positive integers n.
Using the expressions for ∆_2 and ∆_1, we determine the results for ∆_1 and ∆_0, with S_{ij;n}(k, l) and C_{ij;n}(k, l) defined in terms of products over an integrated expression.
B Evaluation of F inm and ⃗ G inm
In the following, we will always consider the graviton orientation to be given by k̂ = (√(1 − y²) cos φ, √(1 − y²) sin φ, y), (B.1), with y = cos θ. The parametrizations adopted for p⃗_n, p⃗_m and p⃗_i in the two subsections will be treated differently. We will, however, always use the freedom to place all the hard particle momenta in a 2-dimensional plane, which we take to be the 'x-z' plane throughout. The integrals we consider will also have a non-trivial integration over φ, and will make use of a standard result for the φ integration. We then have the corresponding integral for Eq. (4.53). From Eq. (4.3) and −p²_a = E²_a − p⃗²_a = m²_a for a = n, i, j, we then find that the parametrization Eq. (B.3) gives relations in which ∆_ab is defined in Eq. (4.3).
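As a quick numerical cross-check of the parametrization in Eq. (B.1), the following sketch (ours, not part of the paper) integrates a test function over the unit sphere with y = cos θ; the stand-in integrand can be swapped for the kernels of Eq. (4.53).

```python
import numpy as np

def sphere_integral(f, n_y=400, n_phi=400):
    # Midpoint grids in y = cos(theta) and phi; the solid-angle measure is
    # d(Omega) = dy dphi with this parametrization (Jacobian equal to 1).
    y = (np.arange(n_y) + 0.5) / n_y * 2.0 - 1.0
    phi = (np.arange(n_phi) + 0.5) / n_phi * 2.0 * np.pi
    Y, PHI = np.meshgrid(y, phi, indexing="ij")
    kx = np.sqrt(1.0 - Y**2) * np.cos(PHI)   # k as in Eq. (B.1)
    ky = np.sqrt(1.0 - Y**2) * np.sin(PHI)
    kz = Y
    dy, dphi = 2.0 / n_y, 2.0 * np.pi / n_phi
    return float(np.sum(f(kx, ky, kz)) * dy * dphi)

# Check: the total solid angle should come out as 4*pi ~ 12.566.
print(sphere_integral(lambda kx, ky, kz: np.ones_like(kz)))
```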
B.2 Evaluation of ⃗ G inm
We now consider the derivation of Eq. (4.54). Start with the decomposition of Eq. (B.8); in its last equality, we used the rotational symmetry of the integral to express it as a linear combination over the hard particle orientations. Defining p̂_a · p̂_b = cos θ_ab as the relative angle between two hard particle orientations p̂_a and p̂_b, we then find the solutions for a_nij, b_nij and c_nij from Eq. (B.8). In the above, we have omitted the sub-indices of G⃗_nij to avoid cluttering the expressions.
Thus, the expression for G⃗_nij is completely specified by its contractions with the hard particle orientations. In the following, we consider the derivation of these contractions, which cover the cases a = n, i, j by an appropriate choice of indices. We can carry out the integration using the parametrization introduced above.
Performing the momentum derivatives in Eq. (4.88), then the ω_k integral by the principal value identity Eq. (4.28), and using the defining equation of G⃗_nij (with Eq. (4.14) and Eq. (4.50) for simplification), we arrive at the final expression for L^{αβ;G^2}. The antisymmetrized momenta p_a^[α p_b^β] were used to simplify terms inside the sum of Eq. (4.98), and we note that the result involves no contribution from external particles involving only Weinberg gravitons. The final result for the angular momentum is given by the sum of Eq. (4.98), Eq. (4.90) and Eq. (4.81).
Figure 1. Scattering process due to soft graviton exchanges involving a double graviton vertex and two single graviton Weinberg vertices. The particles with a double graviton vertex are labeled by the index n and the ones with a single graviton vertex by i and j. The left subfigure is a general diagram with i ≠ j, which is inconsistent with the collinear requirement on the double graviton vertex. The right subfigure is our resolution by requiring i = j, which collapses the two Weinberg vertices into one by implementing an additional factor η_i δ_ij. | 2024-01-09T06:49:09.543Z | 2024-01-08T00:00:00.000 | {
"year": 2024,
"sha1": "e29ab8cff6dede95fdb075e8e0cb48c7a3143b77",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP06(2024)015.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a95c1f5850e3ed65dbb998904be2b5506e8aa4b9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
244750457 | pes2o/s2orc | v3-fos-license | Serum Ferritin and D-Dimer as Possible Risk Factors in Ischaemic Stroke in Cancer Patients
Background: Ischemic stroke is frequently encountered in patients with malignant disease. The pathophysiology of stroke in such cases differs from that in subjects with no malignant disease. This study was conducted to compare serum levels of ferritin and d-dimer in cases with ischemic stroke in cancer versus non-cancer patients. Patients and methods: The data of 264 consecutive patients presenting with ischemic stroke, confirmed by clinical examination and radiological investigations, were retrospectively reviewed. The included cases were divided into two groups: Group A (non-cancer with stroke, 210 cases) and Group B (cancer with stroke, 54 cases). The collected data included patient demographics, systemic comorbidities, disease and tumor characteristics, in addition to platelet count, serum ferritin and d-dimer. Results: Age, gender, and systemic comorbidities were statistically comparable between the two groups. Additionally, the etiology of stroke and its disability were not statistically different between the two groups. However, the incidence of mortality significantly increased in Group B (25.93% vs. 7.14% in Group A, p = 0.005). Both serum ferritin and d-dimer showed a significant increase in association with cancer (Group B). The former had mean values of 294.54 and 867.87 ng/ml, while the latter had mean values of 463.83 and 888.13 ng/ml in the same two groups, respectively. Conclusion: Serum ferritin and d-dimer showed a significant rise in cancer-associated ischemic stroke. This confirms the role of the hypercoagulable state associated with malignancy in the development of this morbidity.
Introduction
Ischemic stroke and cancer share common risk factors, which contributes to the frequency of ischemic stroke in cancer patients [1]. According to a previous report, autopsies of cancer patients revealed that about 14.6 percent had cerebrovascular pathology and 7.4 percent had clinical manifestations of stroke [2].
The etiology of cancer-associated ischemic stroke can differ greatly from that in patients without malignant disease. In patients with malignancy, stroke may occur due to the hypercoagulable state associated with cancer, direct tumor compression, non-bacterial thrombotic endocarditis, or as a complication of anti-cancer therapy [3] [4] [5].
Coagulopathy-related pathologies can also potentiate ischemic stroke in cancer patients. D-dimer, a by-product of the blood clotting and breakdown process, is a direct measure of active coagulation. It has been widely employed as a reliable test of hypercoagulability in numerous earlier investigations [9] [10]. Some studies noted its significant rise in cancer-associated ischemic stroke compared to non-cancer patients [11] [12].
The largest iron reserve in the human body is ferritin, which is one of the acute-phase proteins [13]. Acute-phase response proteins are thought to play a crucial role in the pathological process of ischemic stroke. This was confirmed by previous research, which showed that high ferritin levels are strong predictors of ischemic stroke [14]. More recent research notes significant elevation of these markers in lymphoma patients who developed stroke compared with those who did not [15].
This study evaluated serum levels of ferritin and d-dimer in cases with ischemic stroke in cancer patients compared to non-cancer patients.
Patients and Methods
This retrospective study was conducted at Mansoura University Neurology Department. We retrospectively reviewed the data of ischemic stroke patients presented and admitted to our department between January 2018 and December 2020.
We included patients diagnosed with acute ischemic stroke during the study period. The diagnosis of ischemic stroke was established when a new vascular lesion explaining the patient's clinical manifestations was detected on brain imaging (computed tomography [CT] or magnetic resonance imaging [MRI]). Conversely, cases with recurrent stroke, transient ischemic attacks (TIA), negative brain radiology, primary brain tumor or brain metastasis were excluded.
A total of 264 cases were included in the current study, of whom 210 patients had a stroke without cancer (Group A), and the remaining 54 patients had a stroke in association with cancer (Group B).
Regarding the ethical consideration, it gained approval from the local ethical committee and Institutional Review Board (IRB) of Mansoura University. Additionally, informed written consent was signed by all participants (or their relatives in case of disturbed conscious level) after an explanation of the benefits and possible complications of each intervention performed.
The patient evaluation included detailed history taking, thorough clinical examination and routine laboratory investigations. Patient history included personal history, current complaint with its analysis, and detailed medical history regarding diabetes, hypertension, ischemic heart disease, cardiac dysrhythmia, or previous stroke events. Cancer-related data included the primary tumor site, duration of disease, and the presence of disseminated disease. Full neurological assessment was done for all cases, and the National Institutes of Health Stroke Scale (NIHSS) [16] was calculated for all subjects at admission.
The radiological investigations included brain CT and/or MRI, ordered for all patients to confirm the diagnosis. Other radiological investigations included transthoracic echocardiography, carotid Doppler study and transcranial Doppler, which were ordered in most cases to determine stroke etiology. The etiology of stroke was classified according to the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification criteria [17]. The previous data were collected to be analyzed and compared between cancer and non-cancer patients. Our primary objective was to compare ferritin and d-dimer levels between the two groups. Other objectives included the difference in clinical characteristics and mortality between the two groups.
To analyze the data, we used the SPSS 26 statistical analysis package from IBM/SPSS Inc. in Chicago, IL. Patient baseline characteristics were presented as either frequencies and percentages (%) or mean and standard deviations (SD).
To compare data, we applied the chi-square test (or Fisher's exact test) to compare two independent groups of qualitative data, whereas we used the Mann-Whitney U test and the independent-samples t-test to compare two groups of non-parametric and parametric quantitative data, respectively. A p-value less than 0.05 was considered statistically significant.
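For readers reproducing these comparisons outside SPSS, a hedged sketch of the equivalent tests in Python with scipy follows; the ferritin arrays are invented illustrations, while the mortality counts are taken from the Results below.

```python
import numpy as np
from scipy import stats

# Hypothetical ferritin values (ng/ml): non-cancer (A) vs. cancer (B) patients.
ferritin_a = np.array([210.0, 305.5, 280.1, 330.2, 250.7])
ferritin_b = np.array([790.4, 910.8, 850.3, 920.6, 866.2])

# Parametric comparison of two independent groups (independent-samples t-test).
t_stat, p_t = stats.ttest_ind(ferritin_a, ferritin_b, equal_var=False)

# Non-parametric alternative (Mann-Whitney U test).
u_stat, p_u = stats.mannwhitneyu(ferritin_a, ferritin_b, alternative="two-sided")

# Qualitative data, e.g. in-hospital mortality per group, via chi-square
# (Fisher's exact test is preferred when expected counts are small).
table = np.array([[15, 195],   # Group A: died, survived (n = 210)
                  [14, 40]])   # Group B: died, survived (n = 54)
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)

print(p_t, p_u, p_chi, p_fisher)  # p < 0.05 taken as statistically significant
```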
Results
As shown in Table 1, patient demographics, including age and gender, were statistically comparable between the two groups (p > 0.05). The prevalence of smoking was 41.9% and 37.04% in Groups A and B, respectively. The burden of systemic comorbidities was broadly similar between the two groups, with no significant difference detected. Hypertension was the most common comorbidity in both groups, followed by diabetes mellitus. Other comorbidities included atrial fibrillation, dyslipidemia and coronary artery disease. NIHSS had mean values of 14.35 and 14.25 in the two study groups, respectively (p = 0.260). Additionally, the etiology of stroke was comparable between the two groups according to the TOAST classification (p = 0.288). In-hospital mortality was encountered in 15 (7.14%) and 14 (25.93%) patients in the same groups, respectively, with a significant increase in association with malignancy.
Regarding laboratory parameters, platelet count was statistically comparable between the two groups (p = 0.132), with mean values of 306 and 324 × 10³/ml in Groups A and B, respectively. On the other hand, both serum ferritin and d-dimer showed a significant increase in association with cancer (Group B) (p < 0.001). The former had mean values of 294.54 and 867.87 ng/ml, while the latter had mean values of 463.83 and 888.13 ng/ml in the same two groups, respectively. These data are shown in Table 2.
In Group B, the primary cancer sites included gastrointestinal tumors among other sites; Table 3 summarizes these data.
Discussion
Cerebrovascular disease and cancer are two of the most common diseases that contribute to death or disability around the world. The risk of an ischemic stroke in patients with malignant disease is substantially higher than in the general population, as has been documented in the literature [18] [19] [20]. This should encourage us to increase the study into the underlying causes and mechanisms of cancer-associated ischemic stroke.
This study was conducted to evaluate serum levels of ferritin and d-dimer in cases with ischemic stroke in cancer patients compared to non-cancer patients.
We reviewed the data of consecutive 264 patients, who were divided into two groups; Group A (210 non-cancer patients with stroke) and Group B (54 cancer patients with stroke).
According to the results above, we did not detect any significant difference between the two study groups in terms of patient demographics and most of their clinical risk factors, which should nullify any bias that might have skewed the results in favor of one group over the other.
In the current study, current smoking was reported by 41.9% and 37.04% of patients in the two groups, respectively (A and B), with no statistical difference detected (p = 0.257). Likewise, other authors denied any significant difference between cancer and non-cancer groups regarding the prevalence of smoking (p = 0.40), which was reported by 17.9% and 21.8% of patients in the two groups, respectively [1]. Smoking is a significant risk factor for malignancy and ischemic stroke, and it is also the underlying etiology of the vast majority of ischemic strokes in cancer patients [21] [22].
Our findings showed no difference between the study groups regarding the prevalence of hypertension (p = 0.126). It was detected in 70.95% and 62.96% of patients in groups A and B, respectively. Romeiro and his associates reported that hypertension was present in 71.43% and 79.47% of patients in cancer and non-cancer groups, respectively, with no statistical difference detected (p > 0.05) [23]. On the other hand, another study reported a higher prevalence of hypertension in the non-cancer group (78.8% compared to 54.5% in the cancer group p < 0.01) [1].
In the current study, diabetes mellitus was present in 40.48% and 46.3% of patients in Groups A and B, respectively (p = 0.158). Stefan and his associates also reported no significant difference between the two groups regarding the prevalence of diabetes (p > 0.05), which was 24% and 28% of patients in cancer and non-cancer groups, respectively [24]. Other authors confirmed the previous findings [25]. In contrast to the previous findings, another study reported a higher prevalence of diabetes in the non-cancer group (33.77% compared to only 16.07% of patients in the cancer group, p < 0.05) [23].
In our study, atrial fibrillation was detected in 28.1% and 24.01% of patients in groups A and B, respectively, which was statistically insignificant between the two study groups (p = 0.214). In another study, authors reported that atrial fibrillation was encountered in 15.25 and 20.6% of patients in cancer and non-cancer groups, respectively, with no statistical difference between the two groups (p = 0.382) [26]. Contrarily, Kim and Lee reported a higher prevalence of the same pathology in the non-cancer group (p < 0.01), as it was present in 23.1% of cases in that group, compared to only 9.6% in the cancer group [1].
In the current study, the prevalence of dyslipidemia did not significantly differ between the two groups (p = 0.178), as it was present in 17.62% and 12.96% of patients Groups A and B, respectively. Romeiro et al. also reported a comparable prevalence of dyslipidemia between cancer and non-cancer groups (p > 0.05).
Nevertheless, the reported incidence was higher than ours, as this morbidity was detected in 39.29% and 52.89% of patients in cancer and non-cancer groups, respectively [23]. However, other authors reported a significant difference between the same two groups regarding dyslipidemia (p < 0.01), which was present in 6.4% and 39.7% of patients in cancer and non-cancer groups, respectively [1].
Our findings showed that coronary artery disease was present in 10.48% and 12.5% of patients in Groups A and B, respectively, which yielded no significant difference between the two groups on statistical analysis. In line with our findings, Sorgun et al. reported that the same cardiac morbidity was noted in 21.7% and 23.4% of patients in cancer and non-cancer groups, respectively (p = 0.799) [26]. Other authors confirmed the previous findings [24].
In the current study, the etiology of stroke was also comparable between the two groups according to the TOAST classification (p = 0.288). Likewise, Bang and his associates reported similar stroke subtypes between cancer and non-cancer patients [13]. Moreover, Zhang et al. confirmed the previous findings as TOAST classification was not statistically significant between cancer and non-cancer stroke patients [27]. However, in another study, cancer patients with stroke showed a higher incidence of large-artery atherosclerosis and undetermined etiology and lower incidence of small artery occlusion and cardioembolism when compared to non-cancer patients (p = 0.02) [1]. The difference in stroke etiology between different studies could be explained by different sample sizes and patient characteristics.
In our study, platelet count was statistically comparable between the two groups (p = 0.132), and this is in agreement with Sorgun and his colleagues, who reported that the same parameter was statistically comparable between the two groups (218 vs. 229.5 × 10 3 /ml in cancer and non-cancer groups respectively) [26].
Our findings showed significant elevation of serum d dimer levels in association with cancer. It had mean values of 463.83 and 888.13 ng/ml in groups A and B, respectively (p < 0.001). A previous study also confirmed the previous findings, as the same parameter had mean values of 2370.5 and 324.2 ng/ml in cancer and non-cancer groups, respectively (p < 0.001) [1].
Sorgun et al. reported that d dimer had median values of 1519 and 590 ng/ml in cancer and non-cancer groups, respectively, with a significant rise in cancer patients (p < 0.001) [26]. An additional study confirmed the previous findings [28].
Multiple studies have implicated both hypercoagulability and embolism in the development of ischemic stroke in cancer patients. Cancer patients with ischemic stroke (who had no other common stroke risk factors) had significantly higher levels of d-dimer, which corresponded with the rise in embolic signals during transcranial Doppler monitoring [11] [13] [29].
When it comes to serum ferritin in the current study, it showed a significant increase in association with cancer (p < 0.001). It had mean values of 294.54 and 867.87 ng/ml in Groups A and B, respectively. To the best of our knowledge, there is a clear paucity of studies handling the role of serum ferritin in cancer-associated ischemic stroke. This represents a point of strength in favor of our study.
Wei and his coworkers also reported that high ferritin level was a strong predictor for ischemic stroke development in patients with non-Hodgkin lymphoma (p < 0.001). It had mean values of 564 ng/ml in cases that developed stroke, compared to 323.41 ng/ml in cases that did not develop that complication [15]. In cancer patients, elevated serum ferritin, a protein functioning as iron storage, is associated with inflammation and is also linked to a hypercoagulable condition [30] [31] [32].
Serum ferritin's precise mechanism in a hypercoagulable condition is yet unknown, but we hypothesized that it could cause a significant inflammatory response, resulting in hypercoagulability, which in turn enhances the development of ischemic stroke in patients with malignant neoplasms.
To date, a number of studies have sought to determine which types of cancer are most closely associated with stroke [33]. According to a previous study, among the included 1274 stroke patients, 13% of cases were also diagnosed with cancer, with the most common cancer kinds being urogenital, breast, and gastrointestinal [24]. Furthermore, a greater risk of stroke was observed in cancer patients who had been diagnosed with lung, pancreatic, colorectal, breast, or prostate cancer [34].
Our findings showed a significant increase in mortality rate in association with cancer (p = 0.005), which was encountered in 25.93% of cancer patients, compared to 7.14% of non-cancer cases. Similarly, other authors reported that the presence of cancer was associated with a significant rise in in-hospital mortality in patients with stroke (p = 0.013). Mortality occurred in 21.7% and 9.9% of cases in cancer and non-cancer groups, respectively [26].
Our study has some limitations. It was conducted in a single center, and the relation between the two studied biomarkers and patient prognosis was not evaluated. These shortcomings should be addressed in upcoming studies.
Conclusion
Based on the previous findings, serum ferritin and d-dimer showed a significant increase in cancer-associated ischemic stroke. This confirms the role of the hypercoagulable state, associated with malignancy in the development of this morbidity. | 2021-12-01T16:31:18.755Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a3069cc10919ed6861e09b1737cf094793cfeebc",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=113585",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2b8422694ffc63fd6138ff4e02cbfc37af7539b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
118712106 | pes2o/s2orc | v3-fos-license | Usage of Decision Support Systems for Conflicts Modelling during Information Operations Recognition
Application of decision support systems for conflict modeling in information operations recognition is presented. An information operation is considered as a complex weakly structured system. The model of conflict between two subjects is proposed based on the second-order rank reflexive model. The method is described for construction of the design pattern for knowledge bases of decision support systems. In the talk, the methodology is proposed for using of decision support systems for modeling of conflicts in information operations recognition based on the use of expert knowledge and content monitoring.
Introduction
Information sources exert an essential effect on people. In recent years, it has become evident that mass media may be efficiently used for spreading misinformation [1]. In addition, social experiments show that many people believe unconfirmed news and continue to spread it. For example, in [2] a review is presented of known false opinions and misinformation in American society. In [3], social experiments are described that studied the susceptibility of people to political rumors about the health care reform approved by the US Congress in 2010. Depending on the way the information was presented, 17-20% of the experiment participants believed the rumors, 24-35% of participants did not have a definite opinion, and 45-58% of those questioned rejected them. By an information operation (IO), an organized informational influence through mass media (social networks, forums, etc.) is meant, which targets the modification of social opinion about a certain subject (person, organization, institution, country, etc.). An IO can consist of a system of information attacks and can take a long time. For example, distribution of rumors about problems in a bank can provoke its depositors to withdraw their deposits, which in turn can lead to its bankruptcy. In general, misinformation events are related to IOs.
In [4] it is shown that IO is related to so-called weakly structured subject domains, since it possesses features specific to such areas: uniqueness, impossibility of formalizing the target function and, as a consequence, impossibility of constructing an analytical model, dynamicity, incomplete description, presence of the human factor, and absence of standards. Such subject areas are treated using expert decision support systems (DSS) [5].
By a conflict, an opposition of subjects searching for a scarce resource in the media is meant. In the case of an IO, the conflict is over the effect on the audience.
Conflicts Modelling Approaches
There exist a number of approaches and models from game theory for conflict modelling. The developed mathematical models of conflict help their participants build optimal strategies. This goes back to the theory of games proposed by O. Morgenstern and J. von Neumann [6], which was developed in many papers considering different aspects of the opposition process [7,8]. The Nash equilibrium [9], as a method of resolving non-cooperative games, should also be mentioned. In addition, papers have appeared in which the opposition process is considered not as a means for the victory of one player over another, but as a tool to determine the ways in which the parties interact [10]. Schelling was one of the first to apply game theory to international relations, considering the armament race in [10]. In this paper, he considered long-term conflicts and concluded that the establishment of continuous friendly relations between the parties can lead to higher profit (even accounting for higher losses during this period) than in short-term relations.
In classical works on game theory, the player's profit is determined by a constant, predetermined payoff matrix, which in many cases is quite difficult to obtain. At present, the method of target dynamic estimation of alternatives [11], based on the application of a hierarchy of goals, is widely used for decision-making in complex target programs. This method allows the effectiveness of each alternative (in this case, a possible player's move) to be calculated. An analytical expression describing the players' payoffs is in most cases difficult (sometimes impossible) to obtain, and the use of a hierarchy of goals is very convenient for describing the preferences of players. It is reasonable to use this model for conflict modelling, taking into account the reflexion of the conflict subjects and departing from a fixed sequence of steps toward a scenario in which each of the subjects performs a complex of actions in dynamics. Reflexive models [12,13] also allow the subjectivity of the opposing parties and the presence of compromises on some items to be taken into account.
The complete model describing the readiness of a subject with reflexion to accomplish some action (the model of reflexive subject choice based on the second-order rank reflexive model [12]) is described by function (1), where A is the subject's choice readiness; a1 is the influence of the environment on the subject; a2 is the psychological setting of the subject (the influence of the environment expected by the subject); and a3 is the subject's intentions.
The equation describing the self-esteem of the subject in the situation is given by (2). The self-esteem of the subject in the situation means the appearance of a "self-image" within the "self-image", whereby the subject estimates his own image of the situation, his idea of himself and his intentions.
Let us consider the general scheme of interaction of two subjects A and B in conflict conditions, subjected to the same influence of the external environment [12,13]. Subject A supposes that his counterpart B also possesses reflexion, i.e., has his own ideas about the environmental influence and his own plans and wishes in the situation. In this case, subject A in some manner interprets his own relations with subject B and his ideas about these relations. Then the choice readiness of subject A is described by function (3), where A is subject A's choice readiness, a1 is the influence of the environment on both subjects, and a2 is the influence of the environment expected by subject A. The expression describing the self-esteem of subject A in the conflict situation with subject B is given by (4), where A1 is the self-esteem of subject A in the conflict situation with subject B.
The choice readiness of subject B is described by function (5), and the expression describing the self-esteem of subject B in the conflict situation with subject A is as follows (6):
B 1 = (c 3 & d 3 → c 2 ) ˅ (c 4 & d 4 → d 2 )
where B 1 is the self-esteem of subject B in the conflict situation with subject A.
Let us consider how the model of reflexive behavior of subjects in conflict described above can be used for the construction of a knowledge base (KB) for a DSS.
Reflexive Model of Conflict in Knowledge Bases of Decision Support Systems
The result of the conflict of subjects will depend on the degree of goal achievement for each subject of the conflict. The winner will be the subject with higher value of the goal achievement degree. Thus, the model of the subject area of the conflict should be constructed in DSS KB in order to calculate the respective values of goal achievement degrees.
Based on the features of target decomposition in construction of a goal hierarchy graph and applying the method of target dynamic estimation of alternatives [11] one can assign some logical operations to DSS KB objects and to their relations. By means of DSS, in framework of which the above-mentioned method has been implemented, it is possible to model logical "or" (˅) as subgoals of one goal, logical negation as a negative influence of the respective goal, "XOR" as groups of goal compatibility.
Analyzing the reflexive model of the conflict of two subjects presented above, we can suggest the following design pattern for the DSS KB (Fig. 1). Black solid arrows indicate the positive influence of goals, and dashed red arrows indicate negative influence. Titles of the goals correspond to the designations in the above equations. In constructing this design pattern for the DSS KB (Fig. 1), the above-mentioned features of modelling logical operations have been taken into account. For assignment of the logical equations to objects and relations in the DSS KB, the functions describing the choice readiness of subjects A and B were transformed as in (7) and (8), and the functions describing the self-esteems of the subjects in the conflict situation were transformed as in (9) and (10). Since the winner will be the subject with the higher value of the goal achievement degree, let us assume that the main goal of the hierarchy, G (Fig. 1), is affected positively by the goal of subject A and negatively by the goal of subject B. Then, if the value of the achievement degree of the main goal G is above zero, subject A wins the conflict; if this value is less than zero, subject B is the winner; and in the case of a zero value there is a draw.
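A minimal sketch (ours, not the DSS implementation) of the winner-determination rule just stated, assuming the goal achievement degrees of the two subjects have already been computed by the DSS:

```python
def main_goal_degree(degree_a: float, degree_b: float) -> float:
    # Subject A's goal influences G positively, subject B's negatively.
    return degree_a - degree_b

def conflict_outcome(degree_a: float, degree_b: float) -> str:
    g = main_goal_degree(degree_a, degree_b)
    if g > 0:
        return "subject A wins"
    if g < 0:
        return "subject B wins"
    return "draw"

print(conflict_outcome(0.72, 0.55))  # subject A wins
print(conflict_outcome(0.40, 0.40))  # draw
```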
DSS allows also to calculate separately the goal achievement degrees for subject A and subject B.
The DSS KB design pattern obtained in this way should be extended to a full-range KB using the methods of DSS KB construction for identifying information operations [14,15]. Supplementing the KB with the use of expert knowledge only, as described in [15], requires a group of experts. The work of experts is rather expensive and time-consuming. Also, for small expert groups the competency of the experts should be taken into account [16], leading to additional time expenses for searching for extra information and processing it. Therefore, expert information should be used sparingly in the process of DSS KB construction for IO recognition.
Methodology for Conflicts Modelling by Decision Support Systems in Information Operations Recognition
Thus, the essence of the suggested method for modelling conflicts using a DSS during IO recognition is as follows: 1) A preliminary study of the IO object is carried out; the subjects of the conflict are determined, together with their goals and the persons, organizations and companies related to the subjects of the conflict. In the process of informational and analytic research, a common problem often faced by analysts is compiling a ranking of a set of objects or alternatives (products, electoral candidates, political parties, etc.) according to some criteria. For this task, special approaches should be applied that consider the importance of information sources during aggregation of alternative rankings [17].
2) The respective design pattern for the DSS KB is selected and modified if necessary. In modifying the design, one should take into account the features of modelling logical operations in the DSS KB.
3) The selected design pattern for the DSS KB is extended to a full-range KB. A group expertise on the determination and decomposition of IO goals is carried out; thus, the decomposition of the IO as a complex weakly structured system takes place. For this purpose, the system for distributed acquisition and processing of expert information (SDAPEI) is used.
4) The respective KB is supplemented using DSS tools, taking into account the results of the group expertise carried out by means of SDAPEI and the available objective information. To refine queries to content monitoring systems (CMS) and to supplement the DSS KB with missing objects and links, the keyword network of the subject area [14] of the respective IO is used. 5) Using CMS tools, the dynamics of the thematic data stream are analysed, and the DSS KB is supplemented with partial influence coefficients. 6) Using DSS tools, recommendations are calculated on the basis of the constructed KB.
Recommendations (in the form of dynamic estimations of the efficiency of topics related to the IO object and values of the goal achievement degrees of the conflict subjects), obtained by the technique described above, are used for estimating IO-related damage [15], for organizing information counter-actions with account taken of information sources, and for predicting the result of the conflict.
The suggested method of modelling the opposition of two subjects can be used in IO recognition to model the confrontation of lobbyists and their opponents, for example in an event such as Brexit [18].
Conclusions
In this talk, the advantages of using DSS for modelling conflicts during information operation recognition have been substantiated. An information operation is considered as a complex weakly structured system.
The model of the conflict between two subjects is presented based on the second-order rank reflexive model.
The method is described enabling the construction of DSS's knowledge bases based on the model of the conflict between two subjects.
A method for applying DSS to conflict modelling in information operation recognition is suggested, together with its ranges of applicability.
"year": 2019,
"sha1": "a67154ac6a098c48f3e059bb176a90f7fadf0402",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a67154ac6a098c48f3e059bb176a90f7fadf0402",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
156026305 | pes2o/s2orc | v3-fos-license | Oscillatory and Asymptotic Behaviour of Solutions of Two Nonlinear Dimensional Difference Systems
This paper deals with some oscillation criteria for the two-dimensional difference system of the form Δx_n = b_n y_n^α, Δy_n = −a_n x_n^β, n ∈ N(n_0) = {n_0, n_0 + 1, n_0 + 2, ...}. Examples illustrating the results are inserted.
Introduction
Consider a nonlinear two-dimensional difference system of the form (1.1): Δx_n = b_n y_n^α, Δy_n = −a_n x_n^β, n ∈ N(n_0). In the last few decades there has been increasing interest in obtaining necessary and sufficient conditions for the oscillation and nonoscillation of two-dimensional difference equations; see, for example, [1]-[10] [11] and the references cited therein.
Further, it will be assumed that {b_n} is non-negative for all n ≥ n_0. The oscillation criteria for system (1.1) were studied in [12] under one case of the coefficient conditions. Therefore, in this paper we consider the other case and investigate the oscillatory behaviour of solutions of system (1.1). Hence the results obtained here complement those of [12].
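To illustrate the oscillatory behaviour under study, a small numerical experiment follows; it iterates system (1.1) in the reconstructed form Δx_n = b_n y_n^α, Δy_n = −a_n x_n^β, and the constant coefficient sequences below are arbitrary choices of ours, not taken from the paper.

```python
# Iterate Delta x_n = b_n * y_n**alpha, Delta y_n = -a_n * x_n**beta
# with alpha = beta = 1 (a ratio of odd positive integers).
def simulate(n_steps=40, alpha=1, beta=1, x0=1.0, y0=0.0):
    xs, ys = [x0], [y0]
    for n in range(n_steps):
        a_n, b_n = 0.5, 0.5          # arbitrary positive coefficient sequences
        x, y = xs[-1], ys[-1]
        xs.append(x + b_n * y**alpha)
        ys.append(y - a_n * x**beta)
    return xs, ys

xs, _ = simulate()
sign_changes = sum(1 for u, v in zip(xs, xs[1:]) if u * v < 0)
print(sign_changes)  # > 0: the component x_n changes sign for this choice
```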
We may introduce the function A_n defined by (1.3). Throughout this paper, condition (1.2) is tacitly assumed, and A_n always denotes the function defined by (1.3).
In Section 2, we establish necessary and sufficient conditions for the system (1.1) to have solutions which behave asymptotically like nonzero constants or linear functions and in Section 3, we present criteria for the oscillation of all solutions of the system (1.1).Examples are inserted to illustrate some of the results in Section 4.
Existence of Bounded/Unbounded Solutions
In this section first we obtain necessary and sufficient conditions for the system x y .such that ( ) as n → ∞ , where Proof.We may assume without loss of generality that and let ( ) be large enough such that ( ) and ( ) Let B be the space of all real sequences { }, n y y n N = ≥ with the topology of pointwise convergence.We now define X to be the set of sequences x B , , .
where ( ) sup : and define Y to be the set of sequences y B ∈ .
Such that
, .
and ( ) Clearly X Y × is a bounded, closed and convex subset of B B × .
First we show that T maps X Y we have and so, using (2.6) and (2.7), we see that ( ) .
Finally, in order to apply Schauder-Tychonoff fixed point theorem, we need to show that ( ) In view of recent result of cheng and patula [8] it suffices to show that ( ) , , , , are uniformly cauchy and so ( ) Therefore by Schauder-Tychonoff fixed point theorem, there is an element ( ) T x y x y = . From (2.12), (2.13) and (2.14) as n → ∞ .The proof is left to the reader.
Before stating and proving our next results, we give a lemma which is concerned with the nonoscillatory solution of (1.1).
x y be a solution of (1.1) for ( ) and , where θ is a nonnegative constant.
This lemma was proved by Graef and Thandapani [3] and is very useful in the following theorems. In our next theorem, we establish a necessary condition for the system (1.1) to have a nonoscillatory solution satisfying condition (2.17).
Proof.Let ( ) x y be a nonoscillatory solution of the system (1.1) for ( ) . Since n b is not identically zero for ( ) , .
, from the first equation of system (1.1), we obtain and hence ( ) ( ) .
From the second inequality of (2.21) and the following inequality ( ) ( ) ( ) where "d" being the constant, we see that as n → ∞ , from the first equation of system (1.1), we obtain for n N ≥ ( ) ( ) ( ) which in view of boundedness of n x , implies that The inequalities (2.24) and (2.25) clearly imply (2.20).This completes the proof.
we conclude this section with the following theorem which gives a necessary condition for the system (1.1) to have a nonoscillatory solution of the form Using the inequality ( ) Because of condition (3.1), the last inequality implies ( ) Next from the second inequality (2.21), we have ( ) ( )( ) ( ) , .
Again using the argument as in the proof of Theorem 2.5, we obtain ( ) for all n N ≥ .So by condition on (3.1), we have .
1 n + to j, we obtain ∈ , α and β are ratio of odd positive integers.By a solution of Equation (1.1), we mean a real sequence { }
of the boundedness of n x implies that | 2019-05-17T14:13:36.932Z | 2019-04-30T00:00:00.000 | {
"year": 2019,
"sha1": "9bfaa951514e26f47779b3f3288b05acd48f6990",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=92204",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cb00a679cc840cdb931ecd28db280f8ce8a93483",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236452928 | pes2o/s2orc | v3-fos-license | Field evaluation of the gut microbiome composition of pre-school and school-aged children in Tha Song Yang, Thailand, following oral MDA for STH infections
Soil-transmitted helminths, such as roundworms (Ascaris lumbricoides), whipworms (Trichuris trichiura) and hookworms (Necator americanus and Ancylostoma spp.), are gastrointestinal parasites that occur predominantly in low- to middle-income countries worldwide and disproportionately impact children. Depending on the STH species, health status of the host and infection intensity, direct impacts of these parasites include malnutrition, anaemia, diarrhoea and physical and cognitive stunting. The indirect consequences of these infections are less well understood. Specifically, gastrointestinal infections may exert acute or chronic impacts on the natural gut microfauna, leading to increased risk of post-infectious gastrointestinal disorders, and reduced gut and overall health through immunomodulating mechanisms. To date, a small number of preliminary studies have assessed the impact of helminths on the gut microbiome, but these studies are conflicting. Here, we assessed STH burden in 273 pre-school and school-aged children in Tha Song Yang district, Tak province, Thailand receiving annual oral mebendazole treatment. Ascaris lumbricoides (107/273) and Trichuris trichiura (100/273) were the most prevalent species and often occurred as co-infections (66/273). Ancylostoma ceylanicum was detected in a small number of children as well (n = 3). All of these infections were of low intensity (≤4,999 or ≤999 eggs per gram for Ascaris and Trichuris, respectively). Using this information, we characterised the baseline gut microbiome profile and investigated acute STH-induced alterations, comparing infected with uninfected children at the time of sampling. We found no difference between these groups in bacterial alpha-diversity, but did observe differences in beta-diversity and specific differentially abundant OTUs, including increased Akkermansia muciniphila and Bacteroides coprophilus, and reduced Bifidobacterium adolescentis, each of which have been previously implicated in STH-associated changes in the gut microfauna.
Introduction
Soil-transmitted helminths (STHs), namely roundworms (Ascaris spp.), whipworms (Trichuris trichiura) and hookworms (Ancylostoma spp. and Necator americanus), are among the most common parasitic causes of neglected tropical disease worldwide [1]. These helminthiases disproportionately affect children, in whom they cause a global estimated burden of 1.2 million disability-adjusted life years (DALYs) [2]. Depending on the infective species, infection intensity and host nutritional and immunological status, symptoms include malnutrition, diarrhoea, abdominal pain and anaemia, which may lead to stunting, wasting and impaired physical or cognitive development [3]. In school-aged children specifically, STH infections have been shown to cause cognitive and educational deficits [4].
Since 2012, in response to the UN Millennium Development Goals, there has been a concerted global effort to reduce STH burden in children through annual or biannual benzimidazole treatment of >75% of at-risk, school-aged children in endemic regions [5]. These programs have reduced the intensity of infection and disease-related morbidity [6]. However, quantifying the magnitude of this change and its sustainability is limited by the availability of methods with adequate sensitivity to diagnose infection under conditions of low worm burden and low population prevalence [7]. As endemicity rates fall, there is a need to move from diagnostic tools such as the Kato-Katz thick smear (KKTS) method toward molecular-based methods such as quantitative PCR [8,9].
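For context on the KKTS readout used in this study, a short helper (ours) converts duplicate Kato-Katz slide counts to eggs per gram and bins intensity against the light-intensity cut-offs used here; the 41.7 mg template and its multiplier of 24 are the standard assumption for this method, not a detail stated by the authors.

```python
# Kato-Katz: eggs counted on a 41.7 mg template scale to eggs per gram (EPG)
# with a multiplier of 24 (standard assumption for this template size).
KK_MULTIPLIER = 24

LIGHT_INTENSITY_MAX_EPG = {
    "Ascaris lumbricoides": 4999,
    "Trichuris trichiura": 999,
}

def epg_from_counts(counts):
    """Mean EPG from duplicate slide counts read by independent microscopists."""
    return sum(counts) / len(counts) * KK_MULTIPLIER

def intensity_class(species, epg):
    cutoff = LIGHT_INTENSITY_MAX_EPG[species]
    return "light" if epg <= cutoff else "moderate-or-heavy"

epg = epg_from_counts([12, 15])                            # duplicate KKTS counts
print(epg, intensity_class("Ascaris lumbricoides", epg))   # 324.0 light
```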
In addition to the direct consequences of infection, gastrointestinal pathogens may alter the host gut microbiome [10], and impact bacterial diversity and abundance [11][12][13], brain function [14][15][16], digestive health [17,18], immune function [19][20][21], and development [17,18]. Although this impact is well studied for viral, bacterial and protistan causes of diarrhoeal disease, the effects of STH infections on the gut microbiome are not well characterised and appear complex [10]. Several studies indicate that STH infection increases bacterial diversity and richness [11][12][13]. However, not all STH studies support these findings [22], and the potential pathology of such changes has not been determined [16]. One study found that gut microbiome composition was predictive of an STH infection [23]. However, ascribing cause and effect from this finding is not simple, as gut microbial composition may influence susceptibility to STH infection [24]. Additional studies of microbiome composition in helminth infections are needed. One challenge faced by such studies in highly endemic populations is that there are no data to differentiate acute from chronic effects of these parasites on gut microfauna, and little to indicate whether individuals who are uninfected at the time of study have been affected by prior helminth infection or represent an appropriate 'healthy control' for microbiome characterisation.
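Alpha-diversity claims of the kind cited above are usually computed from OTU count tables; the following minimal numpy sketch (ours) of the Shannon index uses made-up counts for one infected and one uninfected child.

```python
import numpy as np

def shannon_index(counts):
    """Shannon alpha diversity H' = -sum(p * ln p) over non-zero OTU proportions."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

# Hypothetical OTU counts for an STH-infected and an uninfected child.
infected = [120, 80, 40, 30, 10, 5]
uninfected = [200, 60, 15, 5, 0, 0]
print(shannon_index(infected), shannon_index(uninfected))
```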
Here, we undertook a cross-sectional study of STH infection and faecal microbiome composition in a population of pre-school and school-aged children in a remote community in Tha Song Yang district, Tak province, Thailand. Although Thailand has comparatively low national STH prevalence, the country has many rural and refugee communities, in which low-intensity infections remain highly endemic despite ongoing oral mass drug administration (MDA) programs [25,26].
Ethics statement
Oral informed consent was received from all parents or legal guardians for participants under the age of 18 years. Ethics approval, including approval of oral informed consent, was provided by Mahidol University (FTM ECF-019-05) as well as by the Human Research Ethics Committee of the Walter and Eliza Hall Institute of Medical Research (HREC Project #16/10).
Study area, faecal sample collection and processing
A total of 273 faecal samples from 2-to 6-year-old pre-school and school-aged children were collected from the 28 th to 30 th of March 2017 among eleven day-care centres in the Mae Song subdistrict, Tha Song Yang district, Tak province, Thailand. In addition, the local field team recorded epidemiological information for each child, such as age, sex, school, date of birth, height and weight. An overview of the entire process can be found within the supporting information files (S1 Fig). Stool collection kits consisting of a faecal collection container, disposable gloves, a zip-locked bag and study information sheet were distributed to each child's caregiver. Within 24 hours of distribution caregivers were requested to return a double-contained stool sample (30 mL max volume). For each child a de-identified participant ID, the date and time of stool collection and processing were recorded. Stool samples were returned within an average of 12 hours and 1 gram of sample was immediately examined for STH infections by direct duplicate Kato Katz thick smear performed by two independent microscopists. The remainder of each stool sample was transferred to the laboratory field site in Mae Sot district, Tak province, Thailand within 24 hours on wet ice. There, stools were pre-processed for international transport according to their classification using the Bristol Stool Chart [27]. Type 5-7 stools were centrifuged at 14,000 x g for 5 minutes, the supernatant discarded and part of the pellet stored in DESS at room temperature (1:1 sample to preservative ratio; 20% dimethyl sulfoxide, 0.25 M ethylenediaminetetraacetic acid, saturated sodium chloride) [28]. Type 1-4 stools were stored directly in DESS without the requirement for an additional centrifugation step. Preserved stools were transported to the main laboratory located at the Department of Helminthology, Faculty of Tropical Medicine, Mahidol University (Bangkok, Thailand) within 72 hours on wet ice, where they were split into two 1-1.5 gram aliquots in 2 mL universal Nunc tubes and stored at 4˚C. Cold stored DESS preserved stools were shipped to the Walter and Eliza Hall Institute of Medical Research on cold packs. Upon arrival, stool samples were longterm stored at -20˚C.
DNA extraction
Stool samples were thawed at room temperature, homogenized using a sterile wooden spatula and DESS preservative removed by resuspending each sample twice in 50 mL sterile water, centrifuging at 2,000 x g for 3 minutes and decanting the supernatant. Up to 150 mg of the faecal pellet was used for DNA extraction using the Bioline ISOLATE Fecal DNA Extraction kit according to the manufacturer's instructions (Bioline, Australia). Extracted DNA was assessed for quality and quantity using the Qubit 2.0 fluorometer (Thermo Fisher Scientific, Australia).
STH infection diagnosis and microbiome characterisation
A previously published multiplexed-tandem qPCR assay [29] was used to qualitatively and quantitatively detect STH infections among all faecal samples. We used this test to screen for roundworm (Ascaris lumbricoides), whipworm (Trichuris trichiura) and hookworm (Ancylostoma spp. and Necator americanus) species following the manufacturer's instructions (AusDiagnostics Pty. Ltd., Australia).
The faecal microbial community for DESS preserved samples was characterised by 16S rRNA gene sequencing. Each stool sample was sequenced in duplicate using the universal bacterial 16S V4 primers (515F 5'-CTGAGACTTGCACATCGCAGCGTGYCAGCMGCCGCGGTAA-3' and 806R 5'-GTGACCTATGAACTCAGGAGTCGGACTACNVGGGTWTCTAAT-3') on the Illumina MiSeq platform (Illumina, USA) [30,31]. Each 96-well PCR plate contained a negative (dH2O) and positive (ZymoBIOMICS microbial community DNA standard, Zymo Research, Integrated Sciences, Australia) control, with individual reactions conducted under the following conditions: 10 μl OneTaq master mix with standard buffer (2x) (New England Biolabs, USA), 0.4 μl of each primer (10 μM) and 2 μl DNA template, made up to a total reaction volume of 20 μl with water, were amplified for 3 minutes at 95°C, followed by 20 cycles of 45 seconds at 95°C, 60 seconds at 50°C and 90 seconds at 72°C, with a final extension of 10 minutes at 72°C. Following first-round amplification all PCR products were diluted 1:5 in molecular-grade H2O and the diluted products used as template for a second PCR amplification using unique adapter barcodes (8-mer) for each sample. Each PCR reaction was conducted under the following conditions: 15 μl OneTaq master mix with standard buffer (2x) (New England Biolabs, USA), 1.5 μl of each primer (10 μM) and 10 μl of diluted PCR product, made up to a total reaction volume of 30 μl using water, were amplified for 3 minutes at 95°C, followed by 25 cycles of 45 seconds at 95°C, 60 seconds at 55°C and 90 seconds at 72°C, with a final extension of 10 minutes at 72°C. Quality control was performed using a representative selection of PCR products from all rows and columns on each plate through size separation on a 1.5% agarose gel. Following amplification, 5 μl of each product was pooled into the final sequencing library and any artifacts removed using the NucleoMag NGS clean-up and size-select kit (GraphPad Software Inc., USA). Stool samples were deemed infection positive or negative as per the MT Analysis Software output (AusDiagnostics Pty. Ltd., Sydney). Kappa statistics (interrater reliability), sensitivity, specificity and correlation with epidemiological variables were calculated to evaluate performance of the MT-PCR method compared to the gold standard KKTS using Stata 12.1 (STATA Corp, USA). Impacts of STH infections defined as stunting and wasting were calculated as height-for-weight, height-for-age and weight-for-age z-scores compared to the 2006 WHO child growth standard [32]. Anthropometric z-score calculations were performed in Stata 12.1 (STATA Corp, USA) using the "zscores06" command [33].
Gut microbiome characterisation of the V4 16S rRNA gene. Following comparative analysis of three-way sample preservation as described elsewhere [34], we examined STH infection prevalence and intensity based on DESS preserved stools only. For microbiome characterisation, we tested for sequence batch effects in the initial 48 and subsequent 225 sequenced DESS preserved samples. We did this by merging both datasets before the OTU clustering step in vsearch and plotted alpha-(observed richness p-value 0.9654; inverted Simpson p-value 0.9088) and beta-diversity (p-value 0.001). Because the intra-sample diversity showed no significant change among sequencing runs, we combined the data for all subsequent analyses.
The microbial community diversity and composition were analysed using an established protocol [35]. A QIIME1 mapping file containing metadata for each replicate according to its unique index barcode was generated [36], followed by merging of paired-end reads using PEAR [37] with the following parameters: minimum overlap 100 bp, maximum assembly length 600 bp and minimum assembly length 80 bp. Resulting sequences were demultiplexed, orientation adjusted (p = 0.90, Phred quality score cut-off 29, base call accuracy >99.9%) and the 16S V4 rRNA gene universal primers trimmed (https://github.com/PapenfussLab/Jocelyn_tools/blob/master/amplicon_tools/trim_fasta_amplicons.py). Using Mothur 1.39.5 the trimmed sequences were aligned to the Silva database, and the alignment was cut and filtered to remove overhangs at both ends [38]. Next, sequences were deduplicated, clustered into operational taxonomic units (OTUs), a reference generated, and chimeras filtered using vsearch (minimum sequence length 64 bp). Taxonomy was assigned using the Greengenes database [39]. A phylogenetic tree was generated in QIIME1 using the make_phylogeny.py script by aligning all sequences to the reference in Mothur. Finally, the metadata, taxonomy and phylogeny information were compiled into a biom file with all quality-filtered sequences. The full pipeline is available online (https://github.com/PapenfussLab/RothSchulze_microbiome).
In RStudio 1.1.453 technical replicates were merged and data (a) rarefied for alpha-diversity and differential abundance analysis, or (b) normalised using the cumulative sum squaring in QIIME1 for beta-diversity analysis, using phyloseq 1.26.1 [40]. Alpha-diversity measures were estimated by plotting observed richness, inverted Simpson and Shannon estimated diversity indexes using the plot_richness function in phyloseq 1.26.1 and visualized with microbiome-Seq 0.1 (https://github.com/umerijaz/microbiomeSeq). Statistical analysis was performed with pairwise analysis of variance (ANOVA) tests on a fitted, linear and releveled regression model on observed richness data (estimate_richness function in phyloseq 1.26.1). Relative and differential taxonomic abundance analysis were performed by estimating the mean proportion, fitting a linear model and applying a pairwise ANOVA test in RStudio 1.1.453, and the 'DESeq2_nbinom' algorithm in QIIME1, respectively. Beta-diversity measures were estimated using the ordinate and distance function with Bray Curtis measure for PCoA and weighted UniFrac for NMDS in phyloseq 1.26.1. Statistical analysis was performed with Euclidian distance and 999 permutations using the adonis function in the vegan package 2.5-4 and pairwise.perm.manova function in RVAideMemoire package 0.9-73. A p-value threshold of <0.05 was used for statistical significance following correction for multiple testing by the Benjamini-Hochberg false discovery rate (FDR) method for all tests. All data were visualised using the phyloseq 1.26.1 and ggplot 3.1.0 packages in RStudio 1.1.453 (RStudio, Inc., USA).
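The diversity analysis above was carried out in R with phyloseq and microbiomeSeq; purely as an illustration of the alpha-diversity indices named there (observed richness, Shannon, inverted Simpson), the short Python sketch below computes them for a single vector of OTU counts. The counts are invented for the example and are not taken from the study data.

```python
import math

def alpha_diversity(otu_counts):
    """Observed richness, Shannon and inverse Simpson indices from one
    sample's OTU read counts."""
    counts = [c for c in otu_counts if c > 0]
    total = sum(counts)
    proportions = [c / total for c in counts]

    observed_richness = len(counts)                        # OTUs present
    shannon = -sum(p * math.log(p) for p in proportions)   # natural-log Shannon entropy
    inverse_simpson = 1.0 / sum(p * p for p in proportions)
    return observed_richness, shannon, inverse_simpson

# Hypothetical OTU counts for a single sample
print(alpha_diversity([120, 30, 5, 0, 845, 2]))
```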
Cohort epidemiology
All 273 participants (133 male and 132 female) included in this study were pre-school and school-aged children aged 2 to 6 years.
Child growth measures were available for 246 (90.11%, aged 0-5 years) of the 273 children, with average measures as follows: mean age 59.2 completed months (36-82), mean weight 15.8 kg (8.2-21.7) and height 103.5 cm (78-122) (S2 Table). Of this subset, 27 (11.0%) children were stunted (height-for-age < -2 standard deviations from the median) including 15 with any STH infection, six (2.4%) were wasted (weight-for-height < -2 standard deviations from the median) including two with any STH infection and 17 (6.9%) were underweight (weight-for-age < -2 standard deviations from the median) including nine with any STH infection. Further, four (1.6%) children were deemed too tall for their age (height-for-age > 2 standard deviations from the median) including two with any STH infection, and 1 (0.4%) had a weight-for-height measure above two standard deviations from the median. We found no association between any of these metrics and bacterial (alpha-/beta-) diversity measures.
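As a minimal sketch of the threshold logic used above to classify growth status, the function below flags stunting, wasting and underweight from WHO z-scores; the z-scores themselves would come from a reference standard (computed in Stata with the zscores06 command in this study), so the input values here are hypothetical.

```python
def classify_growth(haz, whz, waz):
    """Flag growth status from WHO z-scores: haz = height-for-age,
    whz = weight-for-height, waz = weight-for-age."""
    flags = []
    if haz < -2:
        flags.append("stunted")
    elif haz > 2:
        flags.append("tall-for-age")
    if whz < -2:
        flags.append("wasted")
    elif whz > 2:
        flags.append("high weight-for-height")
    if waz < -2:
        flags.append("underweight")
    return flags or ["within +/- 2 SD of the reference median"]

# Hypothetical z-scores for one child
print(classify_growth(haz=-2.4, whz=-0.8, waz=-1.1))  # ['stunted']
```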
Total infection prevalence of any helminth species by microscopy was 49.3% by KKTS (Table 1). Infection intensity as determined by microscopic egg count ranged from 1 to >1,000 As. lumbricoides fertilized/unfertilized eggs (mean egg count 595.89) and from 1 to 190 T. trichiura eggs (mean egg count 21.23). We note that 51 (46.8%) and 15 (17.4%) As. lumbricoides KKTS-positive samples had egg densities reaching the thresholds of 1,000 and 500 eggs per gram at which counting ceased, respectively (S2 Fig). Hookworm prevalence was not assessed by microscopy as it was not feasible to process samples within one hour after drop-off [41]. Although not part of this study, each sample was additionally assessed in the field for other parasitic infections.
Table 1. STH infection prevalence and intensity among SAC children from Tha Song Yang district, Tak province, Thailand, including all DESS preserved samples (n = 273). Interrater reliability (kappa) values are classified as follows: κ 0.0-0.2 none to slight, 0.2-0.4 fair, 0.4-0.6 moderate, 0.6-0.8 substantial, 0.8-1.0 almost perfect [42]. Sensitivity is classified as the rate of true-positive samples and specificity as the rate of true-negative samples compared to a known gold standard (Kato-Katz thick smear). Epidemiological and microscopy data for 5 individuals were missing.
By MT-PCR, infection prevalence was 39.2% (n = 107), 36.6% (n = 100) and 1.1% (n = 3) for As. lumbricoides, T. trichiura and An. ceylanicum, respectively (Table 1). Diagnostic target gene copy number for STH-positive samples ranged from 39 to 4,711,749 for As. lumbricoides (mean copy number 272,931.8 and Ct value 23.8), 85 to 5,251 for An. ceylanicum (mean copy number 3,058.3 and Ct value 28.9) and 33 to 6,343 for T. trichiura (mean copy number 709.47 and Ct value 30.9), as determined by MT-PCR concentration (S1 Table). For the 51 As. lumbricoides samples containing >1,000 eggs per gram, target copy number ranged from 0 to 4,711,749.0 (mean copy number 505,851.5).
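To illustrate the diagnostic performance measures referred to in the Table 1 legend (Cohen's kappa, sensitivity and specificity of MT-PCR against the Kato-Katz gold standard, calculated in Stata in this study), the sketch below derives them from a 2x2 confusion matrix; the counts are illustrative and not the study's.

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa for an index test versus a
    gold standard, from 2x2 confusion-matrix counts."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate

    observed_agreement = (tp + tn) / n
    # Agreement expected by chance from the marginal totals
    expected_agreement = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
    return sensitivity, specificity, kappa

# Hypothetical counts: index test versus gold standard
print(diagnostic_performance(tp=95, fp=12, fn=10, tn=156))
```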
Gut microbiome characterisation. We explored the acute effects of STH infections on gut microbial health comparing children with ("infected"; n = 141) and without ("uninfected"; n = 132) STH infection at the time of sampling. Information on the prior infection history of children in our study was not available, and we cannot make inference on the longer-term impacts that STH infection may have on microbial development in the gut. Overall, Firmicutes, Bacteroidetes, Proteobacteria and Actinobacteria were the most abundant bacterial phyla across the complete cohort (Fig 1 and S3 Table). The majority (56.5%) of operational taxonomic units (OTUs) assigned to bacterial genera were classified as Prevotella, Faecalibacterium, Succinivibrio and Catenibacterium. Before characterising differences in the microbiome associated with STH infection, we examined the data at a cohort level, considering participant sex, age and geographic location (Fig 1). We found no significant differences at a higher taxonomic level between male and female participants. However, two OTUs (OTU 65 Bacteroides coprophilus, OTU 9 Mollicutes) were significantly more abundant and six OTUs (OTU 214 Prevotella, OTU 84 Parabacteroides, OTU 89 Bacteroides uniformis, OTU 23 Bacteroides fragilis, OTU 200 Ruminococcus and OTU 20 Lactococcus garvieae) significantly less abundant in females compared with males (Fig 2 and S5 Table; p-value < 0.05). Further, we observed no significant differences in alpha-diversity or beta-diversity based on participant sex and age (Figs 3 and 4 and S4 Table).
Geographical location of some schools correlated with inverted Simpson alpha-diversity, with children from Mae Nil Khi and This Mo Ko Tha having significantly lower alpha-diversity compared to children from Huay Manhok (p-value < 0.05); and children from Mae Nil Khi showing a lower alpha-diversity than those from Mae Kho and Mae Song (Fig 3). Beta-diversity also significantly differed (p-value 0.032) among schools (Fig 4 and S4 Table). These differences were apparent at a taxonomic level, with children from Huay Manhok Khi having significantly more Actinobacteria (p-value 0.0217) and those from Mae Salid Noi, Mae Nil Khi and Mae Kho having significantly less Eubacterium (p-values 0.0311, 0.0385, 0.0433).
We found no significant difference in bacterial alpha-diversity between STH infected and uninfected study participants (Fig 3 and S4 Table). However, their faecal microbiomes did differ significantly in beta-diversity (Fig 4, p-value 0.021). Although we found no significant changes at higher taxonomic levels, there were statistically significant (p-value < 0.05) changes in differential abundance at the OTU level (S5 Table). Five OTUs, one Verrucomicrobia (OTU 91, Akkermansia muciniphila), two Bacteroidetes (OTU 124, Prevotella; OTU 65, Bacteroides coprophilus), one Firmicutes (OTU 118, Ruminococcaceae) and one Tenericutes (OTU 9) were significantly more abundant and one OTU (OTU 527, Bifidobacterium adolescentis) was less abundant comparing uninfected versus infected children (Fig 2 and S5 Table). Interestingly, when comparing differential OTU abundance between only As. lumbricoides positive (n = 41) and uninfected participants we found a large number of OTUs (n = 16) to be less abundant compared to three more abundant. The same is true for only T. trichiura positive samples (n = 33) (25 less and 2 more abundant) (Fig 2 and S5 Table).
Discussion
In this study, we characterised the molecular STH infection status of Thai pre-school and school-aged children as well as their baseline gut microbiome profile. The large proportion of co-infections with As. lumbricoides and T. trichiura in this subpopulation is consistent with global STH-endemic estimates in other regions [43]. Using the highly sensitive MT-PCR we were able to identify three An. ceylanicum infections that were not detected by microscopy. We investigated the association between infection status and health metrics (weight and height), but found no significant difference compared with uninfected children.
We also explored the acute effects of current STH infection on the faecal microbiome. Overall, the predominant bacterial phyla, with Bacteroidetes, Firmicutes, Proteobacteria and Actinobacteria being most abundant, were consistent with similar studies of enteric bacteria elsewhere [44][45][46]. A multitude of factors are able to shape the gut microbiome composition. Besides infection [10,47], this can include diet [48], age [49], caesarean versus vaginal birth [50], medication, particularly antibiotic usage [51], and numerous other factors [52]. In our study, we noted statistically significant differences in alpha-diversity with school location, possibly reflecting dietary differences. Noting these host factors, we found no evidence for an association between sex, age or school and STH infection and thus are able to consider the effect of infection despite the confounding influence they may have had on microbial composition. Although a small proportion of children were stunted, wasted and underweight for their age, we found this did not impact microbial diversity compared to healthy children.
Numerous studies have explored the impact of STH infections on the gut microbial community, and in turn on the host immune system [53,54], child development and the risk of chronic post-infectious gastrointestinal disorders, as has been seen for parasitic protists, such as Giardia [55]. The results of these studies are highly variable, have been reviewed in detail elsewhere [56,57], and highlight the complexity of this topic. The lack of uninfected, community-based controls, or rather, how these controls are defined, is one major challenge. In our study, we assessed the faecal microbiome of children with, at the time of sampling, patent STH infections and compared this to samples collected from uninfected children from the same community and at the same point in time. However, it is important to recognize that our observations are based on a snapshot in time and the prior infection history of our study participants is not available. This point is important in considering current studies in this field. Studies of helminth infection in mouse models have found the most consistent evidence for helminth-induced reprofiling of the gut microbial community [56], showing a loss in alpha-diversity and changes in the relative abundance of key microbial phyla, including reductions in Firmicutes and Bacteroidetes, and statistically significant shifts in abundance of a variety of specific OTUs. Studies comparing experimentally infected human volunteers to uninfected controls in non-endemic regions generally yield comparable results, showing altered alpha-diversity, species richness and the relative abundance of specific taxa, such as species of Bacteroides and Turicibacter [57][58][59]. However, to date, field-based studies using cross-sectional surveillance have produced more variable results [56,57], which are likely affected by the STH species and other complex biological, technical and environmental factors [57]. Recently, field studies conducted in Malaysia, Liberia and Indonesia found STH infections correlated with an increase in the host gut microbial diversity and richness [11,12], but another in Ecuador found no effect [22].
Here, we found no difference in alpha-diversity or relative abundance with STH infection. This is largely consistent with other field-based studies [56,57]. We did see differences in bacterial beta-diversity with STH infection. This too is consistent with a previous study of STH infections in Sri Lanka [46], but has not been observed in other field studies [22,60], and may depend on the infective STH species [57]. We did not see major changes in the relative abundance of bacterial phyla, but we did see specific changes in OTU composition. In our study, STH infection was associated with a statistically significant increase in abundance of Akkermansia muciniphila. A recent meta-analysis of published studies on the impact of helminths on gut microbiomes identified a 3-fold increase in the relative abundance of Akkermansia in rodent models upon helminth infection [56]. This increase has also been described in primates infected with T. trichiura [61]. The authors of the meta-analysis [56] noted that Akkermansia species are associated with increased mucin degradation; it is worth noting that Trichuris species are known for their capacity to secrete a rich cocktail of mucin-degrading proteases [62,63]. To our knowledge, ours is the first study to note this change in humans. We also found a statistically significant decrease in Bifidobacterium, which is consistent with prior observations in rodent models [56]. Interestingly, we saw a statistically significant increase in the relative abundance of Bacteroides coprophilus with STH infection. Although a previous study noted an increase in pro-inflammatory Bacteroides vulgatus following deworming treatment in endemic individuals [64], recent studies of gut microbiome changes associated with inflammatory conditions, including multiple sclerosis [65], and during hepatitis C infection [66], identify decreases in both B. coprophilus and A. muciniphila in the diseased state. Overall, the reduction in OTUs correlating with A. lumbricoides and T. trichiura infections is notable. It is possible, although highly speculative, that helminths manipulate both pro- and anti-inflammatory microbes during infection as part of their well-documented efforts to manipulate the vertebrate immune response, particularly through repression of Th1/Th17 and promotion of Th2 type responses [67,68]. Clearly, this requires further study.
Much of the uncertainty associated with studies to date on the impacts of STH infection on the gut microbiome likely relates to the complexity of the interaction and technical aspects of the study design or analytical approach. As noted here, study design must consider the impact of sample collection and preservation, and our hope is that our evaluation of DESS as a low-cost preservative for STH and microbiome studies will assist in overcoming this challenge. In addition, sample size, particularly in relation to studies in remote and resource-poor populations, will affect resolution and limit study design. We have been deliberate in distinguishing our capacity to assess acute impacts of current STH infections on the gut microbiome, relative to endemic controls, from efforts to assess longer-term or more universal impacts of STH infections overall. The latter requires either a complex longitudinal cohort study, or, at least, community-matched controls from non-endemic populations or with a known infection history. Appropriate matched controls with no prior history of STH infection are difficult to identify, especially in endemic regions where STH prevalence typically exceeds 50% of the population. This issue likely contributes to the lack of congruence between field-based microbiome studies of STH infections (all of which are realistically limited to observations of the acute effects of STH infections) and observations in mouse models or volunteer patients in non-endemic populations. Studies of the impact of STHs on the gut microbial community must, in our view, be equally aware of the complexity that infection intensity may play on the study outcome. Our study considers primarily low-intensity infections, and we cannot infer the potential acute or chronic consequences of higher-intensity infections; although it appears reasonable to speculate that both infection intensity and frequency will influence the extent to which STH infections might cause reprofiling of the gut microfauna, as has been shown for other gastrointestinal pathogens [69].
Table. WHO-recommended height-for-weight, height-for-age and weight-for-age z-score analysis. Patients older than 5 years of age were not included due to no available reference standard. MT-PCR: multiplexed-tandem qPCR; Ct: cycle threshold value; BMI: body mass index. (XLSX)
S3 Table. Relative abundance data of bacterial phyla and genera including statistical analysis. | 2021-07-28T06:17:55.258Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "b8fb1b040b4e750ec2713d41e9a4ae8d99ce1b9b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0009597&type=printable",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1ef1f35bdb75666655ee986a6d5b4a3744e36583",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233481201 | pes2o/s2orc | v3-fos-license | DRIVE: Machine Learning to Identify Drivers of Cancer with High-Dimensional Genomic Data & Imputed Labels
Identifying the mutations that drive cancer growth is key in clinical decision making and precision oncology. As driver mutations confer selective advantage and thus have an increased likelihood of occurrence, frequency-based statistical models are currently favoured. These methods are not suited to rare, low frequency, driver mutations. The alternative approach to address this is through functional-impact scores, however methods using this approach are highly prone to false positives. In this paper, we propose a novel combination method for driver mutation identification, which uses the power of both statistical modelling and functional-impact based methods. Initial results show this approach outperforms the state-of-the-art methods in terms of precision, and provides comparable performance in terms of area under receiver operating characteristic curves (AU-ROC). We believe that data-driven systems based on machine learning, such as these, will become an integral part of precision oncology in the near future.
Introduction
Cancer is a disease of the genome and is inherently heterogeneous in nature, meaning the distinctive hallmarks of a cancer can develop through a variety of mutations and biological pathways [1]. This heterogeneity poses a unique problem when making treatment decisions compared to most other diseases. Currently, cancers are commonly treated using a generic approach based on primary tumor location rather than the underlying genomic profile. This leads to treatment failure and gives rise to resistance. This problem is compounded by the underlying genomic architecture of cancer, which is dynamic and evolves both over time and in response to therapy. A more personalized approach is required, targeting the specific genomic aberrations that confer selective advantage in an individual tumor. Identifying these "driver" mutations is key for successful personalized treatment selection.
In machine learning terms, differentiating driver and passenger mutations (those that are largely benign in nature) poses a standard classification problem. However, difficulty arises due to imperfect labelling and lack of clinical evidence. The idea of a "driver mutation" is itself an imprecise concept and with no ground-truth label. Even given an arbitrary threshold for driver status, experimental validation is difficult due to the many mutations possible within a gene. In statistical approaches, frequency-based models have been the most instrumental. These are used to identify mutation rates significantly higher than observed background mutation rate [2]. However, driver mutation frequencies follow a power-law distribution -few commonly occurring mutations and a long tail of rare mutations [3]. Detection of rare drivers is therefore limited by the amount of data available and high dimensionality of such data. Additionally, background mutation frequencies are also difficult to estimate due to variability across different samples [4]. Failing to account for this leads to the increase in false positive results observed in these models. Recent studies have also explored the possibility of predicting driver mutations using functional impact-based scores [5] [6], however these also suffer from high false positive rates [7].
To address these issues, we provide a novel tool for driver mutation identification: DRIVE (DRiver Identification, Validation and Evaluation). DRIVE combines a statistical modelling approach with a features-based method. We computed features based on the occurrence of mutations, functional impact scores, structural properties of the genes and proteins of the associated mutations. In addition, we used ratio-metric features in our model to further improve performance. Our proposed model is able to detect established driver mutations with higher accuracy and fewer false positives. Moreover, it provides an advantage on current models to predict rare driver mutations with higher precision.
Methods
An overview of the different components involved in our system with data flow is shown in Figure 1.
A brief description of each component is given below.
Data acquisition and pre-processing
Mutational data was obtained from the AACR Project GENIE, consisting of sequencing data of 64,217 unique tumors [8]. Genomic coordinates of variants from alignments to Genome Reference Consortium Human Build 37 (GRCh37) were converted to GRCh38 using PyLiftover 1 . Synonymous mutations were filtered from the data, leaving 371,564 missense mutations for further analysis.
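The liftover step above was done with PyLiftover; a minimal sketch of how such a conversion is typically invoked is shown below. The genomic position used is an arbitrary example, and the hg19-to-hg38 chain file is fetched by the package rather than supplied here.

```python
from pyliftover import LiftOver

# Chain file for hg19 (GRCh37) -> hg38 (GRCh38) is downloaded on first use
lo = LiftOver('hg19', 'hg38')

# Convert a single 0-based position on chromosome 7 (arbitrary example)
result = lo.convert_coordinate('chr7', 140453136)
if result:  # an empty list means the position could not be mapped
    chrom, pos, strand, _ = result[0]
    print(chrom, pos, strand)
```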
Features extraction
DRIVE uses a combination of features at three different levels, as shown in Figure 2. Features at the mutation level are computed using the Variant Effect Predictor (VEP) [9], which is an open-source command line tool for analysis of genomic variants based on human genome annotations. VEP also includes features from the dbNSFP database [10], such as pathogenicity scores of Single Nucleotide Variants (SNVs) from predictors such as SIFT [11], PolyPhen [12], VEST4 [13], and MutationAssessor [14].
At the codon level, mutational hotspots were identified using the statistical modelling approach proposed in [15]. Hotspots indicate selective pressure across a population of tumor samples and have long been regarded as the main method to identify regions with selective advantage [16]. To counter the effect of the background mutation rate, the contextual expected mutability was calculated across codons. It is used to compute the significance score of mutations, known as the beta score, using a binomial model:
β = P(X ≥ k) = Σ_{i=k}^{N} (N choose i) p^i (1 − p)^(N−i), with X ~ Binomial(N, p),
where N is the total number of samples sequenced, k is the number of occurrences of mutations at the specific codon and p is the codon mutability, indicating the likelihood of mutation at that locus with no selective advantage.
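A small sketch of how such a binomial tail probability can be computed with SciPy is given below; the sample count, codon occurrence and mutability values are invented for illustration and are not taken from the study.

```python
from scipy.stats import binom

def beta_score(k, N, p):
    """P(X >= k) for X ~ Binomial(N, p): the chance of seeing at least k
    mutations at a codon in N samples given per-sample mutability p."""
    return binom.sf(k - 1, N, p)   # sf(k - 1) = P(X >= k)

# Hypothetical codon: 12 mutated samples out of 64,217, mutability 1e-5
print(beta_score(k=12, N=64217, p=1e-5))
```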
Gene-level features were computed using 20/20+ [17], a machine learning-based approach to predict driver mutations. It calculates different structural features, including protein-protein interaction (PPI) measures such as gene degree and gene betweenness centrality. Furthermore, it calculates different ratio-metric features, including the non-silent to silent mutation ratio and the missense to silent mutation ratio.
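The ratio-metric gene-level features reduce to simple counts per gene; the sketch below is a hedged illustration using hypothetical mutation counts rather than the output of 20/20+, and the pseudocount added to silent mutations is an assumption made only to avoid division by zero.

```python
def ratio_metric_features(missense, nonsense, splice_site, silent):
    """Toy per-gene ratio-metric features from mutation-type counts."""
    silent_adj = silent + 1          # assumed pseudocount to avoid dividing by zero
    non_silent = missense + nonsense + splice_site
    return {
        "missense_to_silent": missense / silent_adj,
        "non_silent_to_silent": non_silent / silent_adj,
    }

# Hypothetical counts for one gene
print(ratio_metric_features(missense=42, nonsense=3, splice_site=1, silent=9))
```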
Data labels
The benchmark dataset, including labels, was obtained from a recent study, ChasmPlus, where the authors used a semi-supervised approach to label the mutations as likely driver or passenger [18]. In order for a mutation to be labelled as a driver, it should fulfil the following three conditions; first, missense mutations must occur in the 125 clinically established pan-cancer driver gene panel [19]. Second, missense mutations of a given cancer indication must occur in a significantly mutated gene for that cancer indication [20]. Third, missense mutations must be within samples with a relatively low mutation rate (less than 500 mutations) to limit the number of passenger mutations.
Model training and evaluation
Multiple supervised classification algorithms were implemented. We handled data imbalance with two mechanisms: first, we downsampled the majority class to equal numbers of driver and passenger mutations; second, we used stratified k-fold cross-validation for model evaluation. We observed that the model outputs were highly sensitive to the threshold of the classifier. For clearer insight into performance, we evaluated the models using receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC-ROC). Figure 3(a) displays the mean ROC curves from k-fold stratified cross-validation of the different classification algorithms. In general, ensemble tree methods such as random forest and gradient boosting classifiers perform slightly better than the other models (Table 1). The values of precision and recall were computed at a threshold of 0.5, which can be adjusted to user preference. The most significant features in our system were identified using the mean decrease in Gini index (MDGI) score for the random forest. Beta scores computed using the statistical model are the second most important feature in identifying driver mutations, which reiterates the advantage of our approach. Results are shown in Figure 3.
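A condensed sketch of the evaluation strategy described above (downsampling the majority class, a random-forest classifier, stratified cross-validated ROC-AUC) is shown below using scikit-learn; a synthetic dataset stands in for the GENIE-derived feature matrix, and the hyperparameters are illustrative rather than those used in DRIVE.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic, imbalanced stand-in for the driver/passenger feature matrix
X, y = make_classification(n_samples=5000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)

# Downsample the majority class to the size of the minority class
rng = np.random.default_rng(0)
minority_idx = np.flatnonzero(y == 1)
majority_idx = rng.choice(np.flatnonzero(y == 0), size=minority_idx.size,
                          replace=False)
idx = np.concatenate([minority_idx, majority_idx])

# Stratified 5-fold cross-validated ROC-AUC for a random forest
clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X[idx], y[idx], cv=cv, scoring="roc_auc")
print(scores.mean())
```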
Results
We compared the performance of our highest performing classifier (random forest) with ChasmPlus [18] (Figure 4), run in default settings 2 . ChasmPlus scores were calculated for a pan-cancer model using OpenCRAVAT 3 . Initial results show DRIVE performs similarly to ChasmPlus for ROC-AUC scores, and better than many currently used models (reported to have below 0.8 in [18]). DRIVE also has better precision than ChasmPlus at a standard threshold of 0.5.
Conclusion
Our proposed approach combines the power of statistical modelling with feature-driven machine learning methods. This provides the advantage of identifying both known cancer driver mutations and rare driver mutations with reduced false positives. We believe that these methods can be improved even further by incorporating more domain knowledge, to bring truly data-driven systems based on machine learning into precision oncology.
3. Note: ChasmPlus has been trained on these mutations previously, which may confer an advantage.
Missense to silent: Ratio of missense to silent mutations in a given gene
3) Non-silent to silent: Ratio of non-silent to silent mutations in a given gene
4) HiC_compartment: HiC measure of open vs condensed chromatin. Score acts as a proxy for expression level
5) Gene-betweenness: Betweenness centrality indicates a ratio of unique paths that include a given node to all unique paths in the graph.
6) Gene-degree: Number of interaction partners on PPI network for a given gene
17) svm_class: Classification of normal genes and cancer-related genes. Model is based on semantic similarity between gene ontology annotations | 2021-05-04T01:16:06.540Z | 2021-05-02T00:00:00.000 | {
"year": 2021,
"sha1": "885bc2f9e0c30ed62b2b912fd85c8bf91209ba3c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f12259e8680de7d8226baafe54d2867a52ac1a29",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Biology"
]
} |
91621705 | pes2o/s2orc | v3-fos-license | Essential Oil Composition and Antioxidant Activity of Calamintha officinalis Moench
The essential oils are aromatic compounds usually used in medical, food and cosmetic industries. They contain more than 200 diverse compounds, mostly made of monoterpene and Sesquiterpenes hydrocarbons and some branched chemicals such as esters, alcohols, aliphatic aldehydes and ketones. Essential oils are normally produced in fragrant herbs.1 Eos have antioxidant and microbicide properties.2,3 The herbs from Calamintha sp. are used to add pleasant flavor to the food, due to their characteristic tastes. They are also used in a number of therapeutic applications having anticough and antipyretic properties.4 Calamintha officinalis Moench (COM) from Lamiaceae family is very similar to the common mints not only in its appearance but also in terms of its aroma. It is, therefore, commonly used as an alternative to mint for usage in different drinks.5 The antioxidant activity of plant extracts and essential oils is of special importance owing to their positive physiological activities on human cells and their potential ability to substitute synthetic antioxidants.6 Its essential oil is used in baking as flavoring agent and to add a desirable taste to some pharmacological. It is also consumed for its spicing, sudorific and microbicide effects.7 In recent years, numerous researchers have investigated the chemical components in the essential oil of Calamintha species from diverse sources. The purpose of the present study was to obtain the main components of COM plant collected from north area of Iran (Gilan, Lahijan). This was then compared to essential oil composition of COM from other origins. On the other hand, we also aimed to assess the enzymatic antioxidant activity in extracts obtained from the plant’s leaves.
Introduction
The essential oils are aromatic compounds usually used in the medical, food and cosmetic industries. They contain more than 200 diverse compounds, mostly made of monoterpene and sesquiterpene hydrocarbons and some branched chemicals such as esters, alcohols, aliphatic aldehydes and ketones. Essential oils are normally produced in fragrant herbs. 1 EOs have antioxidant and microbicide properties. 2,3 The herbs from Calamintha sp. are used to add pleasant flavor to food, due to their characteristic tastes. They are also used in a number of therapeutic applications, having anticough and antipyretic properties. 4 Calamintha officinalis Moench (COM) from the Lamiaceae family is very similar to the common mints not only in its appearance but also in terms of its aroma. It is, therefore, commonly used as an alternative to mint in different drinks. 5 The antioxidant activity of plant extracts and essential oils is of special importance owing to their positive physiological activities on human cells and their potential ability to substitute synthetic antioxidants. 6 Its essential oil is used in baking as a flavoring agent and to add a desirable taste to some pharmacological products. It is also consumed for its spicing, sudorific and microbicide effects. 7 In recent years, numerous researchers have investigated the chemical components in the essential oil of Calamintha species from diverse sources. The purpose of the present study was to obtain the main components of the COM plant collected from the north of Iran (Gilan, Lahijan). This was then compared to the essential oil composition of COM from other origins. On the other hand, we also aimed to assess the enzymatic antioxidant activity in extracts obtained from the plant's leaves.
Plant Samples
The plant, COM was collected from northern Iran (Guilan, Lahijan).The fresh leaves were washed with water thoroughly and dried at 40°C.The leaves were then crushed into small pieces and kept frozen for later experiments.
Preparation of Plant Extract
Frozen leaves (1 g of fresh mass) were ground in liquid nitrogen and extracted with 3 mL of a cool extraction buffer (50 mM potassium phosphate, pH 7.5). The extract was centrifuged for 30 min at 12 000 rpm at 4°C and the resulting supernatant was used as the crude extract. 8
DPPH Free Radical-Scavenging Activity
DPPH free radical scavenging activity was estimated by determining the scavenging activity of the essential oil according to Burcul with some modification. 9
Superoxide Radical Scavenging Activity
One hundred microliters of the plant extract was added to 3 mL of a reaction mixture and mixed thoroughly. The reaction mixture contained 50 mM potassium phosphate buffer (pH 7.8), 13 mM methionine, 2 µM riboflavin, 0.1 mM EDTA and 75 µM NBT. A blank was made of the reaction mixture without enzyme and NBT, and the control contained the reaction mixture without enzyme. The tubes containing the solutions were subjected to 400 W of illumination (4 × 100 W bulbs) for 15 minutes and the absorbance was read instantly at 560 nm. 10 The percent scavenging of the superoxide radical was calculated using the following equation: % scavenging = (1 − Ae/A0) × 100, where A0 is the absorbance without sample and Ae is the absorbance with sample. 11
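A small worked example of the scavenging calculation defined above, where A0 is the absorbance of the control without extract and Ae the absorbance with extract; the absorbance values are invented and merely chosen to land near the scavenging level reported later in the paper.

```python
def percent_scavenging(a_control, a_extract):
    """Radical scavenging (%) = (1 - Ae/A0) x 100."""
    return (1 - a_extract / a_control) * 100

# Hypothetical NBT assay absorbances at 560 nm
print(round(percent_scavenging(a_control=0.82, a_extract=0.084), 1))  # ~89.8 %
```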
Catalase Assay
The catalase (CAT) reaction mixture (3 mL) contained 50 mM phosphate buffer (pH 7.0), 15 mM H2O2 and 0.1 mL of the plant extract. The reaction started as soon as the extract was added to the reaction mixture. Alterations in the absorbance of the reaction mixture at 240 nm were then recorded every 20 seconds. One unit of CAT activity was defined as an absorbance change of 0.01 unit/min. 12
Separation of Essential Oil
Dried leaves of the plant (50 g) were hydrodistilled for 3 hours in a Clevenger-type apparatus. The yield of essential oil was 1% (w/w). The essential oil was poured out, dehydrated using anhydrous sodium sulfate and stored at low temperature. 13
Isolation and Analysis of Essential Oil by GC-MS
Gas chromatography-mass spectrometry (GC-MS) analysis was performed on a Hewlett-Packard (HP-6890) instrument with a crosslinked 5% phenyl dimethyl siloxane HP-5MS capillary column (dimensions, 30 m × 0.25 mm); the carrier gas was helium with a flow rate of 1 mL/min. The column temperature was programmed from 60°C to 250°C at a rate of 6°C/min, and the injector and detector (FID) temperatures were set at 250°C. The injected volume was 0.1 μL of the oil with a split ratio of 1/30 and an ionizing voltage of 70 eV.
The percentage composition of the essential oil was determined by retention indices. Retention indices were defined by the retention times of n-alkanes that were injected after the essential oil under the same chromatographic conditions. The components were identified by comparison with retention indices found in the literature and through comparison of their mass spectra with published mass spectral data. 14 Kovats retention indices (KI) were then calculated using the following formula:
KI = 100 × [n + (tʹx − tʹn) / (tʹn+1 − tʹn)]
where tʹn and tʹn+1 are the retention times of the reference n-alkane hydrocarbons (with n and n+1 carbon atoms) eluting immediately before and after compound X, and tʹx is the retention time of compound X. 15
Statistical Analysis
Each experiment was repeated at least three times and the statistical analyses were performed using SPSS version 22 statistical software.
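As a worked illustration of the retention-index calculation described in the GC-MS subsection above, the sketch below assumes the common linear (temperature-programmed) form of the index, interpolating between the n-alkanes eluting immediately before and after the compound; the retention times are invented.

```python
def kovats_index(n_carbons, rt_alkane_before, rt_alkane_after, rt_compound):
    """Retention index by linear interpolation between the n-alkane eluting
    before (n_carbons) and after (n_carbons + 1) the compound of interest."""
    fraction = (rt_compound - rt_alkane_before) / (rt_alkane_after - rt_alkane_before)
    return 100 * (n_carbons + fraction)

# Hypothetical retention times (min): C14 at 18.2, C15 at 20.6, analyte at 19.4
print(round(kovats_index(14, 18.2, 20.6, 19.4)))  # 1450
```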
Ethical Considerations
No human or animal samples were used in this study.
Superoxide Radical Scavenging Activity
Superoxide anions produce active free radicals that can react with biological macromolecules, causing tissue injury. Rapid detection of superoxide is highly important, as lipid peroxidation begins almost instantly. The superoxide anion has a key role in the creation of other free radicals, including hydrogen peroxide, hydroxyl radical, and singlet oxygen, which cause oxidative injury to lipids, proteins, and DNA. 16 Using the following relationship, the activity of the COM extract to scavenge the superoxide anion was found to be 89.8%. The results obtained from this part of the research indicated the high scavenging activity of the COM extract.
% scavenging = (1 − Ae/A0) × 100
Catalase Assay
CAT converts H2O2 to the nontoxic compounds water and oxygen and, therefore, a rise in CAT activity could have a role in the defense of plants against the damaging effect of H2O2. The specific catalase activity of the COM extract was found to be 97 U/min/mg protein. In this study, the activity of CAT showed a significant increase over time (Figure 1).
Identification of Essential Oil Components
The yield of the essential oil obtained from the leaves of COM extracted by hydro-distillation was 1.0% (w/w). Identification of the essential oil components was done by GC-MS. Figure 2 is the chromatogram of the essential oil from C. officinalis. Forty-one components were isolated, constituting 23.09% of the total oil. However, only 11 components of the oil were identified in our laboratory (Table 1). The major constituents of this oil were trans-caryophyllene (8.55%), isomenthol (2.98%), tetrahydrolinalyl acetate (2.96%), and pinene (2.24%).
Discussion
In this paper, the CAT and superoxide anion radical scavenging activity of the COM extract, along with GC-MS analysis of its essential oil, were evaluated. According to our literature survey, it has been reported that although C. officinalis yields a comparatively high volume of essential oil with a number of useful pharmacologically active compounds, some of them are still not explored pharmacologically. It has also been reported that, similar to other members of the mint family, the essential oil of C. officinalis is preferably used in the baking industry as a flavoring agent. The essential oil also improves the taste and aroma of some medicinal products. It is also usually used for its spicing, sudorific, and microbicide effects. 7 The ecological variants, such as temperature, relative moisture, and sunshine period, have an absolute effect on the leaves of COM. 7 According to the GC-MS results, the major component of the C. officinalis oil was trans-caryophyllene (8.55%), a bicyclic sesquiterpene known for its anti-inflammatory property. 17 It has been reported that trans-caryophyllene possesses many pharmacological effects, including strong antimicrobial property 18 and ataractic activity. 19 Bouchra et al reported 1,8-cineole (36.6%), pulegone (17.9%) and limonene (9.2%) as major constituents. 20 In agreement with our results, a number of compounds have been identified and reported in C. officinalis oil. 21 However, the major constituents of their oil were carvone (46.7%) and pulegone (22.1%). Burzo et al from Romania reported p-menthone (28.29) and pulegone (48.75) as the main components. 4 Researchers in Iran have identified thirty-four components in the oil of C. officinalis. The major constituents of their oil were β-bisabolene (9.9%), germacrene D (7.6%), β-bourbonene (7.4%) and piperitenone (5.3%). 22 A report from Egypt has shown a number of various compounds, including carvone (38.7%), neo-dihydrocarveol (9.9%), dihydrocarveol acetate (7.6%), dihydrocarveol (6.9%), 1,8-cineole (6.4%), cis-carvyl acetate (6.1%), and pulegone (4.1%), as the major components of their essential oil. 7 The result of our study on Calamintha revealed the existence of a specific component of the essential oil (trans-caryophyllene) that has not been reported by other researchers. It has been reported that trans-caryophyllene possesses strong antimicrobial property. 18 It is expected, therefore, that our essential oil should exhibit antibacterial activity. This is the remaining part of the research that should be explored and reported later. However, the antimicrobial property of C. officinalis essential oil has been reported by other researchers. 23 The free radical scavenging activity of the essential oil was determined by the DPPH method. The results indicated that this essential oil possesses a low inhibitory activity (21%) relative to the plant extract.
We also measured CAT and SOD activities in COM extract, a line of research that has not been reported for this plant in the literature.
CAT is a major antioxidant defense enzyme that primarily catalyses the decomposition of H2O2 to H2O and O2. The CAT activity of the COM extract increased significantly over time.
Oxygen-derived free radicals, such as the superoxide anion and hydroxyl radical, are cytotoxic and promote tissue injury. Antioxidants act as a major defense against radical-mediated toxicity by protecting against the damage caused by free radicals. Furthermore, although medicinal plants are used as 'antioxidants' in traditional medicine, their claimed therapeutic properties could be due, in part, to their capacity for scavenging oxygen free radicals. 17 We found that the free radical scavenging activity of the COM extract was quite considerable.
Results of our experiment confirmed that COM extract can protect human cells against oxidative damage due to the activity of these enzymes, and it is suggested that strong antioxidant properties of COM can be used for therapeutic or pharmaceutical applications in future.
In this study, the essential oil of C. officinalis was extracted by hydrodistillation. It was observed that the plant contains a large amount of essential oil, and only 11 components of the oil were identified in our research. Trans-caryophyllene (8.55%) was the main constituent of this oil. The compound is known for its anti-inflammatory and antimicrobial properties. In addition to the antioxidant property that was confirmed in this study, the major component of its essential oil was found to be an antimicrobial agent. It was, therefore, concluded that, based on the results of this study, the essential oil of COM from northern Iran could be considered a reasonable candidate for the food and pharmaceutical industries.
Conclusions
The result of our study on Calamintha revealed the existence of a specific component of the essential oil (trans-caryophyllene) that has not been reported by other researchers. The results of our experiments confirmed that the COM extract can protect human cells against oxidative damage due to the activity of these enzymes, and it is suggested that the strong antioxidant properties of COM can be used for therapeutic or pharmaceutical applications in the future.
Figure 1. Change in Absorbance of CAT During Time.
Table 1. Main Components Identified in the Essential Oil of COM. Abbreviations: KI: Kovats retention indices, RT: retention time. | 2019-04-03T13:08:22.997Z | 2018-02-10T00:00:00.000 | {
"year": 2018,
"sha1": "0cd682c74004bc98606880620dfe770292aef741",
"oa_license": "CCBY",
"oa_url": "http://www.biotechrep.ir/article_75328_f36e3a29d65ad9aa40483d884a9a1ed9.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0cd682c74004bc98606880620dfe770292aef741",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
233868873 | pes2o/s2orc | v3-fos-license | Novel inhibitors with sulfamethazine backbone: synthesis and biological study of multi-target cholinesterases and α-glucosidase inhibitors
Abstract The underlying cause of many metabolic diseases is abnormal changes in enzyme activity in metabolism. Inhibition of metabolic enzymes such as cholinesterases (ChEs; acetylcholinesterase, AChE and butyrylcholinesterase, BChE) and α-glucosidase (α-GLY) is one of the accepted approaches in the treatment of Alzheimer’s disease (AD) and diabetes mellitus (DM). Here we reported an investigation of a new series of novel ureido-substituted derivatives with sulfamethazine backbone (2a-f) for the inhibition of AChE, BChE, and α-GLY. All the derivatives demonstrated activity in nanomolar levels as AChE, BChE, and α-GLY inhibitors with K I values in the range of 56.07–204.95 nM, 38.05–147.04 nM, and 12.80–79.22 nM, respectively. Among the many strong N-(4,6-dimethylpyrimidin-2-yl)-4-(3-substitutedphenylureido) benzenesulfonamide derivatives (2a-f) detected against ChEs, compound 2c, the 4-fluorophenylureido derivative, demonstrated the most potent inhibition profile towards AChE and BChE. A comprehensive ligand/receptor interaction prediction was performed in silico for the three metabolic enzymes providing molecular docking investigation using Glide XP, MM-GBSA, and ADME-Tox modules. The present research reinforces the rationale behind utilizing inhibitors with sulfamethazine backbone as innovative anticholinergic and antidiabetic agents with a new mechanism of action, submitting propositions for the rational design and synthesis of novel strong inhibitors targeting ChEs and α-GLY. Communicated by Ramaswamy H. Sarma
Introduction
Neurons are unparalleled among cell types in mammalian organisms for their characteristic variety of secretory complexes and their heterogeneity in size and shape. Acetylcholine (ACh) was discovered following the determination that inhibitory nerves release a substance that inhibits the heart's chronotropic activity. Discovered by Loewi as a result of research done with isolated heart preparations in the early 1920s, ACh is the first known neurotransmitter (Pohanka, 2012). It has functions in the peripheral and central nervous system such as attention (Istrefi et al., 2020), motivation (Collins et al., 2016), arousal (Ruivo et al., 2017), memory (Işık et al., 2020c), learning (Provensi et al., 2020), and the activation of muscles (Benham et al., 1985). People with Alzheimer's disease (AD) have reduced levels of ACh in the brain (Francis, 2005). The only clear case of a transmitter known to be inactivated by extracellular enzymes is ACh. The enzymes responsible for the breakdown of ACh are cholinesterases (ChEs), i.e. acetylcholinesterase (EC 3.1.1.7; acetylcholine hydrolase, AChE) and butyrylcholinesterase (EC 3.1.1.8; acylcholine acylhydrolase, BChE) (Krieger, 2010). On the other hand, AChE is more specific for ACh than BChE (Reale et al., 2018). These enzymes, which are present in various tissues, neurons, and the surrounding extracellular space (Işık et al., 2017) in many species, including humans, hydrolyze the quaternary amine ACh to choline (Ch) and acetic acid within milliseconds (Kessler et al., 2017). Nevertheless, the capacity of BChE to hydrolyze higher Ch esters, such as butyrylcholine, remains a significant physiological function of this enzyme (Chuiko, 2000). ChE inhibitors (ChEIs) increase ACh levels by inhibiting the action of AChE and BChE, which are accountable for the breakdown of ACh. The main use of ChEIs is for the therapy of dementia in patients with AD (Işık, 2019). While numerous ChEIs, such as donepezil, galantamine, neostigmine, physostigmine, rivastigmine and tacrine, are available (Kilic et al., 2020; Mughal et al., 2018; Zaman et al., 2019), they tend to cause potentially serious side effects (Mughal et al., 2019), such as vasodilation, slow heart rate, weight loss, constriction of the respiratory tract, and constriction of the pupils of the eyes (Schneider, 2000).
It is known that there is a possible relationship between neurodegenerative AD and type 2 diabetes mellitus (DM). Studies in recent years have focused on systems that control synaptic and neuronal functions in the brain, and it has been determined that insulin controls these functions. It has also been determined that DM patients are more likely to develop AD and dementia. In light of this information, the reason why DM is associated with AD has been expressed as follows: a defect in insulin secretion may be responsible for the development of both diseases (Han & Li, 2010). In other words, insulin deficiency or elevated insulin levels can cause AD and DM. Furthermore, ChE enzymes are known to be associated with diabetes. In a study conducted for this purpose, AChE activity was higher in streptozotocin-induced diabetic rats than in control groups. Moreover, BChE activity was significantly increased in type 1 and type 2 DM compared to control groups (Abbott et al., 1993). These results imply that inhibition of cholinesterase enzymes could be important for treating DM, as in AD patients.
One of the important therapeutic approaches to be considered in DM is the inhibition of carbohydrate-hydrolyzing enzymes such as α-glucosidase (EC 3.2.1.20; α-GLY) (Iftikhar et al., 2019). α-GLY, which plays an important role in regulating blood glucose levels, is an important enzyme involved in the digestion of carbohydrates (Saleem et al., 2021). α-GLY inhibitors (α-GLYIs) delay the release of D-glucose from carbohydrate-containing diets and the absorption of glucose, thereby lowering plasma glucose levels and suppressing hyperglycemia that can occur at satiety (Ren et al., 2011). As a result, α-GLYIs are used in DM therapy (Taslimi et al., 2018). Although some α-GLYIs, such as acarbose and voglibose (Taha et al., 2021), which are widely used clinically to control blood glucose levels, are effective, they often have side effects such as meteorism, flatulence, abdominal distension, and possibly diarrhea (Hollander, 1992; Nakagawa, 2013). In this context, many researchers have turned to novel drug discovery and development technology and toxicology to discover alternative inhibitors of both ChEs and α-GLY (Chen et al., 2013).
Anion transport across cellular membranes is essential for maintaining the normal physiological functions of cells (Poulsen et al., 2010). To date, many small-molecule anion transporters based on ureas (Dias et al., 2018), thioureas (Akhtar et al., 2018), and sulfonamides (Saha et al., 2016) have been reported (Yu et al., 2019). Moreover, sulfonamide compounds have many biological activities, including anti-Alzheimer, anticancer, antimicrobial, antiviral, and antidiabetic activities (Figure 1). Thus, sulfonamide compounds derived from urea and its sulfur analogue thiourea have been used continuously to design new bioactive compounds due to their critical pharmacological properties (Akocak et al., 2021; Lolak et al., 2020). In our study, novel inhibitors with a sulfamethazine backbone, whose chemical structures confer important bioactive properties, were synthesized and characterized in order to discover new multi-target inhibitors effective against AChE, BChE, and α-GLY. The in silico predictions, namely the ADME and toxicity profiles and molecular docking, and the in vitro inhibition effects of the synthesized novel ureido-substituted sulfamethazine derivatives (2a-f) were investigated for these enzymes associated with AD and DM.
ChEs and α-GLY kinetic analysis
The inhibition effects of the novel ureido-substituted derivatives with sulfamethazine backbone (2a-f) on AChE, BChE, and α-GLY were determined with at least five different inhibitor concentrations. The IC50 values of the synthesized agents were calculated from Activity (%)-[Ligand] graphs for each derivative according to our previous studies (Akbaba et al., 2013; Türkeş, 2019b; Türkeş et al., 2015). The inhibition types and KI constants were found from Lineweaver-Burk curves as described in previous studies (Demir, 2020; Türkeş et al., 2014, 2019c). The results are expressed as mean ± standard error of the mean (95% confidence intervals). Differences between data sets were considered statistically significant when the p-value was less than 0.05.
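As a hedged illustration of this workflow (not the authors' actual scripts), the Python sketch below fits a logistic curve to hypothetical Activity (%)-[inhibitor] data to extract an IC50 and estimates a competitive-model KI from the slope ratio of two Lineweaver-Burk regressions. Every numerical value, array, and helper name is an invented placeholder, and scipy/numpy availability is assumed.

# Hypothetical sketch: IC50 from an Activity(%)-[I] curve and KI from a
# Lineweaver-Burk plot. All data values are illustrative, not from the study.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def logistic_activity(conc, ic50, slope):
    """Logistic dose-response with fixed 100%/0% asymptotes."""
    return 100.0 / (1.0 + (conc / ic50) ** slope)

# --- IC50: residual enzyme activity at increasing inhibitor concentration ---
conc_nM = np.array([10.0, 30.0, 60.0, 120.0, 240.0])   # [I], nM (assumed)
activity = np.array([92.0, 74.0, 55.0, 33.0, 16.0])    # % activity (assumed)
(ic50, hill_slope), _ = curve_fit(logistic_activity, conc_nM, activity,
                                  p0=[60.0, 1.0])
print(f"IC50 ~ {ic50:.1f} nM (Hill slope {hill_slope:.2f})")

# --- KI: Lineweaver-Burk (1/v vs 1/[S]) at a fixed inhibitor concentration ---
# For competitive inhibition the double-reciprocal slope increases by the
# factor (1 + [I]/KI) relative to the uninhibited line.
s_mM = np.array([0.05, 0.1, 0.2, 0.4, 0.8])            # substrate, mM (assumed)
v0 = np.array([0.21, 0.35, 0.52, 0.69, 0.82])          # rate, no inhibitor
vi = np.array([0.09, 0.16, 0.27, 0.42, 0.58])          # rate, with [I] = 60 nM
fit0 = linregress(1.0 / s_mM, 1.0 / v0)                # uninhibited line
fiti = linregress(1.0 / s_mM, 1.0 / vi)                # inhibited line
i_nM = 60.0
ki = i_nM / (fiti.slope / fit0.slope - 1.0)            # from the slope ratio
print(f"KI ~ {ki:.1f} nM (competitive inhibition model assumed)")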
ADME-Tox study
The ADME-Tox profile screening of the novel ureido-substituted inhibitors with sulfamethazine backbone (2a-f), pertaining to the pre-clinical stages of agent discovery, was performed using the QikProp module and the SwissADME platform (Sever et al., 2020). These ADME-Tox properties include: (i) molecular weight of the compound; (ii) computed dipole moment of the compound; (iii) total solvent-accessible volume in cubic angstroms using a probe with a 1.4 Å radius; (iv) octanol/gas partition coefficient; (v) water/gas partition coefficient; (vi) octanol/water partition coefficient; (vii) aqueous solubility; (viii) IC50 value for the blockage of hERG K+ channels; (ix) apparent Caco-2 cell permeability in nm/s; (x) brain/blood partition coefficient; (xi) apparent MDCK cell permeability in nm/s; (xii) skin permeability; (xiii) predicted binding to human serum albumin; (xiv) human oral absorption; (xv) van der Waals surface area of polar nitrogen and oxygen atoms; (xvi) number of violations of Lipinski's rule of five (Lipinski et al., 1997); (xvii) number of violations of Jorgensen's rule of three (Duffy & Jorgensen, 2000); and (xviii) pan-assay interference compounds (PAINS) alert.
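The QikProp descriptors listed above are proprietary; as a hedged, open-source stand-in, the sketch below screens a molecule against Lipinski's rule of five with RDKit. The SMILES string is a generic sulfamethazine core used as a placeholder, not one of the derivatives 2a-f, and the dictionary layout is ours.

# Hedged sketch: Lipinski rule-of-five screen with RDKit as an open-source
# stand-in for the proprietary QikProp descriptors used in the paper.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

# Sulfamethazine scaffold (placeholder SMILES; the derivatives 2a-f would
# carry additional ureido/aryl substituents on this core).
smiles = "Cc1cc(C)nc(NS(=O)(=O)c2ccc(N)cc2)n1"
mol = Chem.MolFromSmiles(smiles)

rules = {
    "MW <= 500": Descriptors.MolWt(mol) <= 500,
    "logP <= 5": Crippen.MolLogP(mol) <= 5,
    "H-bond donors <= 5": Lipinski.NumHDonors(mol) <= 5,
    "H-bond acceptors <= 10": Lipinski.NumHAcceptors(mol) <= 10,
}
violations = sum(not ok for ok in rules.values())
for rule, ok in rules.items():
    print(f"{rule}: {'pass' if ok else 'VIOLATION'}")
print(f"Lipinski violations: {violations}")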
ChEs and α-GLY inhibition assay
Sulfonamides, also called sulfa/sulpha drugs, are synthetic agents containing the sulfonamide chemical group. Sulfa medicines exhibit several activities, such as anticonvulsant, antibacterial, anti-inflammatory, immunomodulatory, and diuretic effects, across many groups of agents, by interfering with cell metabolism. New sulfonamides are synthesized to give the compound multi-faceted bioactive properties by adding different groups to known scaffolds. For example, some potent AChE and BChE inhibitors associated with AD that show activity in vivo at low concentrations have been developed by synthesizing several sulfonamide analogs (Kosak et al., 2018). In light of these developments, researchers have focused in recent years on sulfa drug derivatives synthesized as versatile agents against metabolic diseases, and many sulfonamide derivatives have been developed for the treatment of AD and other central nervous system disorders, various cancer types, psychosis, diabetes, and many more complex diseases (Apaydın & Török, 2019; Khanfar et al., 2013). The increase in the activities of AChE and BChE, which hydrolyze the neurotransmitter ACh, promotes neurodegenerative diseases such as AD by increasing amyloid protein formation. Strong inhibition of the ChEs, which reduces the hydrolysis of this neurotransmitter, is therefore an important option and an accepted approach in the treatment of AD (Rao et al., 2007). It may also be possible to restore the blood glucose level to a normal level by inhibiting α-GLY in DM, which is known to be associated with AD. It has also been reported that α-GLYIs, besides their antidiabetic properties, have the potential to treat a variety of diseases, including hepatitis, cancer, and heart conditions (Fischer et al., 1996; McCulloch et al., 1983). In this study, the influence of the new inhibitors with sulfamethazine backbone (2a-f) as multi-target cholinesterase and α-glucosidase inhibitors on AChE, BChE, and α-GLY was investigated. All the analogues were analyzed for their inhibitory activities against the metabolic enzymes AChE, BChE, and α-GLY, in comparison with the clinically used medicines tacrine (THA) and acarbose (ACR). The inhibition data (IC50 and KI values) for all novel ureido-substituted inhibitors with sulfamethazine backbone (2a-f) are summarized in Table 1. AChE was inhibited by all derivatives (2a-f) with a variety of potencies. All derivatives were potent AChE inhibitors, with IC50 values in the range of 72.85-244.38 nM and KI values ranging between 56.07 ± 9.53 nM and 204.95 ± 11.47 nM. The most active analogues in this series (2a-f) were the 4-fluoro substituted derivative 2c and the 3,4-dichloro substituted derivative 2f, with KI values of 56.07 ± 9.53 nM and 64.68 ± 7.08 nM, respectively, compared to the standard inhibitor THA (KI of 112.05 ± 24.05 nM). The inhibitory activities of the novel ureido-substituted derivatives with sulfamethazine backbone (2a-f) against AChE decreased in the order 2c (4-fluoro substituted) > 2f (3,4-dichloro substituted) > 2d (4-chloro substituted) > 2e (4-methyl substituted) > 2a (3-chloro substituted) > 2b (3-methyl substituted).
BChE, the other cholinesterase, was inhibited by the novel ureido-substituted inhibitors with sulfamethazine backbone (2a-f) at nanomolar levels, with IC50 values ranging from 49.05 to 203.05 nM and KI values in the range of 38.05 ± 7.04-147.04 ± 27.06 nM. Moreover, all derivatives (2a-f) showed more potent inhibitory effects on BChE compared to THA (KI of 177.15 ± 39.05 nM). Analogue 2c, bearing the fluoro moiety at the 4-position, was the most potent inhibitor of BChE with a KI of 38.05 ± 7.04 nM, and the second most potent inhibitor was the 3,4-dichloro substituted sulfamethazine derivative 2f with a KI of 54.07 ± 9.04 nM. The BChE inhibitory activities of the novel ureido-substituted compounds with sulfamethazine backbone (2a-f) decreased in the order 2c (4-fluoro substituted) > 2f (3,4-dichloro substituted) > 2a (3-chloro substituted) > 2e (4-methyl substituted) > 2d (4-chloro substituted) > 2b (3-methyl substituted). On the other hand, fluoro substitution at the 4-position and 3,4-dichloro substitution produced better ChE inhibitors than the other analogues studied in the current work. Moreover, the 4-fluoro substituted derivative 2c (KI values for AChE and BChE of 56.07 ± 9.53 nM and 38.05 ± 7.04 nM, respectively) was identified as the most potent ChE inhibitor in this series (2a-f).
All the synthesized novel ureido-substituted inhibitors with sulfamethazine backbone (2a-f) exhibited activity at nanomolar levels as α-GLY inhibitors, with IC50 and KI values in the ranges of 9.05-68.76 nM and 12.80 ± 3.05-79.22 ± 14.66 nM, respectively. Accordingly, these analogues (except compounds 2b and 2e) were determined to be more effective inhibitors than ACR, the standard agent, which has a KI value of 40.37 ± 3.16 nM. The results revealed that a chloro moiety at the 3- and 4-positions played a crucial role in the α-GLY inhibitory activity. In particular, the most active compounds in this series were the 3-chloro substituted compound 2a, the 4-chloro substituted analogue 2d, and the 3,4-dichloro substituted derivative 2f, with KI values of 12.80 ± 3.05 nM, 27.65 ± 5.98 nM, and 29.92 ± 3.67 nM, respectively. In this respect, the inhibitory potency order of the novel ureido-substituted analogues with sulfamethazine backbone (2a-f) was 2a (3-chloro substituted) > 2d (4-chloro substituted) > 2f (3,4-dichloro substituted) > 2c (4-fluoro substituted) > 2e (4-methyl substituted) > 2b (3-methyl substituted).
On the other hand, methyl substitution at the para and meta positions dramatically lowered the AChE, BChE, and α-GLY inhibitory activity, as can be seen for compounds 2b and 2e. These observations may offer useful strategies for developing ChE and α-GLY inhibitors, and the in silico studies likewise indicated that the presence of the active fragments increases the activity. In this direction, many studies have indicated that ureido-substituted sulfonamide derivatives exhibit significant metabolic enzyme inhibition. In this context, Akocak et al. (2021) reported the synthesis of a series of six N-carbamimidoyl-4-(3-substitutedphenylureido)benzenesulfonamide derivatives (2a-f) and investigated their inhibition activities against α-GLY, AChE, and BChE by spectrophotometric methods. They found that the novel derivatives with sulfaguanidine backbone (2a-f) exhibited effective inhibitory profiles, with IC50 values in the range of 94.38-409.13 nM and KI values ranging between 103.94 and 491.55 nM against α-GLY; IC50 values ranging between 523.05 and 1094.23 nM and KI values in the range of 481.04-913.43 nM against AChE; and IC50 values in the range of 660.40-1058.03 nM and KI values ranging from 598.47 to 904.73 nM against BChE. Işık et al. (2020c) investigated the effects of some sulfonamide derivatives (S1-S4 and S1i-S4i) on AChE and found that the synthesized 4-aminobenzenesulfonamides had potential inhibitory properties, with different inhibition types for AChE and KI constants in the range of 2.54 ± 0.22-299.60 ± 8.73 μM. Taslimi et al. (2020a) synthesized derivatives of amine (1i-11i) and imine sulfonamides (1-11) and investigated their effects on AChE, α-GLY, and glutathione S-transferase (GST). The KI values of the series for AChE, α-GLY, and GST were in the ranges of 2.26 ± 0.45-82.46 ± 14.74 μM, 95.73 ± 13.67-1154.65 ± 243.66 μM, and 22.76 ± 1.23-49.29 ± 4.49 μM, respectively. In another study, Gokcen et al. (2016) reported the synthesis of four groups of sulfonamide derivatives bearing isoxazole, pyridine, pyrimidine, thiazole, and thiadiazole groups and evaluated their inhibition activities against the hCA I and II isoenzymes. They determined that the novel derivatives with sulfamethazine backbone (5c-8c) demonstrated activity at nanomolar levels as hCA I and hCA II inhibitors, with IC50 values in the ranges of 9.07-27.72 nM and 8.77-49.51 nM and KI constants in the ranges of 7.01-27.18 nM and 5.74-137.96 nM, respectively. In another study, Hamad et al. (2020) reported a new series of Schiff base derivatives of (E)-4-(benzylideneamino)-N-(4,6-dimethylpyrimidin-2-yl)benzenesulfonamide (3a-3f), which were evaluated for their Jack bean urease inhibitory activity. All derivatives (3a-3f) showed potent inhibitory activity, with IC50 values ranging between 3.78 ± 3.23 μM and 12.90 ± 2.84 μM, compared to the standard thiourea (IC50 of 20.03 ± 2.03 μM). Furthermore, ADME prediction was performed to evaluate the drug-likeness of the derivatives, and all analogues (3a-3f) were determined to be non-toxic and to have passive gastrointestinal absorption. In another study, Tugrak et al. (2020) synthesized novel compounds with the chemical structures N-({4-[N'-(substituted)sulfamoyl]phenyl}carbamothioyl)benzamide (1a-g) and 4-fluoro-N-({4-[N'-(substituted)sulfamoyl]phenyl}carbamothioyl)benzamide (2a-g) as potent and selective inhibitors of the hCA I and II isoenzymes.
The aryl part of compounds 1g and 2g in that series was sulfamethazine. The KI constants of derivative 1g were 59.55 ± 13.07 nM (hCA I) and 12.19 ± 2.24 nM (hCA II), whereas those of analogue 2g were 55.95 ± 10.72 nM (hCA I) and 47.96 ± 7.91 nM (hCA II). Compared with the KI values of acetazolamide (82.13 ± 4.56 nM for hCA I and 50.27 ± 3.75 nM for hCA II), compounds 1g and 2g demonstrated promising and selective inhibitory effects against the hCA I and II isoenzymes, the main target proteins.
In silico studies
3.3.1. ADME-Tox study
ADME-Tox-related parameters were determined for the novel ureido-substituted inhibitors with sulfamethazine backbone (2a-f), and the results are summarized in Table 2. Diagrams showing the 'drug-likeness' descriptors for 2a and 2c, the most active derivatives in this series, are also given in Figure 3. None of the six ureido-substituted sulfamethazine derivatives (2a-f) showed any violation of Lipinski's rule, and only two derivatives showed one violation of Jorgensen's rule. According to Lipinski's rule, the octanol/water partition coefficient (QPlogPo/w) should be ≤ 5. For these analogues (2a-f), bearing 3-chlorophenyl, m-tolyl, 4-fluorophenyl, 4-chlorophenyl, p-tolyl, and 3,4-dichlorophenyl moieties, the QPlogPo/w values ranged from 2.10 to 2.72. The predicted numbers of hydrogen bonds donated (donorHB ≤ 5) and accepted (accptHB ≤ 10) for all derivatives (2a-f) were in agreement with the drug-likeness requirements of Lipinski's rule of five. Molecular weight (MW) is a crucial factor for binding at the active site; all compounds (2a-f) have MW between 411.48 and 466.34 (the Lipinski reference value is MW < 500). The aqueous solubility (QPlogS) of a compound importantly affects its distribution and absorption characteristics; usually, high solubility goes along with good absorption. Considering Jorgensen's rule, the QPlogS value should be ≥ −5.7. Only compound 2d (QPlogS: −5.79) and compound 2f (QPlogS: −6.07) presented solubility values outside the limits, which is why these two compounds displayed one violation of Jorgensen's rule. The predicted apparent Caco-2 cell permeability of the analyzed derivatives (2a-f) indicated excellent results (QPPCaco values in the range of 156.85-214.39, where the QPPCaco value should be > 22 nm/s) and agreed with Jorgensen's rule of three. To sum up, the computed ADME-Tox properties confirmed the novel ureido-substituted inhibitors with sulfamethazine backbone (2a-f) as hit agents displaying suitable drug-like properties.
Molecular docking study
To better understand the interaction of the novel ureido-substituted inhibitors with sulfamethazine backbone (2a-f) with 4BDT, 4DBS, and 5NN6, the most potent AChE, BChE, and α-GLY inhibitors, 2c (for the ChEs) and 2a (for α-GLY), were docked into the binding sites of these enzymes. The docking poses of HUW, THA, and MIG were then compared with those of compound 2c for AChE and BChE (KI values: 56.07 ± 9.53 nM and 38.05 ± 7.04 nM, respectively) and compound 2a for α-GLY (KI: 12.80 ± 3.05 nM), the most potent compounds in this series (2a-f). The binding interactions of the inhibitors with AChE, BChE, and α-GLY are displayed in Figure 5.
Conclusion
In conclusion, a series of novel ureido-substituted derivatives with sulfamethazine backbone (2a-f) was synthesized and characterized in detail by spectroscopic and analytical methods. The N-(4,6-dimethylpyrimidin-2-yl)-4-(3-substitutedphenylureido)benzenesulfonamide derivatives (2a-f) were assayed for the first time as inhibitors of AChE, BChE, and α-GLY. The derivatives generally displayed more intense inhibitory activity against the ChEs than the standard inhibitor THA and, in turn, most displayed stronger inhibition of α-GLY than the standard ACR. The effect of the derivatives on the enzymes varied according to their molecular structures and substitution positions. Potent inhibition by the novel ureido-substituted derivatives with sulfamethazine backbone (2a-f) was detected towards the ChEs, with particular potency towards AChE. One derivative, the 4-fluorophenylureido analogue 2c, displayed strong inhibitory action against both AChE and BChE. Additionally, an exhaustive in silico ligand/enzyme interaction study was performed for the three metabolic enzymes using Glide XP, MM-GBSA, and ADME-Tox predictions. | 2021-05-07T06:22:52.799Z | 2021-05-05T00:00:00.000 | {
"year": 2021,
"sha1": "5d764dd141e681dfbdd90be84222cc101801a3ff",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Novel_inhibitors_with_sulfamethazine_backbone_synthesis_and_biological_study_of_multi-target_cholinesterases_and_-glucosidase_inhibitors/14544107/1/files/27905363.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "e1d3fad3a88304b1482d69af8ff4fda9a15c38f3",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244834271 | pes2o/s2orc | v3-fos-license | Review on Local Buckling of Hollow Box FRP Profiles in Civil Structural Applications
Hollow box pultruded fibre-reinforced polymer (PFRP) profiles are increasingly used as structural elements in many structural applications due to their cost-effective manufacturing process, excellent mechanical-properties-to-weight ratios, and superior corrosion resistance. Despite the extensive usage of PFRP profiles, there is still a lack of knowledge in designing for manufacturing against local buckling at the structural level. In this review, the local buckling of open-section (I, C, Z, L, T shapes) and closed-section (box) FRP structural shapes was systematically compared. Local buckling is influenced by the unique stress distribution in each profile shape. This article reviews the related design parameters to identify the research gaps, in order to expand the current design standards and manuals for hollow box PFRP profiles and to broaden their applications in civil structures. Unlike open-section profiles, it was found that local buckling can be avoided for box profiles if the geometric parameters are optimised. The identified research gaps include the effect of the corner (flange-web junction) radius on the local buckling of hollow box PFRP profiles and the interactions between the layup properties, the flange-web slenderness, and the corner geometry (inner and outer corner radii). More research is still needed to address the critical layup and geometry design parameters controlling the local buckling of pulwound box FRP profiles and to quantify their relative contributions and interactions. Considering these interactions can facilitate economic structural designs and guidelines for these profiles, eliminate conservative assumptions, and update the current design charts and standards.
The introduction of pulwinding technology was one of the most prominent developments in pultrusion. In this process, off-axis wound fibres replace the continuous filament mat, and only a few studies have characterised its local buckling behaviour. The circular tube shape was not considered here, since local buckling is not critical in tubular PFRP profiles used in civil structural applications due to their relatively low slenderness ratio and uniformly distributed stresses [48-51]. The I-shape is the most common FRP profile, since it was inherited from the steel industry [52,53]. Nevertheless, box profiles are receiving more attention because of their higher structural stability and torsional stiffness, with all walls being restrained [54]. Despite that, the majority of the local buckling studies were conducted on I-shape profiles, as shown in Figure 3, which compares the number of experimental studies undertaken on the I-shape versus the box shape in civil structural applications. The I-shape geometry was studied over three times more frequently than the box shape up to 2014. With the introduction of pulwinding technology for commercial production, the number of studies on box profiles multiplied from 2014 onwards. Only three experimental studies on local buckling of pulwound FRP profiles were undertaken, in 2014 [55], 2016 [56], and 2019 [29].
Local buckling can be defined as a structural instability problem in which the cross-sectional elements (e.g., flange or web) of a compressively loaded member undergo an out-of-plane deformation and a stiffness reduction, which may lead to structural collapse [116-118]. It is a dominant failure mode for short-length FRP profiles, and its capacity depends on the elastic properties of the laminate, the geometry, and the supporting and loading conditions of the cross-sectional elements [119-121]. Theoretically, local buckling of FRP profiles is analysed by considering each wall (e.g., flange or web) individually as an orthotropic plate and modelling the restraint of the flange-web junctions. The Rayleigh-Ritz method is used to approximate the eigenvalue solution of the stability problem depending on the boundary and continuity conditions [85,122]. The theoretical approaches simulate this restraint (boundary condition) under three assumptions, considering the flange-web junction to be clamped, simply supported, or elastically restrained, as shown in Figure 4 for a box FRP profile. These three cases represent the upper, lower, and intermediate bounds of the buckling capacity (Ncr), respectively [123]. The explicit closed-form solutions for these cases are also presented in the same figure, where D11, D22, D12, and D66 are the flexural rigidities (the equivalents of EI per unit width) of the orthotropic plate and the coefficients τ1, τ2, and τ3 are functions of the rotational restraint (k) of the flange-web junction. It is worth mentioning that such closed-form equations are based on the classical laminated plate theory (CLPT), which does not account for shear deformations, and they consider only the geometry and layup of the plate [47,124]. They do not account for the flange-web junction (corner) geometry and cannot capture the interactions with other failure modes. Thus, treating the local buckling of PFRP profiles as an isolated plate instability problem results in inaccurate predictions due to the omission of the stress distribution from the adjacent walls. It is always preferable to consider the whole cross-sectional geometry when analysing buckling problems, and the finite element method (FEM), finite strip method (FSM), and generalised beam theory (GBT) are usually used for this purpose [125-127]. Nevertheless, the FEM surpasses the other numerical approaches due to its flexible and accurate simulation of geometry (e.g., tapering or thickening the corner radius). This is evident from the reviewed literature, as shown in Figure 5, which shows the percentage of each research methodology used to study local buckling and its parameters. FEM is the best candidate to study the design parameters and perform parametric studies because of its flexibility in handling complex geometries, different loading and boundary conditions, and combined failure problems [128-130].
Figure 3. The number of experimental studies of local buckling undertaken on I-shape versus box shape for civil structural applications (Box-shape: [17,29,53,55,56,72,95,96,98,100,101,104] and I-shape: [17,53,61,62,64,66-72,77,79,81,89-92]).
Figure 5. Percentage of each research methodology used to study local buckling and its parameters (FSM: finite strip method [63,64,168,169]; GBT: generalised beam theory [65,68,69,80,88]).
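For orientation, one commonly quoted pair of CLPT closed-form solutions for a long orthotropic plate of width b under uniform axial compression is reproduced below. The simply supported (SS) and clamped (CC) unloaded-edge cases bound the elastically restrained case; these are standard textbook forms with approximate coefficients in the clamped expression, so they should be read as indicative of the kind of formulas presented in Figure 4 rather than as exact transcriptions of them:

\[
N_{cr}^{SS} = \frac{2\pi^{2}}{b^{2}}\left(\sqrt{D_{11}D_{22}} + D_{12} + 2D_{66}\right),
\qquad
N_{cr}^{CC} \approx \frac{\pi^{2}}{b^{2}}\left(4.6\sqrt{D_{11}D_{22}} + 2.67\,D_{12} + 5.33\,D_{66}\right)
\]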
The local buckling behaviour of PFRP profiles varies depending on the loading condition, as shown in Figure 6, which depicts the distribution of stress and strain in a hollow box profile subjected to compression versus bending. In profiles subjected to compression, all the walls buckle with a smaller buckle half-wavelength, whereas in bending, only the walls under compressive stresses buckle, with a larger buckle half-wavelength [45,60,170]. Thus, local buckling is more critical in compression members than in flexural members due to the lower restraint provided by adjacent walls in compression members [24,171,172]. Consequently, investigating and optimising the local buckling behaviour should be undertaken under both loading conditions, in which compression provides the upper limit case and bending provides the lower limit case.
The critical manufacturing design parameters controlling the local buckling behaviour of FRP composites can be categorised into two groups: geometric parameters (wall slenderness, cross-sectional aspect ratio, and corner geometry) and layup parameters (axial-to-inclined fibre ratio, inclined fibre angle, and stacking sequence) [124,173-176]. After reviewing the available literature, it appears that these design parameters were not comprehensively studied for the closed-section geometry (box profiles) when compared to other geometries, as shown in Figure 7 and evident from the minimal number of publications for each manufacturing design parameter. Moreover, no study was found to investigate the effect of the corner radius on the local buckling capacity and failure mode of box profiles. Most of the publications on the layup parameters were undertaken for laminated plate geometry, not structural-level shapes, and the effect of the layup parameters on the corners, which represent critical failure zones, was not considered in such studies. Table 1 summarises the local buckling design formulas for compression box and I-shape members in current standards and guides [177-180]. The effect of the cross-sectional aspect ratio is neglected in [177], which relies on the maximum slenderness ratio only, and [179] does not consider the effect of the rotational restraint between the flange and web. All the design standards neglect the corner radius in their local buckling design formulas. These design parameters should be studied in combination to obtain their contributions and interactions, allowing for a better understanding of the structural performance of the box profile geometry and its unique stress distribution. Consequently, this will enhance the current standards and make them more accurate by considering the corner geometry and its interactions with the other design parameters in the design formulas of these standards.
Figure 7. Number of publications investigating each manufacturing design parameter, by geometry (Wall slenderness: Open-section [17,66,67,72,75-77,81,83,84,87,90,108,109,111], Plate [133,137,143-146,152,153,169], and Box [54,76,94]; Cross-sectional aspect ratio: Open-section [63-65,68,69,74,86,111] and Box [63,106,108]; Corner geometry: Open-section [61,89]; Fibre angle: Open-section [53,58,74,78,80,87,97,113,114], Plate [131-134,136,139,141-143,145-149,151,153,155,156,163,165,166,168], and Box [29,53,54,58,94,95,97]; Axial-to-inclined fibre ratio: Open-section [80,87,97], Plate [136,137,141,147,149,153,155,168], and Box [94]; and Stacking sequence: Open-section [80,87,97,109,111,112,114,140], Plate [131,135-138,141,142,144-149,151,153,155,156,159,160,162-167], and Box [54,97]).
Table 1. Local buckling design formulas for compression box and I-shape members in current standards and guides (formulas not reproduced here): Pre-standard for Load & Resistance Factor Design (LRFD) of Pultruded Fibre-Reinforced Polymer (FRP) Structures [177] (hollow box); Prospect for New Guidance in the Design of FRP [178] (hollow box); Structural Design of Polymer Composites EUROCOMP Design Code and Handbook [179] (orthotropic plate); and Guide for the Design and Construction of Structures made of FRP Pultruded Elements [180] (I-shape).
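As a rough illustration of how such formulas are evaluated, the hedged Python sketch below computes the simply supported and clamped bounds given earlier from an assumed set of flexural rigidities; the D values and wall width are invented placeholders, not values taken from any of the standards in Table 1.

# Hedged sketch: bounding local-buckling loads of a box-profile wall treated
# as a long orthotropic plate (CLPT). D values and width are illustrative.
import math

def n_cr_simply_supported(d11, d22, d12, d66, b):
    """Lower bound: unloaded edges simply supported (N/mm)."""
    return (2 * math.pi**2 / b**2) * (math.sqrt(d11 * d22) + d12 + 2 * d66)

def n_cr_clamped(d11, d22, d12, d66, b):
    """Upper bound: unloaded edges clamped (approximate coefficients)."""
    return (math.pi**2 / b**2) * (4.6 * math.sqrt(d11 * d22)
                                  + 2.67 * d12 + 5.33 * d66)

# Assumed laminate bending rigidities (N*mm) and wall width (mm):
D11, D22, D12, D66, b = 9.0e5, 3.0e5, 1.0e5, 1.5e5, 100.0

lo = n_cr_simply_supported(D11, D22, D12, D66, b)
hi = n_cr_clamped(D11, D22, D12, D66, b)
print(f"N_cr bounds: {lo:.1f} (SS) .. {hi:.1f} (clamped) N/mm")
# The elastically restrained flange-web junction yields a value between
# these two bounds, depending on the rotational restraint k.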
Findings from such studies were incorporated in design standards [111,182]. However, the boundary and interactions between local buckling and compressive failure in terms of the design parameters have not been reported for hollow box PFRP profiles. Studying these interactions can lead to facilitated design guidelines and optimised configurations of the design parameters to fully utilise the profile potentials. In the following sections, these manufacturing design parameters are discussed, the available literature on their effects and interactions is summarised, and the lack of knowledge and the potential research gaps are highlighted in order to develop the current design-for-manufacturing manuals.
Geometric Parameters of Hollow Box PFRP Profiles
The geometric parameters control the stability of a PFRP profile and determine its load capacity and failure mode [67,183]. These parameters are discussed in the following sections by summarising their effects on local buckling, comparing them across different geometries, and highlighting the available literature on their interactions.
Wall Slenderness
The wall slenderness (width-to-thickness ratio) significantly contributes to the local buckling capacity of thin-walled PFRP profiles [77,184]. Reducing the wall slenderness increases the profile stability and buckling capacity exponentially [152,167] and shifts the failure mode from local buckling to material compressive failure due to the increase in the flexural stiffness of the laminated walls [170,185]. The effect of the wall slenderness was studied extensively for laminated plate geometry subjected to uniaxial compressive load [133,143,146,152], and the effect of the layup properties on the buckling load capacity of slender plates was found to be negligible compared to their dimensions [137,144,153]. This finding agrees with the results of parametric studies on open-section PFRP columns [67,81,114], shown in Figure 8. When the slenderness ratio is reduced (thicker walls), the effect of the layup properties becomes significant; on the contrary, the effect of the layup properties becomes negligible when the wall slenderness is increased (thinner walls). Consequently, the layup properties should be considered carefully in the ultimate strength design of thick open-section profiles, while they can be considered only in the serviceability limit (deflection) design of thin open-section profiles [115]. However, the interaction of the wall slenderness with the other geometric parameters and failure modes of the box profile geometry was not studied in the available literature.
When comparing the available data, the box profiles exhibited higher buckling capacity than the open-section profiles over the same wall slenderness range, as shown in Figure 9. This behaviour can be attributed to the higher restraint and torsional rigidity provided on both sides of the walls of box profiles. It was noticed that the thick open-section profiles exhibited a low buckling-to-material strength ratio compared to their counterpart box profiles; thus, local buckling can be counted as an inevitable failure mode for open-section profiles. On the contrary, local buckling can be avoided for box profiles if the wall thickness is slightly increased (i.e., the slenderness slightly reduced), owing to their higher buckling-to-material strength ratio and the available optimisation range. In other words, local buckling can be eliminated in the design-for-manufacturing stage, allowing the ultimate material strength to be used rather than the lower buckling strength in the structural design stage of box PFRP profiles. In addition, it was noticed that most of the open-section profiles were widely studied (a larger number of references for the same wall slenderness) by experimental, theoretical, and numerical approaches to investigate the wall slenderness. On the contrary, the box profiles had fewer references for the same wall slenderness, which is a sign of few studies assessing the wall slenderness with various methodologies.
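The buckling-to-material strength ratio discussed above can be made explicit. Writing the wall buckling stress as σcr = Ncr/t and taking the simply supported CLPT bound as a conservative estimate, material crushing governs (and local buckling is avoided) whenever the following hedged condition holds; the exact transition slenderness depends on the actual edge restraint:

\[
\sigma_{cr} = \frac{N_{cr}}{t} = \frac{2\pi^{2}}{t\,b^{2}}\left(\sqrt{D_{11}D_{22}} + D_{12} + 2D_{66}\right) \;\ge\; \sigma_{c,\mathrm{ult}}
\]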
Only one study was found to investigate the contribution of multiple design parameters to the local buckling behaviour of pulwound hollow square profiles [94]. The study was conducted on axially loaded stub columns using a Taguchi (L9 array) design of experiment, as shown in Table 2, which lists the studied parameters and their levels. The resulting compressive strength and stiffness were analysed statistically to rank the effects of these parameters using the signal-to-noise ratio (SNR) and to determine the contribution of each parameter using the analysis of variance (ANOVA). The wall thickness was the dominant parameter for load capacity, with a contribution of 93.4%; the winding angle was the second parameter with 2.6%, and the axial-to-wound fibre ratio was ranked third with 1.2%. Moreover, the effect of the wall slenderness on the boundary between local buckling and compressive failure of box profiles was reported in this study: the failure mode of the pulwound hollow square profile was estimated to change from local buckling to compressive failure at a wall thickness of 6.75 mm, as shown in Figure 10. However, the interactions between the studied parameters were not captured because of the limitation of the Taguchi design of experiment (it uses a reduced, not full, factorial experiment matrix). No study was found to address the relative contributions and interactions of the wall slenderness and the other geometric parameters. Initiating such studies on the design parameters of pulwound box profiles can provide design guidelines and optimal design configurations with improved utilisation, weight, and cost characteristics.
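As a hedged illustration of the Taguchi post-processing described above, the Python sketch below computes the larger-the-better signal-to-noise ratio and a crude ANOVA-style percentage contribution for one factor; the nine strength values and the level assignment are invented placeholders, not the data of [94].

# Hedged sketch: Taguchi "larger-the-better" S/N ratio and a crude ANOVA-style
# percentage contribution. All numbers are invented placeholders.
import numpy as np

def sn_larger_better(y):
    """S/N = -10*log10(mean(1/y^2)) for responses that should be maximised."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# L9-style results: nine runs, compressive strength (MPa), three levels
# of the wall-thickness factor (assumed assignment).
strength = np.array([210, 225, 238, 305, 318, 296, 402, 388, 415], dtype=float)
thickness_level = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

print(f"S/N of all runs: {sn_larger_better(strength):.2f} dB")

# Percentage contribution of the thickness factor: sum of squares between
# level means relative to the total sum of squares.
grand = strength.mean()
ss_total = np.sum((strength - grand) ** 2)
ss_factor = sum(
    strength[thickness_level == lv].size
    * (strength[thickness_level == lv].mean() - grand) ** 2
    for lv in np.unique(thickness_level)
)
print(f"Thickness contribution ~ {100 * ss_factor / ss_total:.1f}% of total SS")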
Cross-Sectional Aspect Ratio
The cross-sectional aspect ratio (web height/flange width) defines the unsupported length of each wall and the major and minor axes of the cross-section. It affects the critical buckling load and stability of PFRP profiles [63] and alters their failure mode [186-188]. While maintaining a constant cross-sectional area, the flange and web buckling capacities were found to increase and decrease, respectively, when the cross-sectional aspect ratio is increased for both box [63] and open-section beams [172].
The significant effect of the cross-sectional aspect ratio was characterised under compression and bending for open-section profiles [59,86]. Increasing this ratio three times was found to decrease the buckling strength down to 42.8% under compression, while increasing the buckling strength up to 57.0% under bending. Moreover, the optimal cross-sectional aspect ratios of open-section PFRP profiles were investigated for column [65,109,111] and beam [65,82] applications. In addition, the interaction between the cross-sectional aspect ratio and the layup properties was studied for box [63] and I-shape [64] GFRP columns; the layup properties became insignificant when the flange width was increased and local buckling controlled the behaviour, as shown in Figure 11a,b, respectively.
Moreover, the interaction between compressive failure and local buckling failure modes was studied for box [96] and I-shape [69] GFRP columns. Figure 12 visualises this interaction for I-shape GFRP columns. The first stub column I1 (narrow flange) showed an interactive failure mode between compressive crushing of fibres and local buckling of walls (buckling-induced material crushing), since it has the lowest local slenderness. On the other hand, the second and third stub columns (I2 and I3, respectively) failed in local buckling, with larger waviness in I3 (wide flange). In addition, the boundaries between lateral buckling, web buckling, flange buckling, and interactive buckling failure modes of I-shape PFRP beams were investigated [74]. It was concluded that interactive (local-lateral) distortional buckling is prominent over the other buckling types and should be considered in the design stage. The interaction of the failure modes influenced the layup properties, as the optimal fibre angle was θ = ±45° against local buckling and θ = 60°-70° against interactive buckling.
Regarding the box profile geometry, the axial buckling capacity of walls in hollow square beams was reported to be higher than in hollow rectangular beams due to the higher buckling tendency in the weakest direction of the rectangular cross-section [58]. Nevertheless, the overall buckling moment of the beam under bending increases when the cross-sectional aspect ratio is increased, since the wall slenderness of the top flange, which carries the majority of the compressive stresses, is decreased [106]. One study was found to examine the interaction between the walls of CFRP box beams [54]; it was reported that webs with a smaller slenderness ratio provide a higher buckling capacity of the flange due to the higher rotational restraint supplied by the thicker webs to the flange. Another study investigated the boundary of the failure modes of box GFRP beams in terms of the cross-sectional aspect ratio [108]. The effect of the cross-sectional aspect ratio on the buckling of the top flange (spar cap) was significant compared to its effect on the shear web. This was attributed to the higher compressive stresses acting on the top flange, which made its buckling load more sensitive to the change of dimensions. The optimal buckling capacity was obtained at the inflection point of the flange buckling and web buckling failure modes, which is denoted by the "○" symbol in Figure 13. This point represents the best cross-sectional aspect ratio for maximum buckling capacity and minimum material usage of the beam.
Rectangular box profiles (with web height/flange width ≥ 1.5) were found to exhibit a post-buckling trend in their load-displacement curves under compression loading [17]. Figure 14 compares the load-displacement curves of hollow square and rectangular PFRP profiles subjected to axial compression. The hollow square profile exhibited linear elastic behaviour until the peak (buckling) point, then failed. On the other hand, the hollow rectangular profile showed linear elastic behaviour until the buckling point of the wider walls; then the structural stiffness degraded due to the loss of stability of the wider walls, and the load capacity increased along a new equilibrium path until failure occurred. Although the cross-sectional area of the rectangular profile is 26.9% higher than that of the square profile, its buckling strength was 54.7% less than the square profile due to the higher wall slenderness of the wide walls, which caused earlier buckling and suppressed the profile potentials. However, no study was found to address the interactions between the cross-sectional aspect ratio and the other geometric parameters, or the effect of the interaction between the flange and webs on the stability and overall structural behaviour of pulwound box PFRP profiles. Such studies can provide optimal design configurations and better design guidelines, as the current design formulas are conservative, consider only the wall with the maximum slenderness ratio for buckling capacity estimation, and do not include the interaction between the flange and the webs and their corner radius.
Corner Geometry
The corner (flange-web junction) geometry of PFRP profiles is a critical manufacturing parameter affecting the production process, the pulling force, and the heated die settings. It is considered to be a weak point of premature failure due to stress concentration at this critical zone [189-191]. It is recommended to increase the inner corner radius (fillet) to prevent cracking by uniformly distributing the stresses and preventing their concentration [192], as shown in Figure 15. Increasing the outer corner radius to be equal to the inner radius plus the wall thickness can also facilitate the production process and help to avoid thermal-induced cracks [192].
Figure 15. Recommended configurations of the corner of PFRP profiles [192].
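The outer-radius recommendation above amounts to keeping the corner concentric with the wall thickness t, which can be restated as:

\[
r_{\mathrm{outer}} = r_{\mathrm{inner}} + t
\]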
One study was found that experimentally characterised the structural behaviour of the corner of commercial box GFRP beams with longitudinal glass rovings and continuous strand mat (CSM) layups [101]. Microscopic photos were taken to diagnose resin-rich zones and fibre wrinkling, as shown in Figure 16a,b. Although these manufacturing defects were distributed along the walls, the failure of the box GFRP beams was observed to initiate at the corners owing to fibre discontinuity and stress concentration there, as shown in Figure 16c. It was recommended that the steep change in the inner corner geometry be changed from a right angle to a fillet in order to distribute the stress uniformly between the walls.
Regarding the local buckling behaviour, the corners (initial radius 2.38 mm) of open-section (I-shape) PFRP beams were enhanced by bonding polyester pultruded equal-leg angles (38 mm × 38 mm × 6.4 mm) or hand-layup fillets (38 mm) on the top corner [61], as shown in Figure 17. In both cases, the load capacity was significantly enhanced, by a factor of 1.5, because the added geometry increased the rotational stiffness and strength of the corners and allowed a uniform distribution of stresses. The failure mode shifted from buckling of the top flange to compressive failure of the fibres, with the ultimate material strength fully utilised. In another study, CFRP layers and GFRP stiffening plates were used to strengthen the corners of I-shape beams to increase their buckling capacity [89]. This approach proved very effective in preventing local buckling of the flange and in enhancing the flange-web junction and the flexural strength of the beams. In these two studies, the fillets performed better than the angles and plates because their gradual change of geometry produced a lower stress concentration than the sudden change in cross-section caused by the angles and plates.
However, no study was found that addresses the effect of the inner and outer corner radii, as manufacturing parameters, on the local buckling capacity of PFRP profiles, nor one that addresses the effect of corner geometry on the local buckling of box PFRP profiles. Moreover, the interaction between the layup properties and the corner radii has not been studied for box profiles, since most of the reported investigations of layup parameters considered laminated plate geometry. In addition, the effect of the continuous confinement provided by the wound fibres around the corners of pulwound box profiles has not been reported. Currently, standards and design manuals do not include the corner radius as a design parameter in their equations and structural designs. The corner geometry (e.g., the inner-to-outer radii ratio) therefore needs to be investigated so that its contribution to the local buckling capacity can be reflected in the related design equations. Consequently, understanding the role of corner geometry as a design parameter for local buckling will lead to more stable designs of box PFRP profiles with enhanced load capacity and the avoidance of buckling failure.
Layup Parameters of Hollow Box PFRP Profiles
The layup properties define the anisotropy and mechanical properties of FRP profiles in the longitudinal and transverse directions and directly affect their local buckling behaviour [193]. These properties should be tailored to the intended application, since a design addresses a specific geometry and loading condition and cannot be generalised to all composite structures [143,194]. The layup parameters governing local buckling are discussed in the following sections by summarising their effects, comparing them across different geometries, and highlighting the available literature on their interactions.
Axial-to-Inclined Fibre Ratio
For civil structural applications, the layup of PFRP profiles consists of longitudinal fibre rovings, which provide the required axial and flexural stiffness, and off-axis (inclined) fibres, which enhance the shear and transverse properties [42,195]. The ratio of axial to inclined fibres thus shapes the anisotropy and mechanical properties of the laminated walls. In general, it is recommended to add inclined fibres alongside the axial plies to enhance the off-axis mechanical properties, damage tolerance, and stability of laminated plates [196,197]. Inclined fibres are also needed to meet the web stiffness and strength requirements of PFRP beams [198,199].
Regarding the effect of geometry on this ratio, increasing the axial fibre percentage was found to increase the axial buckling resistance of laminated plates [123]. On the contrary, increasing the inclined fibre percentage increases the local buckling strength of open-section FRP columns owing to the higher rotational rigidity between the orthogonal walls [200]. No study was found on the interaction between the axial-to-inclined fibre ratio and the other layup properties, or on its effect on the geometric parameters of pulwound box FRP profiles.
Inclined Fibre Angle
In classical laminated plate theory (CLPT), FRP composite plates with an angle-ply ([±θ]_S) layup exhibit the maximum local buckling capacity at a fibre angle θ = ±45°, since this maximises the bending stiffness parameters D_ij [153,201]. However, axial fibre rovings must be added to meet the axial and flexural stiffness requirements of civil structural applications. Moreover, it has been shown that introducing fibre angles other than the traditional 0°, ±45°, and 90° can also provide improved local buckling designs for different geometries and loading conditions [147]. The contribution of the fibre angle to the buckling capacity was found to be significant for certain geometries.
For instance, small fibre misalignments of as little as ±2° were noticed to affect the buckling capacity of GFRP tubes by up to 7.8% [161]. The optimal fibre angle for maximum buckling capacity is a function of the geometry, boundary conditions, and loading conditions [133,143,194]. Under flexural loading, increasing the web orthotropy was found to produce the largest increase in the buckling capacity of the flange, owing to the increased rotational restraint at the flange-web junction; moreover, the increase in flange buckling capacity is higher when the flange orthotropy is low [54]. For open-section FRP beams, the buckling load was found to decrease as the fibre angle increases [58].
Moreover, the interaction between the fibre angle and the stacking sequence was found to be significant and may shift the optimal fibre angle depending on the geometry and the boundary and loading conditions [149,168]. For instance, antisymmetric laminated plates require a fibre angle of 25° to obtain the maximum buckling load, unlike symmetric laminates [157]. Even for symmetric layups, the optimal fibre angle for maximum buckling of GFRP cylindrical shells changes depending on the introduction or removal of axial fibres [167], as shown in Figure 18. Stacking the inclined plies on the outer side to confine the axial fibres enhances the buckling capacity. Regarding pulwound FRP profiles, no study was found that investigates the effect of the winding angle on the corner geometry or its interactions with the other layup parameters under compression or bending. Assessing the contribution of this parameter to the buckling resistance of pulwound box PFRP profiles would alleviate the lack of knowledge for this special shape.
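To make the CLPT statement above concrete, the standard result for a long, simply supported, specially orthotropic plate under uniaxial compression is sketched below; it is a textbook expression, not taken from the studies reviewed here, and it shows that the fibre angle enters only through the bending stiffnesses D_ij:

% Classical local buckling load (per unit width) of a long, simply
% supported, specially orthotropic plate of width b under uniaxial
% compression (standard CLPT result):
\[
  N_{x,\mathrm{cr}} \;=\; \frac{2\pi^{2}}{b^{2}}
  \left( \sqrt{D_{11} D_{22}} \; + \; D_{12} + 2 D_{66} \right)
\]
% The ply angle enters only through the bending stiffnesses D_ij, which
% is why an angle-ply layup near +-45 deg, which maximises the shear- and
% twisting-related terms, tends to maximise the buckling capacity.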
Stacking Sequence
The stacking sequence of laminated composites affects their stability, deflection response, interlaminar stresses, post-buckling behaviour, and progressive failure [202][203][204]. Its optimal configuration to resist local buckling depends on the geometry and on the boundary and loading conditions, and has to be determined specifically for the intended application [138,205]. In general, stacking the inclined plies at the outer surface of a laminated plate enhances the local buckling resistance under axial compression owing to the increase in confinement [162,206]. On the contrary, stacking the axial fibres at the outer surface increases the plate buckling resistance against transverse compression [207]. A compromise between the buckling capacity and the other mechanical properties should be considered in the design, since stacking axial fibres at the outer surface yields higher tensile and flexural moduli [208]. In general, stacking sequences with elastic coupling are not preferred for compressively loaded members, as they are vulnerable to manufacturing imperfections, buckling, bending, and warping due to thermal effects [120,196,207]. Thus, symmetric and balanced layups are usually used to minimise the coupling effects. For simply supported laminated plates, the interaction between the stacking sequence and fibre angle was found to be significant at θ = 45° [209], as shown in Figure 19. The minimum buckling load was obtained when the −θ plies were outermost from the mid-plane, owing to the maximum effect of bending-twisting coupling (maximum value of D16 + D26). The reduction in the buckling load for this case peaked at θ = 45°, with a 25% drop in load from the optimal case ([+θ/−θ/−θ/+θ]_S).

Figure 19. Effect of fibre angle on the buckling load of a simply supported laminated plate with symmetric stacking sequences [209].
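A short CLPT note clarifies why ply position matters; the definition below is the standard bending-stiffness expression, not a result from [209]:

% CLPT bending stiffnesses of a laminate with N plies, where z_k are the
% ply interface coordinates measured from the mid-plane and Qbar_ij are
% the transformed reduced stiffnesses of ply k:
\[
  D_{ij} \;=\; \frac{1}{3} \sum_{k=1}^{N} \bar{Q}_{ij}^{(k)}
  \left( z_{k}^{3} - z_{k-1}^{3} \right)
\]
% Because of the cubic z-weighting, moving a ply outward greatly increases
% its contribution; placing -theta plies outermost maximises D16 + D26
% (bending-twisting coupling), which lowers the buckling load, consistent
% with the trend reported above for [+-theta]_S plates.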
Regarding the effect of geometry, the stacking sequence was found to affect the boundaries between different failure modes of CFRP cylindrical shells [160], as shown in Figure 20. Reducing the shape factor (radius/thickness) shifts the failure mode from local buckling towards compressive failure. The 0° laminate possesses the maximum axial compressive strength and the largest local buckling failure zone because of the axial direction of the fibres and the minimum circumferential confinement (maximum out-of-plane waviness), respectively. Conversely, the 90° laminate exhibits the minimum axial compressive strength and the smallest local buckling failure zone because of the transverse direction of the fibres and the maximum circumferential confinement (minimum out-of-plane waviness), respectively. The [55/−55/0_6]_S laminate presents the optimal compromise against both local and global buckling. On the contrary, angle-ply laminates with ±25° and ±90° plies possess the highest local buckling strength for CFRP cylindrical shells with geometric imperfections [159]. When comparing cross-ply and angle-ply layups, cross-ply layups exhibited optimal buckling resistance for laminated plates under uniaxial compression [138,183], whereas angle-ply layups perform better for cylindrical shells [155]. For open-section profiles, angle-ply laminates obtained a higher buckling load than quasi-isotropic laminates [113]. The buckling capacity of these profiles decreased as the fibre angle increased, and cross-ply laminates were observed to sustain a larger buckling load than angle-ply laminates when the fibre angle exceeds 30° [58,80]. The effect of the stacking sequence on the buckling capacity of laminated plates was found to decrease as their dimensions increase [128], but it becomes significant in open-section structural-level columns with slender walls [140]. No study was found on the effect of stacking continuous wound fibres in different sequences on the corner geometry of pulwound box PFRP profiles, or on the interaction between the stacking sequence and the other layup parameters in such profiles.
Conclusions
Hollow box PFRP profiles are increasingly used as structural elements in civil engineering applications. Although studies and standards have been developed to facilitate the design of PFRP profiles, there is still a lack of knowledge regarding the local buckling design parameters (layup and geometry) for the box profile geometry. This complicates the design of these profiles and the full use of their potential, as evidenced by the limited range of specifications among the available commercial profiles. This article has presented a literature review of the local buckling design parameters controlling the structural behaviour of box PFRP profiles. Although most of these parameters have been studied individually, a comprehensive study is still needed to quantify their contributions and interactions, which would provide practical design guidelines and recommended configurations of the design parameters. This review outlines the current state of knowledge and the investigations still to be conducted, and thus provides a useful reference for researchers and design engineers. Furthermore, it presents a benchmark for the next generation of design guidelines, which will broaden the use of PFRP in construction by eliminating the current difficulties in PFRP profile design. Based on this review, the current state of knowledge and future trends for optimising these profiles and their design parameters are summarised as follows:
• Hollow box PFRP profiles feature higher structural stability and torsional rigidity than open-section profiles owing to the restraint at both ends of each wall and the resulting stress distribution. However, their design parameters have not been studied as comprehensively as those of open-section and laminated plate geometries. While local buckling is inevitable for open-section profiles, it can be avoided for box profiles if the wall slenderness is optimised, thanks to the high buckling-to-material strength ratio and the available optimisation range. This allows the design to exploit the ultimate material strength rather than the lower buckling strength.
• The flange-web junction (corner) radius and its effect on the local buckling of hollow box PFRP profiles have not been studied or quantified, even though its effect on the buckling behaviour and failure mode of open-section profiles is significant. Moreover, the interaction of the layup properties or the flange-web slenderness with the corner geometry has not been studied for the box profile geometry. In addition, the effect of the continuous confinement provided by the wound fibres around the corners of pulwound box profiles has not been reported. The corner (fillet) radius is not included in the analysis and design equations of box PFRP profiles, and no study was found that addresses the effect of the inner and outer corner radii, as manufacturing parameters, on the local buckling capacity of PFRP profiles.
• Pulwound box FRP profiles were recently introduced for infrastructure applications, offering better transverse and circumferential properties. However, studies are still needed to comprehensively address all the critical design parameters controlling the local buckling of these profiles and to quantify their relative contributions and interactions. Accounting for these interactions can enable economical structural designs and guidelines for these profiles, eliminate conservative assumptions, and update the current design standards and manuals. Understanding the contributions and interactions of these parameters will broaden the use of these profiles, which offer competitive structural performance and cost versus conventional construction materials.
• As with other structural shapes, design curves and failure maps need to be constructed for hollow box PFRP profiles, accounting for the interactions and showing the shift in failure modes in terms of the critical design parameters. Pursuing these review findings, especially the importance of the interactions, will improve the current design guidelines, facilitate economical and competitive designs, and enable the manufacture of optimised profiles for civil structural applications.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
"year": 2021,
"sha1": "696c9c5e7339665b035fb0733dbf020d4448f5de",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/13/23/4159/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b322d2f77b6ede0c579e6149d7fd22e885f1b48c",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
The G+M eclipsing binary V530 Orionis: A stringent test of magnetic stellar evolution models for low-mass stars
We report extensive photometric and spectroscopic observations of the 6.1-day period, G+M-type detached double-lined eclipsing binary V530 Ori, an important new benchmark system for testing stellar evolution models for low-mass stars. We determine accurate masses and radii for the components with errors of 0.7% and 1.3%, as follows: M(A) = 1.0038 ± 0.0066 M⊙, M(B) = 0.5955 ± 0.0022 M⊙, R(A) = 0.980 ± 0.013 R⊙, and R(B) = 0.5873 ± 0.0067 R⊙. The effective temperatures are 5890 ± 100 K (G1V) and 3880 ± 120 K (M1V), respectively. A detailed chemical analysis probing more than 20 elements in the primary spectrum shows the system to have a slightly subsolar abundance, with [Fe/H] = −0.12 ± 0.08. A comparison with theory reveals that standard models underpredict the radius and overpredict the temperature of the secondary, as has been found previously for other M dwarfs. On the other hand, models from the Dartmouth series incorporating magnetic fields are able to match the observations of the secondary star at the same age as the primary (3 Gyr) with a surface field strength of 2.1 ± 0.4 kG when using a rotational dynamo prescription, or 1.3 ± 0.4 kG with a turbulent dynamo approach, not far from our empirical estimate for this star of 0.83 ± 0.65 kG. The observations are most consistent with magnetic fields playing only a small role in changing the global properties of the primary. The V530 Ori system thus provides an important demonstration that recent advances in modeling appear to be on the right track to explain the long-standing problem of radius inflation and temperature suppression in low-mass stars.
INTRODUCTION
The discovery of V530 Ori (HD 294598, BD−03 1283, 2MASS J06043380−0311513) as an eclipsing binary was made by Strohmeier (1959), who established an orbital period for the system of 6.110792 days. The depth reported for the primary eclipse was about 0.7 mag, but no secondary eclipse was seen in these early photographic measurements. The primary star is of solar type. The object received little attention following the discovery, other than the occasional measurement of times of primary eclipse, which was the only eclipse detected until recently. It was claimed by Sahade & Berón Dávila (1963) to be a possible member of the Collinder 70 cluster, a proposal that appears to have since been dismissed. Faint spectral lines of the secondary, with about the same width as those of the primary, were first detected in 1985 by Lacy (1990), but remained elusive in subsequent high-resolution observations (see, e.g., Popper 1996). Similarly, no sign of the secondary eclipse could be seen in more recent photometric monitoring, implying either a very faint and cool companion, or possibly an eccentric orbit and a special orientation such that no secondary eclipses occur.
This motivated us to begin our own program of spectroscopic observation in 1996. Our interest in the system was piqued when we were able to derive the first single-lined spectroscopic orbit, which is indeed eccentric but only slightly so, and to predict the exact location of the secondary eclipse, which we were then successful in detecting with more targeted photometric observations. The depth in V is less than 3%. Continued analysis has enabled us to also measure radial velocities for the secondary, and to fully characterize the binary.
The confirmed presence of a late-type star in V530 Ori makes it a rare example of a system containing a solar-type primary that is easy to study and provides access to other key properties of the binary, together with a late-type secondary that is very faint but still measurable. As such, V530 Ori is potentially very useful for testing models of stellar evolution if accurate properties for the stars can be derived, by virtue of the greater leverage afforded by a mass ratio significantly different from unity. Previous measurements for M dwarfs have shown rather serious disagreements with models, in the sense that such stars appear larger and cooler than predicted by theory (e.g., Torres & Ribas 2002; Ribas 2003; López-Morales & Ribas 2005; Torres 2013). This is now widely believed to be related to stellar activity (magnetic inhibition of convection, and/or star spots; Mullan & MacDonald 2001; Chabrier et al. 2007; Feiden & Chaboyer 2012), but there are relatively few systems containing M stars with complete information available for testing this hypothesis.
Here we provide a full description of our spectroscopic and photometric observations of V530 Ori, leading to the first determination of accurate properties for the stars including the absolute masses and radii. We report also a detailed chemical analysis of the system based on the solar-type primary star, bypassing the usual difficulties and limitations of determining the metallicity of M stars. We additionally estimate the surface magnetic field strengths for both components, an important piece of information permitting a more meaningful comparison with recent models that incorporate magnetic fields. Our results provide one of the clearest illustrations that such models are indeed able to reproduce the measured properties of low-mass stars.
EPHEMERIS
Dates of minimum light for V530 Ori were collected from the literature and from our own unpublished photometric measurements (see Table 1), and were used to establish the ephemeris. The measurements (34 timings for the primary and 7 for the secondary) span about 82 years, or ∼4900 orbital cycles of the binary. Uncertainties for the older timings and for some of the more recent ones have not been published, so we determined them by iterations to achieve reduced χ² values near unity, separately for each type of measurement (σ = 0.028, 0.011, and 0.0001 days for the photographic, visual, and photoelectric/CCD data). We found we also needed to rescale the published photoelectric/CCD errors by factors of 1.8 and 2.9 for the primary and secondary, respectively. A linear weighted least-squares fit using the primary and secondary minima together resulted in

Min I (HJD) = 2,453,050.826061(91) + 6.11077840(33) E,
Min II (HJD) = 2,453,053.6623(16) + 6.11077840(33) E,

which we have used in the analysis that follows. Uncertainties are indicated in parentheses in units of the last significant digit.
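As a quick illustration of how such a linear ephemeris is used, the minimal sketch below predicts times of minimum and forms O−C (observed minus computed) residuals from the fitted constants quoted above; the sample observed timing in the example is made up for illustration.

# Minimal sketch: predict eclipse times of V530 Ori from the linear
# ephemeris and form O-C residuals against observed timings.
T0_PRI = 2453050.826061  # HJD of reference primary minimum
T0_SEC = 2453053.6623    # HJD of reference secondary minimum
PERIOD = 6.11077840      # orbital period in days


def predicted_minimum(epoch: int, primary: bool = True) -> float:
    """Heliocentric Julian Date of the eclipse at integer cycle `epoch`."""
    t0 = T0_PRI if primary else T0_SEC
    return t0 + PERIOD * epoch


def o_minus_c(hjd_observed: float, primary: bool = True) -> float:
    """O-C residual (days) of an observed timing against the ephemeris."""
    t0 = T0_PRI if primary else T0_SEC
    epoch = round((hjd_observed - t0) / PERIOD)  # nearest integer cycle
    return hjd_observed - predicted_minimum(epoch, primary)


if __name__ == "__main__":
    print(predicted_minimum(100))   # primary minimum 100 cycles after T0
    print(o_minus_c(2453661.9046))  # hypothetical observed timing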
Secondary eclipses occur at a phase of 0.46414 (27), clearly showing that the orbit is eccentric. Some degree of apsidal motion is therefore expected. An ephemeris curve (Lacy 1992a) was fit to all the data with the same weighting scheme as above, adopting values for the eccentricity and inclination angle derived in our spectroscopic and light curve analyses below, and is illustrated in Figure 1. However, the apsidal period is only poorly determined from this fit (U = 7800 ± 22000 years).
SPECTROSCOPIC OBSERVATIONS
V530 Ori was monitored spectroscopically with three different instruments over a period of more than 17 years. Observations began at the Harvard-Smithsonian Center for Astrophysics (CfA) in 1996 June with a Cassegrain-mounted echelle spectrograph ("Digital Speedometer", DS). The spectra consist of a single order 45 Å wide recorded with an intensified photon-counting Reticon detector at a central wavelength of 5187 Å, which includes the Mg I b triplet. The resolving power provided by this setup is R ≈ 35,000. Additional observations were collected with a nearly identical instrument attached to the 4.5 m-equivalent Multiple Mirror Telescope (also on Mount Hopkins), prior to its conversion to a monolithic 6.5 m telescope. The 74 usable spectra from these instruments have signal-to-noise ratios (SNRs) ranging from about 10 to 50 per resolution element of 8.5 km s−1. Observations of the dusk and dawn sky were made every night to monitor the velocity zero point, and to establish small run-to-run corrections applied to the DS velocities reported below.
We gathered a further 30 spectra of V530 Ori at the Kitt Peak National Observatory (KPNO) from 1999 March to 2001 January, using the coudé-feed telescope and the coudé spectrometer. The spectra cover the wavelength region 6450-6600Å, and include the Hα line. The 250 µm slit and OG 550 filter projected onto 0.186Å on the detector. The detector was a Ford 3072 × 1024 pixel CCD (F3KB) with 15 µm square pixels. The 'A' grating (632 grooves mm −1 ) was used in the second order with Camera 5 (a folded Schmidt design). The spectra were flat-fielded and wavelength calibrated following standard procedures, based on quartz lamp flats and Th-Ar emission tube spectra. Observations of the standard stars ι Psc or β Vir were taken with the same setup during the same nights in order to correct for instrumental drifts. The adjustments assumed constant velocities of +5.636 km s −1 for ι Psc (HD 222368) and +4.468 km s −1 for β Vir (HD 102870), from Nidever et al. (2002).
Note to Table 1. - This table is available in its entirety in machine-readable and Virtual Observatory (VO) forms in the online journal. A portion is shown here for guidance regarding its form and content. (a) Timing uncertainties as published, or as measured in the case of our own photometric observations. Adopted uncertainties for the photographic, visual, and photoelectric/CCD measurements with no published errors are 0.028, 0.011, and 0.0001 days, respectively. Other errors have been scaled by iterations during the ephemeris fit by factors of 1.8 and 2.9 for the primary and secondary (see text). (b) 'Epoch' refers to the cycle number counted from the reference time of primary eclipse (see text). (c) 'Ecl' is 1 for primary eclipses and 2 for secondary eclipses. (d) 'Type' is PG for photographic, V for visual, and PE for photoelectric or CCD measurements. (e) Sources are: (1) Strohmeier (1959); (2) Isles (1988); (3) Lacy & Fox (1994); (4) Lacy et al. (1999); (5) This paper; (6) Lacy (2002).

Finally, we gathered additional spectra of V530 Ori with the Tillinghast Reflector Echelle Spectrograph (TRES). This bench-mounted instrument yields a resolving power of R ≈ 44,000, and spectra spanning 3860-9100 Å in 51 orders. The SNRs range from 13 to 121 per resolution element of 6.8 km s−1. Instrumental drifts for TRES are below 10 m s−1 in velocity, which is negligible for our purposes.
Lines of the very faint secondary star in V530 Ori are not immediately obvious in any of our spectra, even in the redder ones from KPNO, but its radial velocities (RVs) can nevertheless be measured accurately along with those of the primary using the two-dimensional cross-correlation algorithm TODCOR (Zucker & Mazeh 1994).
Templates for the DS and TRES spectra were selected from a large library of calculated spectra based on model atmospheres by R. L. Kurucz (see Nordström et al. 1994; Latham et al. 2002) and a line list prepared by J. Morse. These templates cover approximately 300 Å centered on the Mg I b region, and include numerous other lines, mainly of Fe, Ca, and Ti. For the KPNO spectra we used a different template library based on PHOENIX models (see Husser et al. 2013), kindly computed for us by I. Czekala for the wavelength region of interest. Our synthetic templates are parametrized in terms of the effective temperature (Teff), rotational velocity (v sin i when seen in projection), surface gravity (log g), and metallicity, [Fe/H]. The latter two have a minimal impact on the velocities, so we adopted fixed values of log g = 4.5 and solar composition for both stars. The optimum template parameters (Teff and v sin i) for the primary were determined by running grids of cross-correlations and seeking the best template match, as measured by the mean cross-correlation coefficient averaged over all exposures. This was done separately for the three sets of spectra, with very consistent results. We obtained Teff = 6000 K and v sin i = 10 km s−1. The faintness of the secondary, which has a flux some 40 times smaller than that of the primary, prevents us from determining its template parameters in a similar way. Instead we relied on the temperature difference inferred from our light curve solutions in Sect. 6, and we assumed the star is rotating synchronously. The latter is a reasonable assumption, as the timescale for synchronization of the secondary (∼10⁷ yr; see, e.g., Hilditch 2001) is much shorter than the ∼3 Gyr age we estimate for the system later in Sect. 8. With these constraints the template parameters for the secondary were Teff = 4000 K and v sin i = 6 km s−1.
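A minimal sketch of the template grid search described above follows; the grid values and the helpers make_template and peak_correlation are hypothetical stand-ins, not the actual TODCOR machinery or the CfA template library.

# Minimal sketch: pick template parameters (Teff, vsini) by maximising
# the mean cross-correlation coefficient over all exposures.
import itertools
import numpy as np

TEFF_GRID = [5500, 5750, 6000, 6250]  # K, illustrative grid points
VSINI_GRID = [5, 10, 15, 20]          # km/s, illustrative grid points


def best_template(spectra, make_template, peak_correlation):
    """Return the (Teff, vsini) pair with the highest mean correlation.

    `make_template` synthesizes a template spectrum for given parameters;
    `peak_correlation` returns the peak cross-correlation coefficient of
    one observed spectrum against that template. Both are placeholders.
    """
    results = {}
    for teff, vsini in itertools.product(TEFF_GRID, VSINI_GRID):
        template = make_template(teff=teff, vsini=vsini,
                                 logg=4.5, feh=0.0)  # fixed, as in the text
        # Average the peak correlation over all exposures of the star.
        results[(teff, vsini)] = np.mean(
            [peak_correlation(spec, template) for spec in spectra])
    return max(results, key=results.get)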
The final heliocentric velocities from the TRES spectra are the average of the measurements from the three echelle orders covered by the templates, and are listed in Table 2. Typical uncertainties are 0.05 km s −1 for the primary (star A) and 1.6 km s −1 for the faint secondary (star B). Experience has shown that the very narrow wavelength range of the DS spectra (45Å) can sometimes lead to systematic errors in the RVs due to residual line blending as well as lines shifting in and out of the spectral window as a function of orbital phase (see Latham et al. 1996). We investigated this by means of numerical simulations for each spectrum, and found the effect to be significant (shifts of up to 7 km s −1 for the secondary, but only 0.02 km s −1 for the primary). We therefore applied corrections to the individual velocities in the same way as done in previous studies with similar spectroscopic material (e.g., Torres et al. 1997;Lacy et al. 2010) in order to remove the bias. These adjustments increase the minimum masses by about 4% for the primary star and 2% for the secondary. The final DS velocities with corrections included are given also in Table 2. They have typical uncertainties of 0.5 km s −1 and 6.7 km s −1 for the primary and secondary, respectively. RVs from the KPNO observations are based on the entire wavelength range of those spectra except for the broad Hα line, which was masked out. Those measurements (two being excluded here for giving very large residuals from the orbit described in the next section) are presented with the others in Table 2. Their uncertainties are typically 0.4 km s −1 for the primary and 5.4 km s −1 for the secondary.
Our TODCOR analyses also provided an estimate of the light ratio between the primary and secondary at the mean wavelength of our spectra (see Zucker & Mazeh 1994). For the DS observations we obtained ℓB/ℓA = 0.014 ± 0.002 in the Mg I b region, corresponding to a magnitude difference ∆m = 4.6. The TRES spectra yielded a similar value of 0.013 ± 0.002 for the average of the three orders used to measure RVs, centered also on the Mg I b region. As expected from the spectral types, the secondary appears brighter at the redder wavelengths of the KPNO spectra, and the light ratio obtained there is 0.042 ± 0.003 at a mean wavelength of 6410 Å.
Our TRES spectra display moderately strong emission cores in the Ca II H and K lines, which is indicative of stellar activity. Measurement of the radial velocity of the emission cores shows that they follow the center of mass of the primary, and are thus associated with that star. Further evidence of activity is presented below.
3.1. Spectroscopic orbital solution

Separate spectroscopic orbital solutions using the three velocity data sets were carried out to check for potential systematic differences, with the ephemeris held fixed at the values in Sect. 2. The results shown in Table 3 indicate fairly good agreement, considering the faintness of the secondary and the difficulty in measuring its velocity. Our adopted solution combining all of the RVs is given in the last column, where we have allowed for arbitrary offsets of the DS and KPNO velocities relative to those measured with TRES, which are non-negligible in both cases. The TRES velocities dominate because of their considerably smaller uncertainties; the rms residuals (σA and σB) are listed at the bottom of the table along with other quantities of interest. We find the orbit to be slightly eccentric (e = 0.08802 ± 0.00023), consistent with predictions from theory for this system indicating a timescale for tidal circularization of ∼18 Gyr (e.g., Hilditch 2001).
A graphical representation of our fit appears in Figure 2 together with the observations and the RV residuals, the latter shown separately for each data set.
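For reference, the Keplerian model underlying such a fit is the standard radial-velocity equation; this is textbook material rather than anything specific to the solution in Table 3:

% Standard Keplerian radial-velocity model for component A:
%   gamma = center-of-mass velocity, K_A = velocity semi-amplitude,
%   nu = true anomaly (from Kepler's equation at each epoch),
%   omega = longitude of periastron, e = orbital eccentricity.
\[
  V_{A}(t) \;=\; \gamma \; + \; K_{A}\,
  \bigl[ \cos\bigl(\nu(t) + \omega\bigr) \; + \; e \cos\omega \bigr]
\]
% The secondary follows the same form with K_B and omega + 180 deg.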
Spectral disentangling
Although a number of eclipsing binaries containing M stars have been studied in the past, in very few cases is the metallicity of the system known, because of the difficulty of analyzing the spectra of late-type stars, which are dominated by strong molecular features. In V530 Ori the primary is a solar-type star, for which an abundance analysis would be straightforward except for the fact that its spectrum is contaminated at some level by the secondary. To remove this effect we have subjected our observations to spectral disentangling (Bagnuolo & Gies 1991; Simon & Sturm 1994; Hadrava 1995), by which we are able to reconstruct the spectra of the individual components for further analysis. Pavlovski & Hensberge (2005) and others have shown that disentangled spectra can yield reliable abundances (see also Pavlovski & Hensberge 2010; Pavlovski & Southworth 2012).
Notes to Table 3. - Tabulated quantities include the velocity offset ∆RV (TRES−KPNO) = −0.596 ± 0.080 km s−1. (a) Period and time of primary eclipse from Sect. 2. (b) Center-of-mass velocity on the reference system of the TRES instrument.

The application of the technique to V530 Ori pushes it to the limit, because of the extreme faintness of the secondary (2.5% fractional light in V, and even less toward the blue) and the modest SNRs of our spectra. Some previous studies have succeeded in similar situations with light ratios of ∼5% (e.g., Pavlovski et al. 2009; Lehman et al. 2013; Tkachenko et al. 2014) and even 1.5-2% (Holmgren et al. 1999; Mayer et al. 2013), but with spectra of considerably higher SNR than ours. We performed disentangling separately for each of our three data sets (TRES, DS, KPNO) because of their different spectral resolutions and wavelength coverage, discarding a few spectra with low SNR. We used the program FDBinary (Ilijić et al. 2004), which implements disentangling in the Fourier domain (Hadrava 1995). For the DS and KPNO observations we disentangled the entire spectral range available, and for TRES we restricted ourselves to the interval 4475-6760 Å to avoid regions with lower flux or telluric contamination. Special care was taken to select spectral stretches with both ends in the continuum, as required by the algorithm. Given the rich line spectrum, the wavelength regions we disentangled range in length from 30 Å to 150 Å. Renormalization of the disentangled spectra (see Pavlovski & Hensberge 2005; Lehman et al. 2013) was performed using the measured light ratios reported earlier from our spectroscopic analysis as well as those below from our light curve fits, interpolating or extrapolating linearly as needed.
The disentangled spectrum of each star gains in SNR over the individual exposures roughly as the square root of the number of spectra, N, times the average SNR of the individual spectra, weighted by that star's fractional light. The spectra resulting from the procedure have SNRs of 246 (primary) and 8 (secondary) for TRES (λ5800, N = 27).

Figure caption (disentangled secondary spectrum): a region containing strong Ca I lines, compared to a synthetic spectrum (top) with parameters Teff = 3900 K, log g = 4.65, and v sin i = 5 km s−1, close to those appropriate for the star; the model spectrum has been scaled to a light ratio of 4% relative to the primary.

We subjected the disentangled spectra of the primary component to a detailed analysis to determine the effective temperature and chemical abundance. A first estimate of Teff was made by fitting the Balmer line profiles, which depend primarily on temperature and very little on log g, via genetic minimization (Tamajo et al. 2011). Metal lines in the wings were masked out, and the surface gravity and v sin i were held fixed at the values reported below in Sect. 7. We obtained temperatures of 5840 ± 50 K and 5870 ± 45 K from Hα and Hβ in the TRES spectra, and 5780 ± 55 K from Hα in the KPNO spectra. These uncertainties may be underestimated, however, as we cannot rule out systematics from the normalization process and the merging of the echelle orders.
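As a rough consistency check, assuming the scaling SNR_dis ≈ √N ⟨SNR⟩ ℓ, with ℓ the star's fractional light, the quoted numbers hang together; the average per-exposure SNR below is inferred from those numbers, not stated in the text:

% Consistency check of the quoted disentangled SNRs for TRES (N = 27),
% assuming SNR_dis ~ sqrt(N) * <SNR> * ell (ell = fractional light):
\[
  \sqrt{27} \approx 5.2, \qquad
  \langle \mathrm{SNR} \rangle \approx \frac{246}{5.2 \times 0.97} \approx 49,
  \qquad
  \mathrm{SNR}_{\mathrm{dis,B}} \approx 5.2 \times 49 \times 0.033 \approx 8,
\]
% consistent with the quoted values of 246 (primary) and 8 (secondary)
% for fractional lights of roughly 0.97 and 0.033 near 5800 A.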
We then used the uclsyn package (Smalley et al. 2011) to fine-tune the temperature and set the microturbulent velocity ξt from the numerous Fe I lines, and to determine detailed abundances based on the measured equivalent widths. The surface gravity was held fixed as above. uclsyn relies on synthetic spectra computed under local thermodynamic equilibrium (LTE) using ATLAS9 model atmospheres (Kurucz 1979). Excitation equilibrium was imposed to determine Teff from the Fe I lines, with the selection of lines and their gf values taken from the recent critical compilation of Bensby et al. (2014). Microturbulence was determined by enforcing no dependence between the abundances and the reduced equivalent widths. We obtained Teff = 5890 ± 80 K and ξt = 1.2 ± 0.1 km s−1 from the TRES spectra, and Teff = 5970 ± 110 K and ξt = 1.7 ± 0.1 km s−1 from the red KPNO spectra. We attribute the discrepancy in the ξt values to the greatly different wavelength coverage of the TRES and KPNO spectra. The DS spectra do not permit independent estimates of these parameters because of their very limited wavelength coverage, so they were fixed at values of 5900 K and 1.2 km s−1. We collect the various temperature determinations for the primary star in Table 4, along with others described later, noting that they are not all completely independent, as some of them rely on the same sets of spectra.
Detailed abundances on the scale of Asplund et al. (2009) were obtained for 21 species from the TRES spectra, as listed in Table 5, and for somewhat fewer species from the DS and KPNO spectra. The uncertainties account for errors in Teff and ξt of 100 K and 0.1 km s−1, respectively. The agreement between the three instruments is excellent, the average differences for all elements taken together being TRES − DS = +0.022 ± 0.014 dex (10 lines in common), TRES − KPNO = −0.011 ± 0.032 dex (7 lines), and DS − KPNO = −0.022 ± 0.029 dex (4 lines). In particular, the iron abundances based on Fe I are very consistent. Those from Fe II are somewhat less reliable and are based on far fewer lines. We adopted the weighted average of the Fe I values, [Fe/H] = −0.12 ± 0.08, with a conservative uncertainty. The abundances of most other elements in V530 Ori tend to be subsolar as well. This includes the α elements, which are therefore not enhanced in this system.
PHOTOMETRIC OBSERVATIONS
Two sets of V -band images of V530 Ori were obtained with independent robotic telescopes operating at the University of Arkansas (URSA WebScope) and near Silver City, NM (NFO WebScope) from 2001 January to 2012 February. A description of the telescopes and instrumentation, as well as the data acquisition and reduction procedures may be found in the papers by Grauer et al. (2008) and Sandberg Lacy et al. (2012). We collected a total of 5137 URSA observations and 3024 NFO observations providing complete phase coverage. The comparison ('comp') and check ('ck') stars were HD 294597 (TYC 4786-1469-1; V = 10.43) and HD 294593 (TYC 4786-2281-1; V = 9.56). The differential URSA measurements (in the sense variable minus comp) are listed in Table 6; those from the NFO appear in Table 7 (computed as variable minus 'comps', where comps is the magnitude corresponding to the sum of the fluxes of the comp and ck stars). The precision of these measurements is about 7 milli-magnitudes (mmag) for URSA and 5 mmag for NFO. A graphical representation of these observations is shown later in Sect. 6.
Differential photometric measurements of V530 Ori were also gathered with the Strömgren Automatic Telescope at ESO (La Silla, Chile), during several campaigns from 2001 January to 2006 February. A total of 720 observations were made in the uvby bands, using the three comparison stars HD 39438 (F5 V), HD 39833 (G0 III), and HD 40590 (F6 V). The typical precision per differential measurement ranges from 7 mmag in y to 11 mmag in u, and the phase coverage is complete. The reduction of this material followed procedures analogous to those described by Clausen et al. (2008). We report these observations in Table 8, and show them graphically in Figure 4. In addition to the light curves, we obtained homogeneous standard uvbyβ indices with the same telescope on dedicated nights in which V530 Ori and the comparison stars were observed together with a large sample of standard stars. The resulting indices outside of eclipse are V = 9.861 ± 0.008, b − y = 0.408 ± 0.005, m 1 = 0.199 ± 0.009, c 1 = 0.296 ± 0.010, and β = 2.589 ± 0.007.
Note to Table 5. - Columns list the atomic number, the element and ionization degree, the number of spectral lines measured and the abundance relative to the Sun from each instrument, and finally the reference photospheric solar values from Asplund et al. (2009). Abundances of other elements based on a single line are considered less reliable and are not listed.
LIGHT CURVE ANALYSIS
The V-band and uvby data of V530 Ori were analyzed using the JKTEBOP code of John Southworth (Nelson & Davis 1972; Popper & Etzel 1981; Southworth et al. 2004), which is adequate for relatively uncomplicated, well-detached systems such as this. The fitted light-curve parameters are the central surface brightness of the smaller, fainter, cooler, and less massive star (secondary) relative to the other (JB); the sum of the relative radii of the primary and secondary in units of the semi-major axis (rA + rB); the radius ratio (k ≡ rB/rA); the inclination angle of the orbit (i); the orbital eccentricity and longitude of periastron of the primary (e and ω); and the linear limb-darkening coefficients (uA and uB). The ephemeris used in the solutions was that of Sect. 2, and the mass ratio was held fixed at the spectroscopic value q = 0.5932. Because the secondary eclipse is so shallow, the limb-darkening parameters of the smaller star were fixed at theoretical values based on an average of the predictions of Van Hamme (1993), Díaz-Cordovés et al. (1995), Claret (2000), and Claret & Hauschildt (2003), while the values for the larger star were allowed to vary. Gravity darkening exponents based on the components' temperatures were taken from theory (Claret 1998). The light curve modeling was carried out using the Levenberg-Marquardt option in JKTEBOP, but the results and their uncertainties were checked by performing a Monte Carlo simulation study, and the two methods were found to agree well.
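Since the fit parametrizes the geometry through the sum of the relative radii and their ratio, the individual fractional radii follow algebraically; this is just a rearrangement of the definitions above:

% Individual fractional radii from the fitted sum (r_A + r_B) and
% ratio k = r_B / r_A used in the light curve solutions:
\[
  r_{A} \;=\; \frac{r_{A} + r_{B}}{1 + k},
  \qquad
  r_{B} \;=\; k \, r_{A} \;=\; \frac{k \,( r_{A} + r_{B})}{1 + k}.
\]
% Absolute radii then follow as R = r a, with a the semi-major axis
% from the spectroscopic solution.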
Preliminary fits showed that the values of i, e, and ω were very consistent among the data sets, so weighted mean values were adopted (i = 89°.78 ± 0°.08, e = 0.0862 ± 0.0010, ω = 130°.08 ± 0°.14) and held fixed for the final solutions. The results for the different data sets are presented in Table 9, where ℓA and ℓB are the light fractions of the components at orbital quadrature, σ is the rms residual in mmag, and N is the number of observations. The fits for the URSA and NFO data near the primary and secondary eclipses are illustrated in Figure 6 and Figure 7, respectively. An illustration of the correlation between some of the main variables is shown in Figure 8, based on a Monte Carlo simulation with 1000 trials using the URSA data set.
Our solutions consistently indicate that the secondary eclipse (only 0.028 mag deep in V ) is total, with a duration of totality of about 70 minutes. The primary eclipse is annular. Trials were made allowing for the possible presence of third light, but the resulting values were not significantly different from zero, so no third light was allowed in the final solutions. Additional trials were carried out using a non-linear limb-darkening law of the logarithmic type (Claret 2000), and also a quadratic law, but we found the residual variances of the fits to be always worse than with the linear limb-darkening law. The resulting fitted orbital parameters were not significantly different from those with the linear law, except that the logarithmic law preferred a primary relative radius value (r A ) about 1% larger, and the quadratic law gave a value about 1.9% larger. Because the fit to the data is superior for the linear law, we have chosen those results for the remainder of this study. Average values of the geometric properties used for computing the absolute dimensions are listed in the last column of Table 9.
ABSOLUTE DIMENSIONS
Masses and radii for the components of V530 Ori computed from the information in Table 3 and Table 9 are presented in Table 10, and are determined to better than 0.7% in the case of the masses and 1.3% for the radii. Based on the three detailed and independent chemical analyses in Sect. 4, the average metallicity of V530 Ori (assuming the primary and secondary to have the same composition) is determined to be [Fe/H] = −0.12 ± 0.08. A photometric estimate in good agreement with this value was obtained using the Strömgren indices in Sect. 5 weight-averaged with those measured by Lacy (2002), along with the calibration in Eq. 14 by Olsen (1984). The result is [Fe/H] = −0.10 ± 0.13, which should be unaffected by the very faint secondary. Use of the calibration by Holmberg et al. (2007) yields a somewhat lower value of [Fe/H] = −0.23 ± 0.09, still in agreement with the more reliable spectroscopic determination.
The procedure described in Sect. 3 to determine template parameters for deriving RVs can be refined by interpolating between grid points in our libraries of synthetic spectra, in order to determine more precise values of Teff and v sin i. The v sin i value for the primary obtained in this way, 9 ± 1 km s−1, is consistent with what is expected if the star were rotating pseudo-synchronously (see Table 10; Hut 1981), and is in agreement with predictions from theory suggesting a synchronization timescale of only ∼10⁷ yr (Sect. 3), much shorter than the system age estimated below. However, the resulting temperature for that star from this method depends on the metallicity adopted, owing to strong correlations between those two properties. We performed the determinations with [Fe/H] values of 0.0 and −0.5, and then interpolated to [Fe/H] = −0.12, separately for our DS, TRES, and KPNO spectra. The Teff values obtained for the primary are 5880 K, 5880 K, and 5820 K, respectively, which are similar to those derived from disentangling (Sect. 4). They have estimated uncertainties of 100 K. The accuracy of our various (non-independent) temperature determinations for the primary star, which we have summarized in Table 4, is likely limited by systematic effects not reflected in the formal uncertainties. For the analysis that follows we have adopted a consensus temperature for the primary of 5890 ± 100 K, in which the uncertainty is a conservative estimate approximately equal to half the spread in the spectroscopic determinations. The secondary temperature was inferred from this value and the temperature difference, ∆Teff. The latter may be derived from the central surface brightness ratio JB (Table 9) using an absolute visual flux calibration. As this procedure is entirely differential, the resulting temperature difference, ∆Teff = 2010 ± 70 K, is typically better determined than the individual temperatures. The adopted Teff value for the secondary is then 3880 ± 120 K. These stellar temperatures correspond approximately to spectral types of G1 and M1 for the primary and secondary. We note, finally, that the small differences between these final stellar properties and the template parameters adopted in Sect. 3 for the RV determinations have a negligible effect on those measurements.

The reddening towards V530 Ori was estimated in several ways. One estimate comes from the Strömgren photometry and the calibration by Crawford (1975); additional estimates were obtained from Galactic reddening models, including those of Drimmel et al. (2003) and Amôres & Lépine (2005), for an assumed distance of 100 pc. The results, 0.071, 0.039, 0.052, 0.019, and 0.030, were averaged with the previous one to yield an adopted reddening of E(B − V) = 0.045 ± 0.020, with a conservative uncertainty. A consistency check on the effective temperature adopted above may be obtained from the standard photometry available for V530 Ori from various catalogs and other literature sources (Tycho-2, Høg et al. 2000; 2MASS, Cutri et al. 2003; TASS, Droege et al. 2006; APASS, Henden et al. 2012; Lacy 1992b; Lacy 2002; and Sect. 5). From eleven appropriately de-reddened, non-independent color indices and the calibrations of Casagrande et al. (2010) (for the adopted spectroscopic metallicity) we obtained Teff = 5800 ± 100 K, which corresponds to the combined light of the two stars, as the secondary has a non-negligible influence on the photometry, especially at the redder wavelengths.
Individual temperatures for the components may then be inferred using the same absolute visual flux calibration, and are Teff = 5920 K for the primary and 3900 K for the secondary, with estimated uncertainties of 100 K. The primary value is consistent with our earlier spectroscopic estimates (Table 4).
The distance to V530 Ori is listed also in Table 10, along with other derived properties; it relies on an average out-of-eclipse brightness of V = 9.886 ± 0.004 based on the literature sources cited above, corrected for extinction using A(V ) = 3.1E(B − V ). Separate distance calculations for the two components yield consistent results.
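A minimal sketch of such a distance calculation for the primary is given below, following the standard luminosity-based route; the bolometric correction is an illustrative assumption typical for an early-G dwarf, not a value taken from the paper.

# Minimal sketch: photometric distance to V530 Ori A from the measured
# radius, temperature, and system photometry quoted in the text.
# BC_V is an assumed, typical bolometric correction for a G1 dwarf; the
# paper's actual calibration choice is not reproduced here.
import math

R_A = 0.980          # radius, solar units
TEFF_A = 5890.0      # effective temperature, K
TEFF_SUN = 5772.0    # IAU nominal solar effective temperature, K
MBOL_SUN = 4.74      # solar absolute bolometric magnitude
V_SYSTEM = 9.886     # mean out-of-eclipse system V magnitude
A_V = 3.1 * 0.045    # extinction from the adopted E(B-V)
L_FRAC_A = 0.975     # approximate V-band light fraction of the primary
BC_V = -0.06         # assumed bolometric correction for a G1 dwarf

# Luminosity from the Stefan-Boltzmann law, in solar units.
lum = R_A**2 * (TEFF_A / TEFF_SUN) ** 4

# Absolute bolometric and visual magnitudes of the primary.
m_bol = MBOL_SUN - 2.5 * math.log10(lum)
m_v_abs = m_bol - BC_V

# Apparent V of the primary alone (removing the secondary's light),
# corrected for extinction, then the distance from the distance modulus.
v_primary = V_SYSTEM - 2.5 * math.log10(L_FRAC_A)
dist_pc = 10 ** ((v_primary - A_V - m_v_abs + 5.0) / 5.0)

print(f"L = {lum:.2f} Lsun, M_V = {m_v_abs:.2f}, d = {dist_pc:.0f} pc")

Under these assumptions the result is roughly 100 pc, consistent with the distance assumed in the reddening estimates above.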
Standard models
Our knowledge of the metallicity of V530 Ori presents an opportunity for a stringent test of stellar evolution models against our highly accurate mass, radius, and temperature measurements, with one less free parameter than is common in these types of comparisons. This is particularly important here because the system contains an M star, for which abundance analyses are usually very challenging and generally unavailable. A first test is shown in Figure 9, using models from the Yonsei-Yale series (Yi et al. 2001; Demarque et al. 2004). These models are intended for solar-type stars and adopt gray boundary conditions between the interior and the photosphere, which are adequate for stars more massive than about 0.7 M⊙ but become less realistic for lower-mass stars such as the secondary of V530 Ori. Consequently, we compare them only against the primary, which is very similar to the Sun. As shown in the figure, an evolutionary track for the measured mass and metallicity of the star is in near perfect agreement with its temperature and surface gravity, at an age of about 3.3 Gyr. The star is approaching the half-way point of its main-sequence phase. Consistent with this old age, there is no sign of the Li I λ6708 absorption line in the disentangled spectra of either star. Figure 10 shows a comparison with model isochrones from the Dartmouth series (Dotter et al. 2008), which are appropriate both for solar-type and for lower-mass stars. A 3 Gyr isochrone computed for the metallicity of the system reproduces the radius of the primary star at its measured mass, but underestimates the size of the secondary by about 2.5% (see the inset in the top panel of the figure). The same isochrone is consistent with the temperature of the primary, within its uncertainty, but slightly overestimates that of the secondary. Similar anomalies in radius and temperature have been seen in many other M dwarfs, and are attributed to the effects of stellar activity and/or magnetic fields (for a recent review of this phenomenon see Torres 2013, and references therein). One such system of M dwarfs is YY Gem (Torres & Ribas 2002; Torres et al. 2010), whose two identical components happen to have virtually the same mass and Teff as the secondary of V530 Ori, but a radius that is 5% larger. While age and composition differences may be part of the explanation, differences in the activity levels (YY Gem being much more active) are likely to play a significant role as well.
Several other series of models have been published in recent years that incorporate realistic physical ingredients appropriate for low-mass stars such as the secondary of V530 Ori (non-gray boundary conditions, improved high-density/low-temperature equations of state). These include the PARSEC models from the Padova series (Chen et al. 2014), calculations from the Yale group (Spada et al. 2013), and from the Pisa group (Dell'Omodarme et al. 2012). Older models that are also appropriate and are still widely used are those from the Lyon group (Baraffe et al. 1997, 1998). Figure 11 presents a comparison in the log g vs. T_eff diagram of the measured properties for V530 Ori B against evolutionary tracks from most of the above models for a mass of 0.6 M⊙, conveniently very close to the measured mass of 0.5955 M⊙. Tracks are shown for ages from 140 Myr to 10 Gyr, with open circles marking the predicted properties of the secondary at the best-fit age for the primary in each model. We include also a 0.5955 M⊙ model from the Dartmouth series, for reference.

[Figure 10 caption: Comparison with the model isochrones of Dotter et al. (2008). Top: mass-radius diagram showing isochrones from 1 to 6 Gyr for the measured metallicity of [Fe/H] = −0.12, with the solid line representing the isochrone that best fits the primary star (3 Gyr). The inset shows an enlargement around the secondary, which is seen to be larger than predicted. Bottom: mass-temperature diagram with the same isochrones as above.]

[Figure 11 caption: Properties for the low-mass secondary of V530 Ori (solid circle with error bars) shown against evolutionary tracks for a mass of 0.6 M⊙, similar to that measured for the star, and ages of 140 Myr to 10 Gyr. Models represented are those from Lyon (Baraffe et al. 1997, 1998), Yale (Spada et al. 2013), and Pisa (Dell'Omodarme et al. 2012), interpolated to the measured metallicity of the system or at the nearest composition available (see text). Also shown for reference is a track from the Dartmouth series. Open circles on each track mark the properties of the secondary at the age predicted by models of the primary. In all cases the models underestimate the secondary radius (i.e., they overestimate log g) and predict temperatures that are too hot.]

We point out, however, that such comparisons are not always straightforward, or even possible in some cases, due to coarseness of the model grids, limitations in the set of parameters available (metallicity, mixing length parameter), and the need to interpolate among existing models, which most likely limits the accuracy. In particular, we have not compared against the Padova models as only isochrones (but not yet evolutionary tracks) are available. The Pisa track shown in Figure 11 is for the highest metallicity available (Z = 0.01), which is marginally lower than we measure for V530 Ori. For the Lyon models interpolation to the measured metallicity of [Fe/H] = −0.12 is only possible for a mixing length parameter of α_ML = 1.0, whereas all other models adopt a solar-calibrated value of α_ML. Additionally, there are differences in the interior compositions adopted in all these calculations, and in many other details that may explain why the predictions differ from model to model, though a thorough discussion of these issues is beyond the scope of this paper. Nevertheless, a common pattern seen in the figure is that all models overestimate the temperature of the secondary star by 4-8%, and also overestimate its surface gravity, which means they underestimate the radius (by about 2 to 4%).
These discrepancies are in the same direction as found previously for many other low-mass stars.
Additional differences between models and observations for V530 Ori are seen when comparing the secondary/primary flux ratios we estimated spectroscopically and photometrically (Sect. 3 and Sect. 6) against predictions for stars with the exact masses we measure. We illustrate this in Figure 12, in which the predictions in several standard photometric passbands are based on the same 3 Gyr Dartmouth isochrone that provided the best fit to the mass and radius of the primary in Figure 10. Models systematically underestimate all of the measured flux ratios by roughly a factor of two, with the absolute deviations increasing toward longer wavelengths. This is not entirely unexpected, given that the models also fail to match the radius and temperature of the secondary star, as well as its bolometric luminosity, which is overestimated. Interestingly, we find that arbitrarily increasing the secondary mass to M B = 0.64 M ⊙ leads to predictions that agree nearly perfectly with all of the measured flux ratios (bottom panel of Figure 12), from Strömgren u to the value measured from our KPNO spectra at ∼6410Å, close to the R C band. This is unlikely to be a coincidence. We note, though, that a mass for the secondary of 0.64 M ⊙ (nearly 7% larger than measured, or ∼18σ) is implausibly large given our observational uncertainties, and would not make the fit to the other global properties (R, T eff ) any better. The reason for the underpredicted ℓ B /ℓ A values may be related to deficiencies in the temperature-color transformations adopted in the Dartmouth models, which are based on PHOENIX model atmospheres (Hauschildt et al. 1999a,b), and which are known to degrade rapidly at optical wavelengths for cooler stars. Even so, one might expect the predictive power of these models to be better when considering flux ratio differences between one wavelength and another (e.g., the difference between [ℓ B /ℓ A ] y and [ℓ B /ℓ A ] D51 ), because those rely on theory only in a differential sense. This is indeed what we see in Figure 12, and we take this to represent indirect support for the accuracy of our light curve solutions in Sect. 6 (performed independently in each passband), and therefore of the accuracy of the measured stellar radii.
Magnetic models
A series of stellar models were computed using the magnetic Dartmouth stellar evolution code (Feiden & Chaboyer 2012, 2013) to test the idea that magnetic fields are responsible for the observed anomalies between the secondary in V530 Ori and stellar models. The aim of the present analysis is to first determine whether magnetic models are able to provide a consistent solution for the two components of V530 Ori, and then, if a consistent solution is identified, to establish whether the conditions presented by the models are physically plausible. Prior to implementing magnetic fields in the stellar evolution calculations, as a check we re-assessed the performance of the standard (i.e., non-magnetic) models from the magnetic Dartmouth code owing to small differences with the original Dartmouth models of Dotter et al. (2008). Comparisons were carried out in the age-radius and age-T_eff planes for mass tracks computed at the precise masses and metallicity of the V530 Ori stars.

[Figure 13 caption: Left: Dartmouth models for the metallicity of V530 Ori compared against the measured radii and temperatures of the components, represented by the horizontal bands. Standard (non-magnetic) evolutionary tracks for the precise masses of the stars are drawn with solid lines, and models incorporating magnetic fields with a rotational dynamo prescription are drawn with dotted lines. Field strengths for the secondary are Bf = 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 kG, and result in increasing departures from the standard models. A magnetic model with a field strength of 170 G is shown for the primary, but is nearly indistinguishable from the corresponding standard model. The best fit age range is shown by the vertical band. Right: relative changes in radius (δR/R) and effective temperature (δT_eff/T_eff) for the secondary as a function of the strength of the magnetic field (see text). The best fit value is marked with a filled diamond.]

Figure 13 shows that properties of the primary star are well reproduced by the model (represented with a solid line) between 2.7 and 3.5 Gyr, yielding an age of 3.1 ± 0.4 Gyr, similar to our earlier finding. As discussed before, the properties of the secondary are not reproduced by the corresponding standard model. Instead, theory predicts a radius that is 3.7% too small and a temperature that is 4.8% too hot compared to observations. Given that standard models match the properties of the primary to a large degree, we began our magnetic model analysis by assuming only the secondary is affected by the presence of a magnetic field.
A small grid of magnetic stellar models was computed at a fixed mass (0.596 M⊙) and metallicity ([Fe/H] = −0.12) for V530 Ori B. Two procedures were used for modeling the influence of the magnetic field on convection that are described by Feiden & Chaboyer (2013). These two procedures were designed to roughly mimic the effects of two different dynamo actions: a rotational or shell dynamo (α-Ω) and a turbulent or distributed dynamo (α²). All models utilized a dipole radial profile as the influence of the magnetic field is only weakly dependent on the choice of radial profile for stars with a radiative core and convective envelope (Feiden & Chaboyer 2013). For models using the rotational dynamo procedure, values of the average surface magnetic fields were Bf = 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 kG, while for the turbulent dynamo the values were Bf = 0.5, 0.6, 0.7, 0.8, 1.0, 2.0, and 3.0 kG, in which B is the photospheric magnetic field strength and f the filling factor. Corresponding mass tracks are shown with dotted lines in Figures 13 and 14, with the relative changes in radius (δR/R) and temperature (δT_eff/T_eff) of the secondary indicated on the right as a function of the strength of the magnetic field. Results show that magnetic models of V530 Ori B can be made to reproduce the observed properties assuming either dynamo procedure, with the rotational dynamo suggesting Bf_B = 2.1 ± 0.4 kG and the turbulent dynamo giving Bf_B = 1.3 ± 0.4 kG. These values were calculated by extracting the properties of each magnetic model computed at an age of 3.1 Gyr, and generating curves using a cubic spline interpolation that give the model radius and model temperature difference between the primary and secondary as functions of Bf (right panels of Figures 13 and 14). The spacing of the magnetic field strength was 0.05 kG along the interpolated curves. We then computed the χ² value at each point along the interpolated curve and took the resulting minimum as the best-fit Bf. For completeness, we note that the minimum χ² value we found is χ²_min = 0.4. Approximate errors for the permitted model Bf were determined by satisfying the condition χ²(Bf) = χ²_min + 1. As shown earlier, the primary star is active as well and may be similarly influenced by its magnetic field, even though standard models seem to be able to match the observed properties without that effect. To test this, we generated magnetic models for the primary star guided by an estimate of the field strength, described in the next section, of Bf_A = 170 G. Results using the rotational dynamo formulation are shown in Figure 13, but produce only a negligible departure from the standard model mass track. Figure 14, on the other hand, demonstrates that the turbulent dynamo model causes a greater level of radius inflation and temperature suppression in the primary. Temperature suppression is such that agreement is nearly lost between the model and the observations. The age prediction is reduced to 2.4 ± 0.4 Gyr, and magnetic models of the secondary require moderately stronger Bf values with the turbulent dynamo than in the previous case. Performing the same procedure as before to generate the best fit value, we obtained Bf_B = 1.7 ± 0.3 kG. However, in this case we found χ²_min = 3.5, indicating the final fit is poor. This is driven by the fact that the temperature difference is more difficult to fit given the significantly lower temperature of the primary model with a magnetic field.
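The fitting procedure is straightforward to reproduce. In the sketch below, the Bf grid is the rotational-dynamo grid from the text, but the tabulated model responses at 3.1 Gyr and the observational error bars are illustrative placeholders (the actual model output is not given here), so only the method itself, cubic-spline interpolation at 0.05 kG spacing, a χ² minimum, and Δχ² = 1 errors, should be taken literally.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Anomalies the magnetic models must supply (from the text): radius 3.7%
# too small and temperature 4.8% too hot relative to standard models.
obs = np.array([0.037, -0.048])          # (dR/R, dT/T) required
err = np.array([0.013, 0.010])           # assumed error bars, illustrative

# Rotational-dynamo grid; model responses at 3.1 Gyr are synthetic
# placeholders with the qualitative trend (inflation grows with Bf).
Bf = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])            # kG
dR = np.array([0.004, 0.012, 0.022, 0.034, 0.048, 0.064])
dT = np.array([-0.008, -0.019, -0.032, -0.046, -0.061, -0.077])

# Cubic-spline interpolation of each response, sampled every 0.05 kG.
grid = np.arange(Bf[0], Bf[-1] + 1e-9, 0.05)
model = np.vstack([CubicSpline(Bf, dR)(grid), CubicSpline(Bf, dT)(grid)]).T

chi2 = (((model - obs) / err) ** 2).sum(axis=1)
best = grid[np.argmin(chi2)]
ok = grid[chi2 <= chi2.min() + 1.0]       # delta-chi2 = 1 interval
print(f"best Bf = {best:.2f} kG, range {ok.min():.2f}-{ok.max():.2f} kG")
```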
Magnetic field strengths: empirical estimates
Observational evidence for activity in V530 Ori is clear in the case of the primary, and although no direct signs of it are seen for the very faint secondary, we expect that star to be active as well. Approximate magnetic field strengths for both stars were estimated as follows. Saar (2001) has shown there is a power-law relationship between Bf and the Rossby number, Ro ≡ P rot /τ c , where P rot is the rotation period of the star and τ c the convective turnover time. The Rossby number for the primary may be estimated by noting that our spectroscopic v sin i measurement suggests it is rotating either synchronously or pseudo-synchronously. We will assume the latter here, although the difference is very small (see Table 10). This leads to a rotation period of P rot ≈ 5.84 days based on the measured orbital eccentricity (see Hut 1981). For τ c we must rely on theory. Since the calibration of Saar (2001) used convective turnover times taken from the work of Gilliland (1986), we have done the same here for consistency, and adopted (based on the temperature of 5890 K) τ c = 13.8 ± 2 days, with a conservative uncertainty. The resulting Rossby number for V530 Ori A is Ro = 0.423 ± 0.067. A similar calculation for the secondary gives Ro = 0.116 ± 0.005 based on τ c = 50.3 ± 2 days (Gilliland 1986), from its temperature of 3880 K, and assuming pseudo-synchronous rotation (justified in view of the very short timescale for synchronization compared to the age of the system; see Sect. 3). The Saar (2001) relation then projects a magnetic field strength for the primary of Bf A = 170 ± 140 G, and a value for the secondary of Bf B = 830 ± 650 G, where the uncertainties account for all observational errors as well as the scatter of the calibration. The field strength for the secondary is not far from the values required by the models in the previous section, suggesting the theoretical predictions are at least plausible.
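The rotation and Rossby-number arithmetic can be reproduced as follows. The orbital period and eccentricity are assumed round values consistent with the quoted P_rot ≈ 5.84 days (Table 10 is not reproduced above); the pseudo-synchronization formula is the standard one from Hut (1981).

```python
import math

def pseudo_sync_period(P_orb, e):
    """Pseudo-synchronous rotation period (Hut 1981)."""
    num = 1 + 7.5 * e**2 + 5.625 * e**4 + 0.3125 * e**6
    den = (1 + 3 * e**2 + 0.375 * e**4) * (1 - e**2) ** 1.5
    return P_orb * den / num

# Assumed orbital elements: P_orb ~ 6.11 d and e ~ 0.088 reproduce the
# quoted pseudo-synchronous period.
P_rot = pseudo_sync_period(6.11, 0.088)
print(f"P_rot ~ {P_rot:.2f} d")           # ~5.84 d

# Rossby numbers Ro = P_rot / tau_c with the convective turnover times
# adopted in the text (Gilliland 1986).
for name, tau in [("A", 13.8), ("B", 50.3)]:
    print(f"Ro_{name} = {P_rot / tau:.3f}")   # 0.423 and 0.116
```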
A consistency check on the empirically estimated Bf values may be obtained by relating these field strengths to X-ray luminosities, and comparing them against a measure of the total X-ray emission from V530 Ori detected by the ROSAT satellite. Indeed, Pevtsov et al. (2003) showed in a study of magnetic field observations of the Sun and active stars that there is a fairly tight power-law relationship between the X-ray luminosity and the total unsigned surface magnetic flux, Φ = 4πR 2 Bf , which is valid over many orders of magnitude. An updated relation restricted to dwarf stars was presented by Feiden & Chaboyer (2013). Using this latter relation along with the measured stellar radii we obtain log L X,A = 28.63 ± 0.59 and log L X,B = 29.14 ± 0.57 (with L X in erg s −1 ). The sum of the X-ray luminosities corresponds to log L X,A+B = 29.26 ± 0.46. The entry for V530 Ori in the ROSAT All-Sky Survey Faint Source Catalog (Voges et al. 2000) lists a count rate of 0.0151 ± 0.0072 cts s −1 (0.1-2.4 keV) and a hardness ratio of HR1 = −0.43 ± 0.37 for the system, from a 465 s exposure. The corresponding total X-ray luminosity computed using the energy conversion factor given by Fleming et al. (1995) and the distance in Table 10 is log L X (ROSAT) = 29.06 ± 0.33. The good agreement between this measurement and the sum of the individual X-ray luminosities, log L X,A+B , may be taken as an indication of the accuracy of the Bf values reported above, even though their formal errors are large.
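A minimal sketch of the combined X-ray luminosity check, and of the unsigned magnetic flux Φ = 4πR²Bf entering the luminosity-flux relation, follows; the stellar radii here are assumed round values rather than the measured ones.

```python
import math

# Predicted individual X-ray luminosities quoted above (log erg/s).
logLA, logLB = 28.63, 29.14

# The unresolved ROSAT measurement sees the sum of the two components.
logL_sum = math.log10(10**logLA + 10**logLB)
print(f"log L_X(A+B) = {logL_sum:.2f}")   # 29.26, vs ROSAT 29.06 +/- 0.33

# Total unsigned surface magnetic flux Phi = 4*pi*R^2*Bf; the radii below
# are assumed round values in solar units, not the measured ones.
Rsun = 6.957e10                            # cm
for name, R, Bf in [("A", 0.98 * Rsun, 170.0), ("B", 0.59 * Rsun, 830.0)]:
    print(f"Phi_{name} = {4 * math.pi * R**2 * Bf:.2e} Mx")
```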
DISCUSSION
To the extent that our empirical magnetic field estimates above represent the actual surface field strengths of the stars in V530 Ori, it seems natural to require the models for both components to account for these effects. However, the way in which the influence of magnetic fields on the stellar properties is treated in the models seems to make a significant difference, particularly for the primary star, and it is not at all clear which formulation is more realistic. Given that this issue is at the heart of the long-standing problem of radius inflation and temperature suppression in cool stars, a careful consideration of the physical assumptions is in order.
Based strictly on the agreement with our empirical estimates, a scenario whereby the primary star's magnetic field is generated by a "rotational" dynamo and the secondary by a more "turbulent" dynamo would seem to be preferred. In this case, the magnetic field of the primary draws its energy largely from kinetic energy of (differential) rotation, with the magnetic field rooted in a strong shear layer below the convection zone (i.e., the tachocline), analogous to the mechanism believed to drive the solar dynamo (Parker 1993;Charbonneau & MacGregor 1997). Convection is then inhibited by the stabilizing effect that a (vertical) magnetic field has on a fluid (Gough & Tayler 1966;Lydon & Sofia 1995). Given the similarity of V530 Ori A to the Sun, the adoption of this magneto-convection formulation seems justified. With a surface magnetic field strength Bf A = 170 G, the influence of a magnetic field on the flow of convection is minimal and the structure of the model is unaffected (see Figure 13), so that the magnetic model produces results consistent with the non-magnetic model.
Concerning the secondary, both magnetic field formulations yield agreement with the stellar properties (T eff and R) at an age defined by the properties of the primary (assuming the discussion above holds). At face value the turbulent dynamo approach requires a field strength ( Bf = 1.3 ± 0.4 kG) that is closer to the empirically estimated value of Bf B = 0.83 ± 0.65 kG than the alternate approach with a rotational dynamo (which predicts Bf = 2.1 ± 0.4 kG). The accuracy of the empirical value is difficult to assess and depends strongly on the reliability of the Saar (2001) calibration. The turbulent dynamo formulation simplistically assumes that the energy for the magnetic field is provided by kinetic energy available in the larger scale convective flow. Convection is then made less efficient as energy is diverted away from convecting fluid elements thereby impeding their velocity and thus reducing the total amount of convective energy flux (e.g., Durney et al. 1993;Chabrier & Küker 2006;Browning 2008). Precisely how this conversion is achieved (e.g., through turbulence, helical convection, or feedback generated by the Lorentz force) is not explicitly defined in the stellar models.
While consistency between the estimated surface magnetic field strength and that required by the models is encouraging, it is not clear that the dynamo mechanism at work in V530 Ori B should be any different from that in V530 Ori A. Both stars possess a radiative core and a convective outer envelope and thus, presumably, a stable tachocline in which to produce a magnetic field through an interface dynamo. Furthermore, the presence of a stable tachocline is not necessarily a strict condition for a solar-like dynamo (Brown et al. 2010). Therefore, there is no reason a priori to believe that the stars should have a different dynamo mechanism. If we instead assume that the primary also has a dynamo driven by convection, then the structural changes imparted by the magnetic field become significant, even for a modest 170 G magnetic field at the surface. Changes induced on the primary are such that models of the primary and secondary cannot be made to agree at the same age, leaving us with precisely the same problem that we were looking to correct with the magnetic models.
A possible reason to expect a different dynamo mechanism would be if differential rotation were somehow suppressed in the secondary star. Quenching of differential rotation has been observed in detailed magnetohydrodynamic simulations as a result of Maxwell stresses produced by an induced magnetic field (Browning 2008). On the other hand, simulations of a Sun-like star with an angular velocity similar to V530 Ori A do not demonstrate this quenching (Brown et al. 2010), so we may posit that the primary star has a dynamo driven by differential rotation, as we initially supposed. Although the two components of V530 Ori are likely rotating with a similar angular velocity, convective velocities in the secondary are slower, leading to convective flows that are more susceptible to the influence of the Coriolis force. This could then drive strong magnetic fields that also quench the differential rotation. Unfortunately, assessing the level of differential rotation on the secondary is not currently possible. Browning (2008) predicts that when differential rotation is quenched, the large scale axisymmetric component of the magnetic field should account for a larger fraction of total magnetic energy. Using the empirical scaling relations of Vidotto et al. (2014), we estimated the large scale magnetic field component on each star using our derived X-ray luminosities. We find that the large scale component of the magnetic field (taken to be perpendicular to the line of sight) makes up 6% and 12% of the total magnetic energy, corresponding to Bf ⊥ = 10 G and 100 G for V530 Ori A and B, respectively. While the trend is consistent with the secondary having a more significant large scale field component (in terms of total magnetic energy contribution), it is not possible to say whether this is the result of different dynamo actions.
In summary, while many critical aspects of the problem are still not understood, the arguments above seem to support a picture in which the models are able to match the measured temperatures and radii of the components with the magnetic field playing little role in changing the structure of the primary star (i.e., consistent with it having a rotational dynamo). The nature of the magnetic field on the secondary is less clear, with the observations perhaps favoring a distributed (turbulent) dynamo over a rotational one, but not at a very significant level.
Other consequences of magnetic fields on structure of the stars in V530 Ori appear small: the predicted apsidal motion constant corresponds to an apsidal motion period of U = 19,400 yr for a magnetic secondary (both dynamo types), not very different from the value of 19,100 yr computed with no magnetic fields. The observed value from Sect. 2 is unfortunately much too imprecise for a meaningful comparison. We note that the properties of the system are such that the contribution to the apsidal motion from General Relativity effects (e.g., Giménez 1985) is expected to dominate (72%) over the classical terms from tidal and rotational distortion.
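The dominance of the General Relativity term is easy to verify to first order. The sketch below evaluates the standard GR apsidal advance per orbit, 6πGM_tot/(ac²(1 − e²)) (e.g., Giménez 1985), with the semimajor axis from Kepler's third law; the masses, period, and eccentricity are assumed round values, since Table 10 is not shown above.

```python
import math

G, c = 6.674e-11, 2.998e8                  # SI units
Msun, day, yr = 1.989e30, 86400.0, 3.156e7

# Assumed round system parameters.
M_tot = (1.0 + 0.6) * Msun
P = 6.11 * day
e = 0.088

# Semimajor axis from Kepler's third law.
a = (G * M_tot * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

# GR apsidal advance per orbit and the corresponding apsidal period.
domega = 6 * math.pi * G * M_tot / (a * c**2 * (1 - e**2))
U_GR = (2 * math.pi / domega) * P / yr

U_total = 19400.0                          # yr, quoted above (magnetic secondary)
print(f"U_GR ~ {U_GR:,.0f} yr")            # ~27,000 yr
print(f"GR share of total apsidal rate ~ {U_total / U_GR:.0%}")  # ~72%
```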
A larger effect of magnetic fields is seen on the convective turnover time. The Dartmouth models yield τ c = 16 days for the primary star, somewhat longer than other estimates mentioned earlier, and values for the secondary of 50.5 days (standard, non-magnetic), 49.3 days (rotational dynamo), and 65.4 days (turbulent dynamo).
CONCLUDING REMARKS
With masses and radii determined to better than 0.7% and 1.3%, respectively, and a secondary of spectral type M1, V530 Ori joins the ranks of the small group of eclipsing binary systems containing at least one low-mass main-sequence star with well-measured properties. What distinguishes this example is that the chemical composition is well known from our detailed analysis of the disentangled spectrum of the primary component, which is an easily studied G1 star. Investigations of most other systems containing M stars have struggled to infer metallicities directly from the molecule-ridden spectra of the M stars, or by more indirect means. Knowledge of the metallicity removes a free parameter in the comparison with stellar evolution models that permits a more meaningful test of theory, as we have done here. We have also made a special effort to establish an accurate temperature for the primary star by measuring it in several different ways, as the T eff value for the secondary hinges on it, as does the entire comparison with models.
Both the Yonsei-Yale and the Dartmouth models provide a good match to the primary star at the measured metallicity, suggesting that both its temperature and metallicity are accurate. On the other hand, we find that standard models from the Dartmouth series underpredict the radius and overpredict the temperature of the secondary by several percent, as has been found previously for many other cool main-sequence stars. Magnetic models from the same series succeed in matching the observed radii and temperatures of both stars at their measured masses with surface magnetic fields for the secondary of about 1-2 kG in strength, fairly typical of early M dwarfs, and an age of some 3 Gyr. These field strengths are not far from what we estimate empirically for V530 Ori B on the basis of the Rossby numbers. The agreement is reassuring, and suggests that we are closer to understanding radius inflation and temperature suppression for convective stars, not only qualitatively but also quantitatively. Earlier quantitative evidence in this direction was presented by Feiden & Chaboyer (2012, 2013), also for the Dartmouth models, with the present case being perhaps a stronger test in that our estimates of the individual magnetic field strengths used somewhat weaker assumptions. V530 Ori is thus a key benchmark system for this sort of test. Questions remain, however, about the exact nature of the magnetic fields and how their effect on the global properties of the stars should be treated in the models (rotational dynamo, turbulent dynamo, or some other prescription).
"year": 2014,
"sha1": "a5216ccfcd245e033a54e6395972feb3404ff28a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1410.6170",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a5216ccfcd245e033a54e6395972feb3404ff28a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Outbreak of Diarrhetic Shellfish Poisoning Associated with Mussels, British Columbia, Canada
In 2011, a Diarrhetic Shellfish Poisoning (DSP) outbreak occurred in British Columbia (BC), Canada that was associated with cooked mussel consumption. This is the first reported DSP outbreak in BC. Investigation of ill individuals, traceback of product and laboratory testing for toxins were used in this investigation. Sixty-two illnesses were reported. Public health and food safety investigation identified a common food source and harvest area. Public health and regulatory agencies took actions to recall product and notify the public. Shellfish monitoring program changes were implemented after the outbreak. Improved response and understanding of toxin production will improve management of future DSP outbreaks.
DSP is characterized by symptoms of nausea, vomiting, diarrhea, chills and abdominal pain [2]. The incubation period ranges from 30 min to 12 h and symptoms may last up to 3 days [7]. Chronic sequelae have not been reported, although there is very little known about possible health effects associated with chronic exposure to OA-group toxins or long term impact of acute intoxication [1,2,8]. The majority of DSP outbreaks have been reported outside of North America [2,6] but outbreaks and toxin detections (primarily DTX-1) have been reported from eastern Canada [9,10]. Although DTX-1 had been detected in the Northeastern Pacific marine sponge from British Columbia (BC) coastal waters, there were no confirmed reports of DSP prior to this outbreak [11]. Shellfish samples epidemiologically linked to illness are tested for lipophilic shellfish toxins (LSTs include OA-group toxins, pectenotoxins, azaspiracids, yessotoxins, and cyclic imine toxins), but clinical samples are not analysed for these toxins in Canada.
OA-group toxins are monitored as part of the LSTs to meet Canadian Shellfish Sanitation Program requirements [12]. LSTs have been monitored in select BC sites since 2004 and the LST monitoring was expanded in 2011. Affected areas are closed to harvesting when action levels are exceeded [12]. As of July, 2011 the Health Canada (HC) action level for DSP toxins was 0.20 μg/g for the sum of OA and DTX-1 in edible shellfish tissue. DTX-2 and DTX-3 were not included in the action level prior to August, 2011.
Monitoring DTX-3 directly in a regulatory laboratory is challenging because it may include many different compounds, but DTX-3 may be monitored indirectly by including either an enzymatic [13] or alkaline hydrolysis [14] step to remove the fatty acid esters and allow detection of the parent compound. The DTX-3 concentration is then calculated as the difference between the parent compound concentrations in the hydrolysed and unhydrolysed portions of the sample. This paper describes a DSP outbreak in BC and outlines further areas of study to improve understanding and ability to investigate future outbreaks.
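The indirect DTX-3 determination amounts to a simple difference, as the following sketch illustrates; the toxin concentrations used are hypothetical, not values from Table 1.

```python
# Toxin concentrations in ug/g of edible tissue; hypothetical values.
unhydrolysed = {"OA": 0.02, "DTX1": 0.05}   # free parent compounds only
hydrolysed = {"OA": 0.03, "DTX1": 0.19}     # parents after ester cleavage

# DTX-3 is monitored indirectly: hydrolysis frees the parent compounds
# from their fatty-acid esters, so the ester (DTX-3) fraction is the
# difference between the hydrolysed and unhydrolysed results.
dtx3 = sum(hydrolysed.values()) - sum(unhydrolysed.values())

# Interim Health Canada action level (Aug 2011): 0.20 ug/g for the sum of
# OA, DTX-1, DTX-2 and DTX-3; the hydrolysed total already includes DTX-3.
total = sum(hydrolysed.values())
print(f"DTX-3 = {dtx3:.2f} ug/g, OA-group total = {total:.2f} ug/g")
print("close harvest area" if total > 0.20 else "below action level")
```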
Methods
Public health authorities collect clinical and exposure information using a standard surveillance form [15]. Surveillance for shellfish related illness is often based on clusters and not at the individual level. During this investigation, clusters reported to public health and the implicated food premises (restaurant or retail) were investigated to determine the source of the mussels and collect supplier information. A food safety traceback is conducted by tracking a food product from the retail level through distribution to the production or processing facilities to find the suspected source. The Canadian Food Inspection Agency (CFIA) conducted a traceback investigation using information collected from the food premises and provided HC with analytical results for OA-group toxins in mussel samples from the harvest area on August 5 and 6, 2011 for risk assessment.
The BC Public Health Microbiology and Reference Laboratory tested leftover food samples from implicated food premises for enteric pathogens and tested stool samples for norovirus and bacteria from individuals who were symptomatic and consumed the mussels. The CFIA tested shellfish samples from the implicated harvest area for LSTs using liquid chromatography-mass spectrometry (LC-MS/MS) [16], including an alkaline hydrolysis procedure to detect DTX-3 [14].
Results
On August 3, 2011, public health officials in two BC regional health authorities were notified of gastrointestinal illness among individuals who had consumed cooked mussels at different restaurants between July 28 and August 2. Information on symptoms, incubation period, and consumption of cooked products in various locations led to the hypothesis that DSP was the cause of illness. Restaurants receiving customer complaints notified the shellfish industry (harvester and processors) of the reported illnesses. The harvester took steps to withdraw product on August 3.
Sixty-two clinical cases associated with 15 food premises in three health authorities were reported. All cases reported consuming cooked mussels between July 28 and August 6. The most common symptoms were diarrhea, nausea, vomiting, abdominal pain and cramps. The incubation period ranged from 5 to 15 h and symptoms lasted 1 to 3 days.
The food safety investigation and traceback identified a single harvest area at the north end of the Strait of Georgia ( Figure 1) and a single mussel harvester. The harvest dates of implicated mussels were between July 24 and 31. OA-group toxin results of mussel samples from this harvest area are presented in Table 1. Two samples were reported above the action level of 0.2 μg/g for the sum of OA and DTX-1. Monitoring results demonstrated that concentrations of OA-group toxins in mussel samples were below the action level after August 8. Interestingly, DTX-3 was the most prominent toxin detected and was present in all mussel samples harvested after July 19. Two mussel samples and two sauces served with mussels at implicated restaurants tested negative for enteric pathogens. Six clinical stool samples were negative for norovirus and enteric pathogens. In addition to the harvester-led withdrawal of product on August 3, a harvest-area closure was recommended by the CFIA on August 5 based on OA-group toxin concentrations. On August 6, a health hazard alert recalling all implicated mussels harvested from the area was issued by the CFIA based on a HC risk assessment. The area was re-opened on August 24 following established procedures [10].
Discussion
Algal blooms leading to OA-group toxin production and shellfish contamination are increasing worldwide [2,6,17]. Although the causes for this increase are not clear, environmental factors such as climate change and projected increases to ocean temperatures, reduction in pH and increasing availability of nitrate may lead to increasing algal blooms. Increased marine traffic and global sales of spats for cultivation have been proposed as possible reasons for the emergence of toxic algal species in new areas. In addition, social factors such as an increased consumption of seafood, or improved regulatory standards and monitoring to identify toxins and human illnesses may also have impact on this apparent increasing trend [2,17,18]. The trigger for toxin production that led to this outbreak is unknown and further study in consultation with experts is needed.
Washington State public health authorities also reported DSP cases in June, 2011 [19]. The cases also consumed mussels with high concentrations of DTX-1. Retrospectively, the similarity in timing and location of cases may suggest a common ecological reason for the emergence of OA-group toxins in Pacific Northwest coastal waters in 2011. Timely communication between departments and jurisdictions that share coastal waters will ensure that each is aware in the case of future events.
There is international variation in OA-group toxin action levels and laboratory testing methods [1,6,20]. Regulatory testing most commonly employs LC-MS/MS or a mouse bioassay (to be suspended by the European Union after 2014). LC-MS/MS methods offer greater sensitivity and the ability to identify and quantify specific toxins, and will be the primary method used internationally [1]. Canadian biotoxin monitoring is based on results of shellfish samples harvested from established sites. Some jurisdictions monitor phytoplankton levels in harvest-area water in addition to toxins in shellfish [6], and based on 2011 data from BC this approach was suggested by Esenkulova et al. to provide earlier warnings compared to biotoxin monitoring alone [21]. While some research has indicated a relationship between increased phytoplankton concentrations and toxin levels, limitations have also been noted [6,22,23]. Phytoplankton monitoring could complement current LST monitoring by providing information on potential harmful algal blooms that affect shellfish and human health, enabling an earlier and more reliable response in BC; its potential value could be explored in collaboration with researchers and industry.
A review of all HC marine biotoxin action levels was already in progress as of 2011, and an interim action level of 0.20 μg for the sum of OA, DTX-1, DTX-2 and DTX-3 per gram of edible shellfish tissue was provided by HC after reviewing LST results from this investigation [24]. Significant DTX-3 concentrations had not been detected in Canadian shellfish previously. Monitoring concentrations of DTX-3 is relevant because it may be present in higher concentrations than other OA-group toxins, as in our situation. In addition, there are indications that it can be hydrolysed to its parent compound in the gastrointestinal tract of humans [25]. Results from this DSP outbreak led to increasing the number of monitoring sites and sampling frequency in BC and the program has the capacity to expand and prioritize samples as necessary. Monitoring results are also provided to the shellfish industry as they become available [26]. These program changes in BC should further minimise the potential risk of contaminated shellfish entering commercial markets. Ongoing communication between jurisdictions and organizations as well as an understanding of factors contributing to OA-group toxin production should lead to improved identification and management in the event of a DSP outbreak.
There is limited information on the epidemiology of human illness caused by OA-group toxins. Improvements in identification and surveillance of human illness, understanding of exposures, dose-response, severity of illness and chronic sequelae will improve our understanding of the burden of disease and ability to communicate on the risks associated with DSP, which may lead to better public health information and response.
Conclusions
This DSP outbreak in BC was a significant event with 62 clinical illnesses reported and rapid action to identify and remove the source of illness. The intersectoral and collaborative approach to human illness surveillance and response to shellfish-related illnesses in BC led to rapid mitigation. Modifications to the monitoring program, which have already been implemented, will reduce the risk of DSP. Identifying factors contributing to toxin production could direct future monitoring plans.
"year": 2013,
"sha1": "5a72387ad39c64e7d62ec3eb42e57dff9887ae6e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/11/5/1669/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a72387ad39c64e7d62ec3eb42e57dff9887ae6e",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Sharp decay rates for localized perturbations to the critical front in the Ginzburg-Landau equation
We revisit the nonlinear stability of the critical invasion front in the Ginzburg-Landau equation. Our main result shows that the amplitude of localized perturbations decays with rate $t^{-3/2}$, while the phase decays diffusively. We thereby refine earlier work of Bricmont and Kupiainen as well as Eckmann and Wayne, who separately established nonlinear stability but with slower decay rates. On a technical level, we rely on sharp linear estimates obtained through analysis of the resolvent near the essential spectrum via a far-field/core decomposition which is well suited to accurately describing the dynamics of separate neutrally stable modes arising from far-field behavior on the left and right.
Introduction
The Ginzburg-Landau equation

A_t = A_xx + A − A|A|², A = A(x, t) ∈ ℂ, x ∈ ℝ, (1.1)

arises in many contexts as a modulation equation describing approximate dynamics near a Turing instability in pattern-forming systems. In many cases, patterns nucleate locally near a small, localized perturbation of the trivial background state, and then grow, saturate, and spread through a spatially extended system. A fundamental question is then to determine the speed at which localized disturbances spread through the system and the pattern this invasion mechanism produces in the wake. One often restricts mathematical considerations to a one-sided invasion process, in which case a first description of the invasion process focuses on the existence and stability of invasion fronts connecting the stable and unstable rest states. In the Ginzburg-Landau equation, these are traveling wave solutions A(x, t) = q(x − ct; c) satisfying

q'' + cq' + q − q³ = 0, lim_{ξ→−∞} q(ξ; c) = 1, lim_{ξ→∞} q(ξ; c) = 0. (1.2)

For each fixed c, q(·; c) generates a two-parameter family of invasion fronts with speed c via spatial translation and rotation of the complex phase, owing to the translation invariance and gauge symmetry of (1.1). The real fronts are monotone for c ≥ 2, and the front with the minimal speed c_* = 2 is the most interesting in light of the marginal stability conjecture [28,7,10], which postulates that solutions to (1.1) with compactly supported initial data, the most relevant in most invasion processes, spread with asymptotic speed 2. We therefore write q_* = q(·; 2), and refer to this solution as the critical front.
When restricted to real-valued solutions, (1.1) obeys a maximum principle, and one can then use comparison principle based arguments to show that non-negative, compactly supported initial data spread with asymptotic speed 2 [1,5,6,14,23,24]. The lack of a maximum principle for complex-valued solutions, however, presents a substantial challenge to resolving the marginal stability conjecture in (1.1). The present authors recently proved the marginal stability conjecture in a general framework of higher order parabolic equations, which in particular lack maximum principles, under conceptual assumptions on the existence and spectral stability of critical fronts [2]. In a broader setting, the minimal speed c * = 2 is replaced by the linear spreading speed, which characterizes marginal pointwise linear stability in the co-moving frame; see [2,17] for details. The analysis in [2], however, relies on an additional technical assumption that the invading state in the wake of the fronts is exponentially stable, with spectrum strictly contained in the left half plane. This assumption is violated here due to the gauge invariance of (1.1), with the invading state instead being only diffusively stable; see Figure 1.
Nonlinear stability of the critical front in the Ginzburg-Landau equation against sufficiently localized perturbations was established in [10,7]. The analysis in [10] is based on energy estimates, and establishes stability without identifying a precise decay rate, while [7] gives a more detailed description of the dynamics via renormalization group theory, establishing stability with decay rate t^{−1+ε} in the amplitude and t^{−1/2+ε} in the phase. Here we revisit this stability analysis and obtain sharp decay rates for the amplitude and phase of perturbations, thereby improving upon the results of [7,10]. We consider (1.1) in the co-moving frame with speed c_* = 2,

A_t = A_xx + 2A_x + A − A|A|², (1.3)

so that q_* is an equilibrium solution to the resulting equation, and consider complex-valued perturbations of q_* of the form A = (q_* + r)e^{iϕ}. To state our main result, we first introduce a smooth positive exponential weight ω satisfying

ω(x) = e^x for x ≥ 1, ω(x) = 1 for x ≤ −1, (1.4)

as well as smooth positive algebraic weights ρ_{r−,r+} for r_−, r_+ ∈ ℝ, satisfying

ρ_{r−,r+}(x) = (1 + x²)^{r_+/2} for x ≥ 1, (1 + x²)^{r_−/2} for x ≤ −1. (1.5)

Our main result is the following nonlinear stability of the critical front.
Theorem 1. There exist positive constants C and ε so that if (r_0, ϕ_0) ∈ (L¹ ∩ L^∞(ℝ)) × (L¹ ∩ W^{1,∞}(ℝ)) satisfy

‖ωρ_{0,1} r_0‖_{L¹} + ‖ωr_0‖_{L^∞} + ‖ϕ_0‖_{L¹∩W^{1,∞}} < ε, (1.6)

then the solution A = (q_* + r)e^{iϕ} to (1.3) with initial data A_0 = (q_* + r_0)e^{iϕ_0} exists for all t > 0 and satisfies

‖ωr(·, t)‖_{L^∞} ≤ Cε (1 + t)^{−3/2} (1.7)

and

‖ρ_{0,−1} ϕ(·, t)‖_{W^{1,∞}} ≤ Cε (1 + t)^{−1/2} (1.8)

for all t > 0.

[Figure 1 caption, right panel: a schematic of a real critical front (red) and the time evolution of an exponentially localized perturbation in the imaginary component b(x, t), in the co-moving frame with speed 2. The perturbation initially grows due to the pointwise instability of the background state, but is advected into the bulk of the front where it decays diffusively. This imaginary perturbation to the real front also induces a faster-decaying real perturbation to the real front, which we omit in this picture.]
As usual, W^{k,ℓ}(ℝ) denotes the Sobolev space of weakly differentiable functions up to order k with integrability index ℓ. Due to the gauge invariance of (1.1), Theorem 1 of course holds if one replaces q_* with q_* e^{iθ_0} for any fixed phase θ_0 ∈ [0, 2π). The t^{−3/2} decay exhibited here for the amplitude was established under the restriction to real-valued solutions in [13,11,3] and is known to be sharp in light of the asymptotics given in [13,3]. Key to the analysis of [2] establishing the marginal stability conjecture for an exponentially stable invading state are sharp decay estimates for perturbations of the critical front, used there to close a perturbative argument near a refined approximate solution. Indeed, the t^{−3/2} decay rate is closely related to the logarithmic delay −(3/2) log t in the position of a front solution evolving from initial data compactly supported on the right, as predicted by Ebert and van Saarloos [8]. In light of this, we are confident that the present analysis is not only of technical interest but also represents a significant step towards resolving the marginal stability conjecture in the Ginzburg-Landau equation.
Preliminaries
Choice of coordinates and general approach. Considering perturbations of the critical front of the form A = (q_* + r)e^{iϕ} leads to the system

r_t = r_xx + 2r_x + (1 − 3q_*²)r − 3q_* r² − r³ − (q_* + r)ϕ_x²,
ϕ_t = ϕ_xx + 2ϕ_x + (2(q_*' + r_x)/(q_* + r))ϕ_x.

The linearization about (r, ϕ) = (0, 0) is diagonal, and Palmer's theorem [25,26] implies that the essential spectrum of the r component of the linearization is unstable, since 1 − 3q_*² → 1 as x → ∞. Conjugating with the exponential weight ω defined in (1.4) stabilizes the essential spectrum, so that it touches the imaginary axis at the origin and is otherwise contained in the left half-plane; see Figure 1, left panel. Hence we define the weighted variable p = ωr, so that we recover this marginal stability by restricting to exponentially localized perturbations.
The linearization in the ϕ component is L_ϕ = ∂_xx + 2∂_x + (2q_*'/q_*)∂_x. The essential spectrum of this operator is marginally stable, but we encounter an additional technical difficulty, namely that the coefficient 2q_*'/q_* attains its limit at +∞ at only an algebraic rate. As we shall see, this slow algebraic convergence would obstruct our approach to obtaining linear estimates, and we therefore remove this difficulty by introducing a weighted variable ψ = ωq_*ϕ. These coordinates are used in a heuristic argument for the expected decay rates in [7], and are similar to those used to establish nonlinear stability of source defects in the complex-coefficient Ginzburg-Landau equation in [4].
An alternative approach would separate A = a + ib into real and imaginary components; indeed, these coordinates are natural in that the rest state A ≡ 0 admits a two-dimensional linear pointwise instability, with separate growth in the real and imaginary components. Hence the linearization about a critical front in the imaginary component is also unstable, requiring an exponential weight to push the essential spectrum to the imaginary axis. This is somehow masked in (r, ϕ) coordinates due to the singularity in polar coordinates as r → 0⁺. The technical difficulty with using (a, b) coordinates is that in these coordinates, critical nonlinear terms appear in the analysis of the state in the wake, so that a normal form-type coordinate transformation would be needed to remove these terms and close a nonlinear argument. We note here that ψ(x, t) ∼ e^x b(x, t) for x ≫ 1, so that ψ captures the behavior of the imaginary component in the leading edge, while simultaneously enjoying the advantage that polar coordinates capture only irrelevant nonlinear terms in the wake.
The weighted variables (p, ψ) then solve a system of the form

p_t = L_p p + N_p(p, ψ), ψ_t = L_ψ ψ + N_ψ(p, ψ),

where the linear operators, given in (1.13) and (1.14), are the conjugates

L_p = ω(∂_xx + 2∂_x + 1 − 3q_*²)ω^{−1}, (1.13)
L_ψ = (ωq_*)(∂_xx + 2∂_x + (2q_*'/q_*)∂_x)(ωq_*)^{−1}, (1.14)

and N_p, N_ψ collect the remaining nonlinear terms. The coefficients of L_ψ and L_p each attain limits exponentially quickly as x → ±∞, with limiting operators

L_p^− = ∂_xx + 2∂_x − 2, L_p^+ = ∂_xx, L_ψ^− = ∂_xx + 2∂_x, L_ψ^+ = ∂_xx.

The essential spectra of these operators are given by the dispersion curves

Σ_p^−: λ = −k² + 2ik − 2, Σ^+: λ = −k², Σ_ψ^−: λ = −k² + 2ik, k ∈ ℝ.

These curves determine the boundaries of the essential spectra of the full operators L_p and L_ψ; see Figure 1, left panel, for a schematic and [21,12] for background. Our analysis is based on sharp linear decay estimates which we obtain by deforming the integration contours in the definitions of the semigroups e^{L_p t}, e^{L_ψ t} near the essential spectrum. We then extract time decay from precise estimates on the behavior of the resolvents of L_p, L_ψ near their essential spectrum. Crucial to our approach is the use of a far-field/core decomposition which allows us to efficiently separate behavior arising from the limiting dynamics at +∞ from that determined by dynamics at −∞. We are thereby able to decompose the resolvent into two terms and deform the integration contours in the formula for the semigroup separately in each of these terms, using contours adapted to the behavior on the right as in [3,2] in one case and contours adapted to the diffusive spectrum with quadratic tangency at the origin, Σ_ψ^−, as in [20,19], in the other.

The stability of the critical front considered here bears some conceptual similarities to the stability of source defects in the complex Ginzburg-Landau equation considered in [4]. In particular, a key challenge in both contexts is characterizing diffusive stability in the presence of outward transport. The approach in [4] uses an ansatz to explicitly capture outgoing diffusive wave packets and then establish decay using pointwise semigroup estimates. Here, rather than explicitly capturing the diffusive wave packets advected to the left in the bulk of the front (see Figure 1), we take advantage of the fact that the outward transport induces additional decay in weighted norms which allow for algebraic growth. We are able to estimate the nonlinearities in such norms due to the fact that only derivatives of ϕ appear in the (r, ϕ) system, and hence when we change to (r, ψ) coordinates, every term in the nonlinearity involving ψ, rather than ψ_x, carries a factor of (ω^{−1}q_*^{−1})_x and hence is very localized on the left. This is ultimately due to the gauge invariance in the original coordinates in (1.1). The improved decay of ψ in algebraically weighted norms allowing growth, as well as improved decay of ψ_x when compared to the diffusive decay rate t^{−1/2}, is then sufficient to close a nonlinear argument. Our approach to the nonlinear argument is therefore somewhat more direct than that of [4], but at the cost of a detailed description of the outgoing wave packets.
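As a quick check of the spectral configuration described above (and sketched in the absent Figure 1), the following snippet evaluates the three dispersion curves as reconstructed here and confirms that Σ_p^− is strictly stable while Σ^+ and Σ_ψ^− touch the origin.

```python
import numpy as np

k = np.linspace(-5, 5, 2001)

# Dispersion curves of the limiting operators via the Fourier ansatz e^{ikx}.
curves = {
    "Sigma_p^-   (d_xx + 2 d_x - 2)": -k**2 + 2j * k - 2,
    "Sigma^+     (d_xx)": -k**2 + 0j * k,
    "Sigma_psi^- (d_xx + 2 d_x)": -k**2 + 2j * k,
}

for name, lam in curves.items():
    print(f"{name}: max Re(lambda) = {lam.real.max():+.2f}")

# Sigma_p^- lies strictly in the left half plane (max Re = -2), while
# Sigma^+ and Sigma_psi^- touch the origin; near lambda = 0 the curve
# Sigma_psi^- has the diffusive quadratic tangency Re = -(Im/2)^2.
```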
We also mention related work establishing stability of supercritical fronts, moving faster than the linear spreading speed, in the Swift-Hohenberg equation [9] and in the Ginzburg-Landau equation coupled to an additional conservation law [16], where the main difficulty is again to characterize diffusive decay in the presence of outward transport. The methods there are specifically adapted to supercritical fronts, relying crucially on the fact that one can obtain exponential in time linear stability of the unstable rest state in a suitable exponentially weighted norm.
Outline of the paper. In Section 2, we revisit the study of the resolvent of L p in order to obtain new decay estimates for e Lpt in L 1 (R), as we will need these estimates to close our nonlinear arguments. In Section 3, we study the resolvent of L ψ , separating behavior originating from limiting dynamics on the left and on the right via a far-field core decomposition, extending techniques introduced in [3,2]. In Section 4, we translate the resolvent estimates obtained in the preceding two sections into linear decay estimates via integrating over appropriately chosen contours. In Section 5, we use control of a carefully constructed time-weighted norm to show that the linear decay estimates persist for the full nonlinear system, thereby proving Theorem 1.
Function spaces.
We require more general exponential weights for our analysis of the resolvent near the essential spectrum. For η_−, η_+ ∈ ℝ, we define a smooth positive exponential weight ω_{η−,η+} satisfying

ω_{η−,η+}(x) = e^{η_+ x} for x ≥ 1, e^{η_− x} for x ≤ −1.

For a non-negative integer k and 1 ≤ ℓ ≤ ∞, we then define the exponentially weighted Sobolev space W^{k,ℓ}_{exp,η−,η+}(ℝ) through the norm

‖f‖_{W^{k,ℓ}_{exp,η−,η+}} = ‖ω_{η−,η+} f‖_{W^{k,ℓ}}. (1.21)

When k = 0, we write W^{0,ℓ}_{exp,η−,η+}(ℝ) = L^ℓ_{exp,η−,η+}(ℝ). Similarly, for r_−, r_+ ∈ ℝ, we define algebraically weighted Sobolev spaces W^{k,ℓ}_{r−,r+}(ℝ) through the norm

‖f‖_{W^{k,ℓ}_{r−,r+}} = ‖ρ_{r−,r+} f‖_{W^{k,ℓ}},

where ρ_{r−,r+} is given by (1.5), and for k = 0 we write W^{0,ℓ}_{r−,r+}(ℝ) = L^ℓ_{r−,r+}(ℝ).

Additional notation. We let B(X, Y) denote the space of bounded linear operators between two Banach spaces X and Y, equipped with the operator norm topology. For δ > 0, we let B(0, δ) denote the open ball of radius δ centered at the origin in the complex plane. When the intention is clear, we may abuse notation slightly by writing a function u(x, t) or u(x; γ) as u(t) = u(·, t) or u(γ) = u(·; γ), viewing it as an element of some function space for each t or γ. Throughout the paper, we use the notation ⟨x⟩ = (1 + x²)^{1/2}.
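One concrete C¹ realization of these weights, useful for numerical experiments, is sketched below; the interpolation on [−1, 1] is an arbitrary choice, as any smooth positive interpolant matching the exponential and algebraic behavior above works.

```python
import numpy as np

def smoothstep(x):
    """C^1 ramp from 0 (x <= 0) to 1 (x >= 1)."""
    t = np.clip(x, 0.0, 1.0)
    return t * t * (3 - 2 * t)

def omega(x, eta_minus, eta_plus):
    """Positive weight equal to e^{eta_+ x} for x >= 1 and e^{eta_- x}
    for x <= -1, with one concrete interpolation in between."""
    eta = eta_minus + (eta_plus - eta_minus) * smoothstep((x + 1) / 2)
    return np.exp(eta * x)

def rho(x, r_minus, r_plus):
    """Algebraic weight (1+x^2)^{r_+/2} on the right, (1+x^2)^{r_-/2} on the left."""
    r = r_minus + (r_plus - r_minus) * smoothstep((x + 1) / 2)
    return (1 + x**2) ** (r / 2)

x = np.array([-3.0, 0.0, 3.0])
print(omega(x, 0.0, 1.0))   # ~ [1, 1, e^3]: the weight (1.4)
print(rho(x, 0.0, 1.0))     # ~ [1, 1, sqrt(10)]: the weight rho_{0,1}
```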
Resolvent estimates for L_p
The linearization in the amplitude component L p is precisely the linearization about a real Fisher-KPP front, which we have studied in greater generality via our far-field/core approach in [3,2]. To unfold the branch point in the right dispersion curve Σ + , we let γ = √ λ, with branch cut chosen along the negative real axis. The sharp t −3/2 decay rate is implied by the following regularity of the resolvent near the essential spectrum, which is a special case of [2, Proposition 3.5].
Proposition 2.1. There exist positive constants C and δ and a bounded limiting operator R_p^0 such that (L_p − γ²)^{−1}f − R_p^0 f is controlled, with a bound of order C|γ|, in the weighted norms of [2, Proposition 3.5], for all γ ∈ B(0, δ) to the right of the essential spectrum.

To close the nonlinear argument here, we require an additional linear decay estimate measuring the solution in L¹, which we prove by estimating the resolvent near the essential spectrum in L¹. We start by analyzing the resolvent of the limiting operator on the right, L_p^+ = ∂_xx.
Resolvent estimates for L_p^+
As in [3,2], we take advantage of the absorption mechanism induced by the strong spectral stability of L_p^− by establishing estimates on (L_p^+ − γ²)^{−1} restricted to odd functions, and then enforcing this oddness in our far-field/core decomposition when we pass estimates to the full resolvent. For any sufficiently localized odd function f, the action of (L_p^+ − γ²)^{−1} = (∂_xx − γ²)^{−1} for any γ ∈ ℂ with Re γ > 0 is given by

[(∂_xx − γ²)^{−1}f](x) = −(1/(2γ)) ∫_0^∞ (e^{−γ|x−y|} − e^{−γ(x+y)}) f(y) dy, x ≥ 0,

extended to x < 0 by oddness. Using this representation, we establish the following estimate.

Lemma 2.2. There exists a constant C > 0 such that for any odd f with ρ_{0,1}f ∈ L¹(ℝ), we have

‖(L_p^+ − γ²)^{−1}f‖_{L¹(0,∞)} ≤ (C/|γ|) ‖ρ_{0,1}f‖_{L¹(0,∞)}

for any γ with Re γ ≥ |Im γ|/2.
Proof. First we establish the pointwise estimate

|G_γ(x, y)| ≤ C⟨y⟩ e^{−c|γ||x−y|}

for x, y ≥ 0, for some constants C, c > 0, and for any γ with Re γ ≥ |Im γ|/2, where G_γ denotes the kernel in the representation above. To prove this estimate, first consider the case x ≥ y ≥ 0, for which we have

|G_γ(x, y)| = (1/(2|γ|)) |e^{−γ(x−y)}| |1 − e^{−2γy}|.

The restriction Re γ ≥ |Im γ|/2 implies that −Re γ ≤ −c|γ| for some constant c > 0. Together with the fact that |1 − e^z| ≤ C|z| for Re z ≤ 0, we thereby obtain

|G_γ(x, y)| ≤ C y e^{−c|γ|(x−y)} ≤ C⟨y⟩ e^{−c|γ|(x−y)}.

For y ≥ x ≥ 0, the same argument with the roles of x and y interchanged leads to the estimate

|G_γ(x, y)| ≤ C⟨x⟩ e^{−c|γ|(y−x)} ≤ C⟨y⟩ e^{−c|γ|(y−x)},

since ⟨x⟩ ≤ ⟨y⟩ for 0 ≤ x ≤ y, and hence we have the desired pointwise estimate. Using this estimate, we obtain

‖(L_p^+ − γ²)^{−1}f‖_{L¹(0,∞)} ≤ C ∫_0^∞ ∫_0^∞ ⟨y⟩ e^{−c|γ||x−y|} |f(y)| dy dx ≤ (C/|γ|) ‖ρ_{0,1}f‖_{L¹(0,∞)},

as desired.
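As a numerical sanity check of the reconstructed odd-extension kernel, the following sketch applies it by quadrature and verifies the defining equation; the grid size and test function are arbitrary choices.

```python
import numpy as np

gamma = 0.3 + 0.1j                      # Re(gamma) > 0, Re >= |Im|/2
L, n = 24.0, 1201
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

f = x * np.exp(-x**2)                   # localized odd function (x >= 0 half)

# Odd-extension kernel of (d_xx - gamma^2)^{-1}, as written above.
X, Y = np.meshgrid(x, x, indexing="ij")
K = -(np.exp(-gamma * np.abs(X - Y)) - np.exp(-gamma * (X + Y))) / (2 * gamma)
u = (K * f).sum(axis=1) * h             # quadrature for u(x)

# The defining equation u'' - gamma^2 u = f should hold in the interior,
# and u(0) = 0 by oddness.
upp = np.gradient(np.gradient(u, h), h)
res = np.abs(upp - gamma**2 * u - f)[5:-5].max()
print(f"u(0) = {abs(u[0]):.1e}, max residual ~ {res:.1e}")
```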
The following estimate establishes boundedness of the resolvent in L ∞ provided an extra factor of exponential localization, and is useful in passing resolvent estimates onto the core terms in our far-field/core decomposition.
Lemma 2.3.
Fix η > 0. There exist positive constants C and δ such that for any odd f ∈ L¹(ℝ), we have

‖e^{−η·}(L_p^+ − γ²)^{−1}f‖_{L^∞(0,∞)} ≤ C‖f‖_{L¹}

for any γ ∈ B(0, δ) with Re γ ≥ 0.

Proof. The result follows from the pointwise estimate

e^{−ηx}|G_γ(x, y)| ≤ C (2.7)

for x, y ≥ 0, and Re γ ≥ 0. To prove this, first consider x ≥ y ≥ 0. In this case, arguing as in the preceding lemma, we have

e^{−ηx}|G_γ(x, y)| ≤ C e^{−ηx} ⟨x⟩ |e^{−γx}| |e^{γy}|.

Since x ≥ y, we have e^{−ηx} ≤ e^{−ηx/2} e^{−ηy/2}, and hence if γ is sufficiently small relative to η, e^{−ηx}⟨x⟩|e^{−γx}||e^{γy}| is bounded, so that (2.7) holds for x ≥ y ≥ 0. For y ≥ x ≥ 0, we again argue as in the preceding lemma to instead obtain

e^{−ηx}|G_γ(x, y)| ≤ C e^{−ηx} ⟨x⟩ ≤ C,

as desired.
Finally, we state a basic estimate which corresponds to the standard L 1 -L ∞ , t −1/2 decay estimate in the heat equation, which will prove useful in establishing the same estimate for e L ψ t . This estimate follows readily from Young's convolution inequality.
Lemma 2.4.
There exists a constant C > 0 such that for any f ∈ L¹(ℝ), we have

‖(L_p^+ − γ²)^{−1}f‖_{L^∞} ≤ (C/|γ|) ‖f‖_{L¹}

for any γ with Re γ ≥ |Im γ|/2.
Full resolvent estimates for L_p
In order to establish the equivalent of Lemma 2.2 for the full resolvent (L_p − γ²)^{−1}, we revisit the far-field/core decomposition used to prove Proposition 2.1 in greater generality in [3,2]. We first let (χ_−, χ_c, χ_+) be a smooth partition of unity on ℝ with χ_+(x) = 0 for x ≤ 1, χ_+(x) = 1 for sufficiently large x, and χ_−(x) = χ_+(−x). Hence χ_c(x) is compactly supported. We decompose a given f ∈ L¹_{0,1}(ℝ) as

f = χ_− f + χ_c f + χ_+ f,

and decompose the solution to (L_p − γ²)p = f as

p = χ_− p_− + p_c + χ_+ p_+,

where p_+ = (L_p^+ − γ²)^{−1}f_+, with f_+ the odd extension of χ_+f, p_− = (L_p^− − γ²)^{−1}(χ_−f), and p_c solves the remaining equation

(L_p − γ²)p_c = f̃(γ) := f − (L_p − γ²)(χ_− p_− + χ_+ p_+). (2.13)

Since the coefficients of L_p attain their limits exponentially quickly as x → ±∞ and the commutator [L_p^+, χ_+] is compactly supported, f̃(γ) is exponentially localized with rate uniform in γ for γ small. Utilizing also regularity of p_+ in γ, we see that f̃(γ) has a well-defined limit at γ = 0, and we obtain detailed estimates on f̃(γ) (Lemma 2.5), which are a special case of Lemma 3.7 of [2].
With exponential localization of f̃(γ), we solve (2.13) via the far-field/core ansatz

p_c = w + b χ_+ e^{−γ·},

with w exponentially localized and b ∈ ℂ, where the far-field term captures the decaying solution e^{−γx} of the limiting equation (∂_xx − γ²)u = 0 on the right. Inserting this ansatz into (2.13) gives an equation

F(w, b; γ) := (L_p − γ²)(w + b χ_+ e^{−γ·}) = f̃(γ).

Using Fredholm properties of L_p on exponentially weighted spaces, we obtain the following invertibility of F. See [3,2] for further details. We also carry out a similar argument in the following section in order to obtain estimates on (L_ψ − γ²)^{−1}.
Proposition 2.6. For γ in a neighborhood of the origin, the map F(·, ·; γ) constructed above is invertible. We thereby denote the solution to F(w, b; γ) = f̃ by

(w, b) = (T(γ)f̃, B(γ)f̃), (2.20)

with analytic maps γ ↦ T(γ) and γ ↦ B(γ). We use this proposition to solve for p_c and obtain the following estimates on the full resolvent.
Proposition 2.7. There exist positive constants C and δ such that for any f ∈ L¹_{0,1}(ℝ), we have

‖(L_p − γ²)^{−1}f‖_{L¹} + ‖∂_x(L_p − γ²)^{−1}f‖_{L¹} ≤ (C/|γ|) ‖f‖_{L¹_{0,1}}

for any γ ∈ B(0, δ) with Re γ ≥ |Im γ|/2.

Proof. The estimate on the derivative is strictly easier, so we focus only on estimating (L_p − γ²)^{−1}f in L¹. The desired estimates for χ_+p_+ and p_− follow from Lemma 2.2 and the fact that the spectrum of L_p^− does not contain the origin, so we only need to establish the estimate for p_c. By Proposition 2.6, we have

p_c = T(γ)f̃(γ) + B(γ)f̃(γ) χ_+ e^{−γ·}.

Using the boundedness of T(γ) together with the estimates in Lemma 2.5, we readily obtain

‖T(γ)f̃(γ)‖_{L¹} ≤ C‖f‖_{L¹_{0,1}}.

For the other term, similarly using boundedness of B(γ), we have

|B(γ)f̃(γ)| ≤ C‖f‖_{L¹_{0,1}}.

Since

‖χ_+ e^{−γ·}‖_{L¹} ≤ ∫_1^∞ e^{−Re γ x} dx ≤ C/|γ|

for Re γ ≥ c|γ|, this term satisfies the same bound, which completes the proof of the proposition.
Resolvent estimates for L ψ
We use the same overall far-field/core approach to analyze the resolvent of L ψ . The main difference is that the spectrum of L − ψ is also marginally stable, in addition to that of L + ψ , so that we have to take into account neutrally stable modes arising from behavior on the left as well as from the right. We first establish spectral stability, ruling out unstable eigenvalues and the possibility of an embedded "eigenvalue" (with a bounded eigenfunction) at the origin.
Proof. The linear equation L ψ ψ = 0 has a positive pointwise solution ψ = ωq * . A Sturm-Liouville argument therefore implies that there can be no eigenvalues with non-negative real part; see for instance [27, proof of Theorem 5.5]. Using standard theory of exponential dichotomies, one concludes that ωq * is the only solution which is bounded for x < 0. Since ωq * is unbounded on x > 0, we therefore see that there is no solution which is bounded on the whole real line, as desired. Alternatively, one can revert to ϕ coordinates and explicitly solve the resulting first-order ODE for u = ϕ x to find the other linearly independent solution which is unbounded on x < 0.
Resolvent estimates for L − ψ
Here we study the resolvent for the limiting operator on the left in the ψ linearization, L − ψ = ∂ xx +2∂ x . This resolvent is given by with spatial eigenvalues ν ± (λ) given by Since the dispersion curve associated to L − ψ has no branch points, the resolvent kernel G − λ is pointwise analytic in λ in a neighborhood of the origin. Using the formula (3.2) for the resolvent kernel, one readily obtains the following lemma.
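For concreteness, the spatial eigenvalues and the resolvent kernel can be computed directly from L − ψ = ∂ xx + 2∂ x ; the following sketch (in LaTeX notation) is consistent with the coefficient −1/(2√(1+λ)) and the expansions ν + (λ) = O(λ), ν − (λ) + 2 = O(λ) used in the proofs below.

```latex
% Direct computation for L^-_\psi = \partial_{xx} + 2\partial_x (reconstruction sketch).
\[
  \nu^{2} + 2\nu - \lambda = 0
  \;\Longrightarrow\;
  \nu_{\pm}(\lambda) = -1 \pm \sqrt{1+\lambda},
  \qquad
  \nu_{+}(\lambda) = \tfrac{\lambda}{2} + O(\lambda^{2}), \quad
  \nu_{-}(\lambda) + 2 = O(\lambda).
\]
The associated resolvent kernel is
\[
  G^{-}_{\lambda}(x,y) = -\frac{1}{2\sqrt{1+\lambda}}
  \begin{cases}
    e^{\nu_{-}(\lambda)(x-y)}, & x \ge y,\\[2pt]
    e^{\nu_{+}(\lambda)(x-y)}, & x \le y,
  \end{cases}
\]
which is bounded for $\lambda$ to the right of the essential spectrum and pointwise
analytic in $\lambda$ near the origin, the branch point of $\sqrt{1+\lambda}$ lying
at $\lambda = -1$.
```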
Lemma 3.2. There exist positive constants C and δ such that for any
for all λ ∈ B(0, δ) to the right of the essential spectrum of L − ψ .
We next analyze the regularity in λ of the spatial derivative of (L − ψ − λ) −1 f in order to later characterize time decay of derivatives of ψ.
for all λ ∈ B(0, δ) to the right of the essential spectrum of L − ψ .
Proof. For any λ to the right of the essential spectrum of L − ψ , we have the representation (3.7). The coefficient −1/(2√(1+λ)) is analytic in λ, so we only need to estimate the integrals. For the first term, if |ν − (λ) + 2||ξ| ≤ 1, then the desired bound follows by Taylor expansion for λ small. If on the other hand |ν − (λ) + 2||ξ| > 1, then we simply use the additional factor of e −2ξ to absorb the small exponential growth of e (ν − (λ)+2)ξ possible for λ small. For the other integral, we note that Re ν + (λ) ≥ 0 for λ to the right of the essential spectrum of L − ψ , and ν + (λ) = O(λ) for λ small, completing the proof of (3.5).

Lemma 3.4. There exist positive constants C, c, and δ and a contour Γ, given by (3.8), such that the stated estimate holds for any f ∈ L 1 (R), for λ small and to the right of Γ.
Proof. We split the integral representation of the resolvent as in (3.7). Note that if we choose c sufficiently small, Γ is to the right of the essential spectrum of L − ψ , touching it only at the origin. The estimate on the first integral in (3.7) follows readily from the uniform exponential localization of e ν − (λ)(x−y) for y ≤ x and λ to the right of the essential spectrum of L − ψ , so we focus on the second integral. Using the change of variables y = x − z there and then changing the order of integration, we have provided λ is to the right of the essential spectrum of L − ψ , from which the lemma follows.
Lemma 3.5. There exist positive constants C and δ and a bounded limiting operator
for any λ ∈ B(0, δ) to the right of the essential spectrum of L − ψ .
Proof. Using the formula (3.2) for the resolvent kernel, we have Regularity in λ of the first integral may be established as in the proof of Lemma 3.3, so we focus on the second term. Exploiting the fact that supp(f ) ⊆ (−∞, 0], we have For x ≤ y ≤ 0 and for λ to the right of the essential spectrum of Hence we have from which the lemma readily follows.
Next we characterize regularity in a full neighborhood of λ = 0 given an extra exponentially localized coefficient, which is needed to reconcile with regularity of the right resolvent in our far-field/core argument in the following section. Lemma 3.6. Fix η > 0 and 1 ≤ ≤ ∞. There exist positive constants C and δ such that for any Proof. We focus first on the case = ∞. As in the proof of Lemma 3.5, we split the integral representation of the resolvent into two terms and focus on the integral from x to 0. We claim that for x ≤ y ≤ 0, we have for λ sufficiently small. To prove this, we first consider the case where |ν + (λ)(x − y)| ≤ 1. In this case, we have by Taylor's theorem If λ is sufficiently small relative to η, then the small exponential growth of e Re ν + (λ)x and e −Re ν + (λ)y can be absorbed into the factors of e η 4 x and e η 2 y . Hence we obtain which completes the proof of (3.12). By this estimate, we have which completes the proof of the lemma in the case = ∞. To handle 1 ≤ < ∞, we simply write , and repeat the above argument withη = η/2.
Resolvent estimates for L ψ
As in Section 2.2, we solve the equation (L ψ − γ 2 )ψ = f via a far-field/core decomposition. We again decompose f using the partition of unity introduced in Section 2.2, take the odd extension of f + , and let ψ + solve the corresponding equation for the limiting operator, in order to take advantage of the improved properties of (L + ψ − γ 2 ) −1 = (∂ xx − γ 2 ) −1 on odd functions. We let ψ − solve the analogous equation for L − ψ , and then decompose the solution ψ to (L ψ − γ 2 )ψ = f accordingly. As a result, ψ c solves (3.14), where f̃ collects the remaining terms as in (3.15). As in the previous section, we start by establishing regularity of f̃ in γ in appropriate exponentially weighted spaces.
Lemma 3.7.
There exist positive constants C and δ such that for any γ ∈ B(0, δ) with Re γ ≥ 0, we have for any f ∈ L 1 0,1 (R), for = 1 or ∞ as well as for any f ∈ L 1 (R).
Proof. First we prove (3.16). The dependence on γ enters through ψ + and ψ − . The coefficients of ψ + and its derivatives in (3.15) are all supported on x > 0 and exponentially localized with a rate uniform in γ for γ small. It follows from [2, Proposition 3.1] that these terms are Lipschitz in γ in the desired region.
For the terms involving ψ − , expansions to order λ = γ 2 are guaranteed by Lemma 3.6 in a full neighborhood of λ = 0 thanks to the strongly localized coefficients, so in particular for Re γ ≥ 0, as desired.
The estimate (3.17) follows similarly, but with localization of terms involving ψ + obtained from Lemma 2.3. The term ‖f‖ L ∞ on the right-hand side is needed to control the remaining term. We solve (3.14) by taking advantage of this exponential localization of f̃ and the Fredholm properties of L ψ on exponentially weighted spaces. To carry this out, we make an ansatz requiring v to be exponentially localized. Inserting this ansatz into (3.14) leads to an equation for the unknowns. For η > 0, we let (X η , Y η ) denote either pair of spaces in (3.22).

Proof. The Fredholm index may be computed from the asymptotic dispersion relations (3.24), obtained from substituting ψ = e λt+νx into ψ t = L ± ψ ψ; see for instance [21,12] for background. To take into account the effect of the exponential weight, we compute the roots of ν ↦ d + ψ (λ, ν − η) for η > 0 small, and find a double root at ν = η > 0. The roots of d − ψ (0, ν + η) are ν = −η and ν = −η − 2, each negative for η > 0. The fact that none of these roots are zero implies that L ψ is Fredholm on X η , and its Fredholm index is given by the difference of the Morse indices, as desired.
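Under the identifications suggested by the text (L + ψ acting as ∂ xx on odd data and L − ψ = ∂ xx + 2∂ x ), the asymptotic dispersion relations in (3.24) can be written out explicitly; the following is a reconstruction consistent with the roots listed in the proof above.

```latex
% Dispersion relations from \psi = e^{\lambda t + \nu x} in \psi_t = L^{\pm}_\psi \psi,
% assuming L^+_\psi = \partial_{xx} and L^-_\psi = \partial_{xx} + 2\partial_x.
\[
  d^{+}_{\psi}(\lambda,\nu) = \nu^{2} - \lambda,
  \qquad
  d^{-}_{\psi}(\lambda,\nu) = \nu^{2} + 2\nu - \lambda .
\]
At $\lambda = 0$, the shifted relation $\nu \mapsto d^{+}_{\psi}(0,\nu-\eta)$ has a double
root at $\nu = \eta > 0$, while $\nu \mapsto d^{-}_{\psi}(0,\nu+\eta)$ has the roots
$\nu = -\eta$ and $\nu = -\eta - 2$, matching the Morse-index count in the proof above.
```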
The map F is well defined on the above spaces and is analytic in γ.
Proof. We rewrite F, taking advantage of the fact that (L − ψ − γ 2 )(e ν + (γ 2 )· ) = (L + ψ − γ 2 )e −γ· = 0. The terms involving (L ψ − L ± ψ ) or commutators with the cutoff functions are all exponentially localized with uniform rate, so F preserves exponential localization and hence is well defined on the above spaces, as desired. Analyticity follows from the fact that ν + (γ 2 ) is analytic; for more details, see for instance [2, proof of Lemma 3.9].

Corollary 3.10. For η > 0 sufficiently small and for either pair of spaces (X η , Y η ) in (3.22), there exists a δ > 0 such that for γ ∈ B(0, δ), the map is invertible, with analytic maps as indicated.

Proof. The map is Fredholm of index 0 for γ sufficiently small. Lemma 3.1 implies that this map has trivial kernel at γ = 0, since a nontrivial kernel would give rise to a bounded solution to L ψ ψ = 0. The result then follows from the implicit function theorem.
Having solved (L ψ − γ 2 )ψ = f in a neighborhood of γ = 0 in the preceding corollary, we now analyze the regularity of the solution in γ near the essential spectrum. The following proposition will be used in Section 4 to establish diffusive decay of e L ψ t L 1 →L ∞ .
Proof. With notation as in the far-field/core decomposition carried out above, define ψ left as in (3.31). The term χ − ψ − (γ) satisfies the desired estimate by Lemma 3.2. For the second term, we obtain the desired bound using Corollary 3.10 to estimate the operator norm of B − (γ) and Lemma 3.7 to control the norm of f̃(γ). For γ 2 to the right of the essential spectrum of L − ψ , χ − e ν + (γ 2 )· is bounded, so ψ left satisfies the desired estimate.
We collect the remaining terms in the far-field/core decomposition in ψ right , including the exponentially localized "center" term, so (3.32) The term χ + ψ + (γ) satisfies the desired estimate by Lemma 2.4, and the center term T (γ)f (γ) is controlled in L ∞ by f L 1 ∩L ∞ thanks to Corollary 3.10 and Lemma 3.7. Also using Corollary 3.10 and Lemma 3.7 to control B + (γ)f (γ), we obtain for Re γ ≥ 0, so in particular ψ right satisfies the desired estimate.
The pointwise analyticity follows from the analyticity in γ from Corollary 3.10 as well as the representation formulas for ψ + and ψ − via integration against the resolvent kernels.
The next proposition takes advantage of the outward transport on the left to extract faster temporal decay, characterized here by higher regularity in γ, by measuring the solution in a norm which allows algebraic growth on the left.
We now estimate ψ̃ right . By Corollary 3.10 and Lemma 3.7, we have the desired bound for γ small with Re γ ≥ 0. For the term χ + ψ + (γ), the claim follows from the pointwise estimate (2.5) on G odd γ . For the remaining terms in ψ̃ right , we have the desired bound for γ small with Re γ ≥ (1/2)|Im γ|, using Corollary 3.10 and Lemma 3.7 to control T (γ)f̃(γ) and B + (γ)f̃(γ). This completes the proof of the proposition.
We now establish improved regularity in γ of spatial derivatives of ψ, which will eventually translate into improved temporal decay. First, we characterize this regularity when measuring the derivative in L 1 .
Proposition 3.13.
There exist constants C and δ > 0 such that for all f ∈ L 1 (R), decomposing the solution to (L ψ − γ 2 )ψ = f as in Proposition 3.11, we have for all γ ∈ B(0, δ) with γ 2 to the right of the contour Γ defined in (3.8), and The second term satisfies the desired estimate by Lemma 3.4. For the first term, we have For the other part of ∂ x ψ left , we have by Corollary 3.10 and Lemma 3.7 For the first term in parenthesis, we have for γ 2 to the right of the essential spectrum of L − ψ . The second term in the parenthesis is bounded by the argument in the proof of Lemma 3.4. Hence ∂ x ψ left satisfies the desired estimate. Now we estimate ∂ x ψ right . First, we write ψ + as using the integral kernel for (∂ xx − γ 2 ) −1 on the real line without specifically taking advantage of odd data. We then have by Young's inequality The second term satisfies the desired estimate by the above argument. For the first term, we have for a fixed η > 0, using the compact support of χ + to absorb the exponential factor, and then using Lemma 2.3 to estimate the remaining term involving ψ + . By Corollary 3.10 and Lemma 3.7, we have Finally, using similar arguments to the estimates on ∂ x (χ + ψ + ), we obtain for γ small with Re γ ≥ 1 2 |Im γ|. Hence ∂ x ψ right satisfies the desired estimate.
The regularity in γ of spatial derivatives is further improved when we measure in L ∞ instead of L 1 .
Proof. Using the far-field/core decomposition to solve for ψ, we again write.
We then define the decomposition as follows. We group the terms χ − ψ − and B − (γ)f̃(γ)χ − e ν + (γ 2 )· in Ψ right d even though they involve terms originating from the neutral mode on the left, because they do not have sufficient regularity in γ to obtain decay at the optimal rate when integrating along a parabolic contour tangent to the imaginary axis. Indeed, f̃(γ) is only Lipschitz in γ, whereas to obtain decay at rate t −1 by integrating over such a contour, one would need expansions to order λ = γ 2 . However, these terms have a compactly supported factor χ − which can absorb the small exponential growth of e ν + (γ 2 )· when γ 2 passes to the left of the essential spectrum of L − ψ but remains small, and hence we can still control these terms in this region and thereby use the contours otherwise reserved for terms originating from the right.
More precisely, by Lemma 3.6, we have for any γ ∈ B(0, δ) with Re γ ≥ 0 for δ sufficiently small. By a similar argument, also using Corollary 3.10 and Lemma 3.7 we obtain The other terms in Ψ right d are bounded in L ∞ for γ ∈ B(0, δ) with Re γ ≥ 1 2 |Im γ| by arguments similar to those in the proof of the preceding proposition. The remaining terms collected in Ψ left d are The first term satisfies the desired expansion by Lemma 3.3. For the second term, we have by Corollary 3.10 and Lemma 3.7 for γ small with γ 2 to the right of the essential spectrum of L − ψ , which completes the proof of the proposition.
Linear estimates
We now translate the regularity of the resolvents of L p and L ψ near their essential spectra into time decay estimates on the semigroups they generate. As elliptic operators with smooth bounded coefficients, L p and L ψ are both sectorial operators on L p (R) for any 1 ≤ p ≤ ∞, and hence they generate analytic semigroups through the inverse Laplace transform, for appropriately chosen contours Γ p , Γ ψ . We take advantage of the estimates on the resolvents obtained in Sections 2 and 3 to deform the integration contours near the essential spectrum and thereby extract temporal decay.
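The contour representation referred to here is the standard Dunford integral for sectorial operators; it is recorded here for reference (a standard formula, not taken verbatim from this paper).

```latex
% Standard representation of the analytic semigroup generated by a sectorial operator L.
\[
  e^{Lt} = \frac{1}{2\pi i} \int_{\Gamma} e^{\lambda t}\, (\lambda - L)^{-1}\, d\lambda,
  \qquad t > 0,
\]
where $\Gamma$ lies in the resolvent set of $L$, passes to the right of the spectrum,
and extends to infinity along two rays in the open left half-plane.
```

Writing λ = γ² matches the parametrization (L p − γ²)^{−1}, (L ψ − γ²)^{−1} used in Sections 2 and 3; deforming Γ toward the essential spectrum is what converts resolvent regularity into temporal decay.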
Large time estimates on e Lpt
First we state the t^{−3/2} decay estimate on e^{L p t}, which follows as in [3,2] from the regularity of the resolvent in Proposition 2.1. In our nonlinear argument, we will need to estimate p(t) in the stronger norm L 1 , and so we also state and prove the following linear decay estimate, (4.4), in this space.

Proof. By Proposition 2.7, for any p 0 ∈ L 1 0,1 (R), we have the resolvent bound for any γ sufficiently small with Re γ ≥ (1/2)|Im γ|. With this estimate in hand, we obtain time decay from a standard argument, choosing Γ p in (4.1) to be a circular arc centered at the origin whose radius scales as t^{−1}, connected to two rays extending out to infinity in the left half-plane; see the right two panels in Figure 2. The above estimate on the rate of blow-up of the resolvent then translates into the desired decay rate; see, for instance, [3, Proposition 7.4].
Large time estimates on e L ψ t
The first linear estimate we prove here states that ψ decays diffusively, with the expected L 1 -L ∞ estimate.
Proposition 4.3.
There exists a constant C > 0 such that for any ψ 0 ∈ L 1 (R), we have for all t > 0 The terms ψ right (γ) and ψ left (γ) remain analytic in γ 2 for γ 2 in the resolvent set of L ψ , so we may separately deform the contours in these two integrals by Cauchy's theorem. For the integral involving ψ left (γ), we first fix ε > 0 small and integrate along the contour in the left most panel in Figure 2. We choose the lengths of the line segments Γ 1,ε left , Γ 2,ε left to be equal to ε. The boundedness of the resolvent from Proposition 3.11 then guarantees that Hence we may send ε → 0 + and thereby write the semigroup as the integral over the limiting contour Γ left = Γ + left ∪ Γ 0 left ∪ Γ − left pictured in the center left panel of Figure 2. We parameterize the parabolic arc tangent to the imaginary axis as for some δ, c > 0 sufficiently small. The contribution from the rays extending out to infinity is exponentially decaying in time, since these are contained strictly in the left half plane, so we focus on estimating the integral over Γ 0 left , for which we have By the control of ψ left in Proposition 3.11, we then have For the term involving ψ right (γ), we integrate over the same contour used in the proof of Proposition 4.2, pictured in the right two panels of Figure 2. Note that for t large, the contour Γ t right passes through the essential spectrum of L ψ ; however, the resolvent is still pointwise analytic in γ in this region by Proposition 3.11, so that the resulting contour integral is well defined and still determines the action of the semigroup e L ψ t [18]. We thereby obtain, using the estimate on ψ right from Proposition 3.11, The restriction to ψ 0 ∈ L 1 (R) ∩ L ∞ (R) may be removed by combining these estimates with the standard small time parabolic regularity estimate for 0 < t < 1; see Lemma 4.6 below. Hence we obtain for all t > 0, as desired.
In the nonlinear argument, we will need to establish sufficiently fast decay of terms of the form f ψ(t) 2 , where f is a function which is exponentially localized on the left. The pure diffusive decay rate t −1/2 of ψ is not sufficient to close our nonlinear argument for such a term. However, we can take advantage of the outward transport as well as the localization of f to obtain a stronger decay estimate, sufficient to close our nonlinear argument, by measuring in a norm which allows some algebraic growth on the left.
Proposition 4.4.
There exists a constant C > 0 such that for any ψ 0 ∈ L 1 0,1 (R), we have the stated estimate for all t > 0.

Proof. We again restrict at first to ψ 0 ∈ L 1 0,1 (R) ∩ L ∞ (R) and use Proposition 3.12 to decompose the resolvent. The integrand in the first integral only depends on γ through e^{γ²t}, and hence is analytic in γ 2 on C, so that this term vanishes. For the second integral, we integrate over the contour Γ left used in the proof of Proposition 4.3 and use Proposition 3.12 to estimate the remainder ψ̃ left (γ) − ψ̃ left (0), gaining an extra factor of t −1/2 in decay from the O(|γ| 2 ) estimate on the remainder. The contributions from the other parts of the contour Γ left are again exponentially decaying in time.
The estimate on the term involving ψ̃ right (γ) follows as in the proof of Proposition 4.3, but we gain an extra factor of t −1/2 since ‖ψ̃ right (γ)‖ L ∞ is bounded rather than blowing up at rate |γ| −1 by Proposition 3.12. The restriction to ψ 0 ∈ L ∞ (R) can again be removed by the small time regularity estimate in Lemma 4.6.
Finally, we establish the following decay estimates on derivatives, equivalent to those for the heat equation.
Proposition 4.5.
There exists a constant C > 0 such that for any ψ 0 ∈ L 1 (R), the estimates (4.8) and (4.9) hold for all t > 0.

Proof. The proof of (4.8) is completely analogous to that of Proposition 4.3, with Proposition 3.13 replacing Proposition 3.11. The proof of (4.9) is similar, using Proposition 3.14 but gaining an extra factor of t −1/2 decay due to the O(|γ| 2 ) estimate on the remainder Ψ left d .
Small time estimates
As L p and L ψ are both elliptic operators with smooth, bounded coefficients, e Lpt and e L ψ t obey the same well known small time regularity estimates; see for instance [22,15]. The estimates in this section thereby hold for L = L p or L ψ . Lemma 4.6. Fix 1 ≤ ≤ ∞. There exists a constant C > 0 such that for all 0 < t < 1, we have and (4.12)
Nonlinear stability -proof of Theorem 1
The system (1.11)-(1.12) for (p, ψ) is locally well posed in L 1 (R) ∩ L ∞ (R) × W 1,∞ (R). By parabolic regularity, a solution with initial data (p 0 , ψ 0 ) small in after a fixed short time. Hence in proving Theorem 1, we may assume without loss of generality that ∂ x p 0 is small in L 1 ∩ L ∞ .
To control the nonlinearity, we will use the fact that by Taylor's theorem, there exists a constant C > 0 such that and We will obtain global control of Θ(t) through the estimate in the following proposition.
Proposition 5.1. There exist positive constants C 1 and C 2 such that the function Θ(t) from (5.2) satisfies We start by rewriting the system (1.11)-(1.12) in mild form via the variation of constants formula, obtaining where and 14) The coefficients in the nonlinear terms gain spatial localization due to the factors involving ω and q * . Precise estimates on this localization effect may be inferred from the front asymptotics [1], To characterize decay rates of nonlinearities throughout this section, we will make frequent use of the following elementary estimate.
Lemma 5.2.
Fix α > 0 and let β > 1, further assuming β ≥ α. There exists a constant C > 0 such that for all t > 1, we have Proof. The integral in question is clearly uniformly bounded for 1 < t < 2, since s is bounded away from t in the region of integration, so there is no singularity in the integrand. We may therefore assume t > 2. When estimating the full nonlinearities, there will of course be a piece of the integral from t − 1 to t, but here we focus on only the part from 0 to t − 1 in order to separate the large time decay from small time regularity estimates.
For t > 2, we note that t − 1 > t/2, and split the integral into the two corresponding regions. In the first region of integration, t − s ∼ t, and so we obtain the first bound. In the second region of integration, s ∼ t, and so we obtain the second bound for t > 2, since β ≥ α. If α = 1, we gain a factor of log t, but since β > 1, this can be absorbed by (1 + t) −β while still retaining decay at rate t −1 . Finally, if α < 1, we obtain the corresponding bound for t > 2, since β > 1, so β + α − 1 > α, completing the proof of the lemma.
We note that if α < 1, then the integral from 0 to t − 1 in Lemma 5.2 may be replaced by an integral from 0 to t, since the singularity in the integrand near s = t is integrable in this case.
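For reference, the convolution-type estimate proved in Lemma 5.2 is, up to constants, of the following standard form; this is a hedged reconstruction based only on the case analysis above.

```latex
% Convolution-type decay estimate of the kind proved in Lemma 5.2
% (alpha > 0, beta > 1, beta >= alpha); a reconstruction, not a verbatim statement.
\[
  \int_{0}^{t-1} (t-s)^{-\alpha}\, (1+s)^{-\beta}\, ds \;\le\; C\, (1+t)^{-\alpha},
  \qquad t > 1.
\]
Splitting at $s = t/2$: on $(0, t/2)$ one has $t - s \sim t$ and
$\int_{0}^{\infty} (1+s)^{-\beta}\, ds < \infty$ since $\beta > 1$; on $(t/2,\, t-1)$ one
has $s \sim t$ and $\int_{t/2}^{t-1} (t-s)^{-\alpha}\, ds \lesssim 1 + t^{\max(1-\alpha,\,0)} \log t$,
which the prefactor $(1+t)^{-\beta}$ absorbs to the overall rate $(1+t)^{-\alpha}$
because $\beta > 1$ and $\beta \ge \alpha$.
```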
We prove Proposition 5.1 by breaking the estimate into several pieces, first establishing control of p(t).
Estimates on p(t)
We begin by estimating p(t) W 1,∞ 0,−1 , taking advantage of the estimates on p and ψ encoded in the definition of Θ. Lemma 5.3 (W 1,∞ 0,−1 estimates on I p,1 (t)). There exists a constant C > 0 such that for all t ∈ (0, T * ) we have Proof. We split the integral in the definition of I p,1 (t) as For the first piece, we have by Proposition 4.1 ( The nonlinear terms have exponentially localized coefficients, so that provided ω −1 q −1 * p(s) L ∞ ≤ 1 2 for all s ∈ (0, t). Hence, since Θ(t) is non-decreasing, we have so that we have the desired estimate for the piece of the integral from 0 to t − 1. The estimate on the integral from t − 1 to t is similar, but we use the small time regularity estimate (4.11) from Lemma 4.6 in place of Proposition 4.1.
For t > 1, we split the integral into two pieces in order to handle small time regularity separately from the decay estimates for large times, writing The integral from t − 1 to t is estimated as in the 0 < t < 1 case above. For the other integral, we have, using Proposition 4.1, By the front asymptotics (5.15), we have |ω −1 q −1 * | ≤ Cρ 0,−1 , so that Hence we obtain by Lemma 5.2 and the fact that Θ(t) is non-decreasing, as desired.
Lemma 5.5 (W 1,∞ 0,−1 estimates on I p,3 (t)). There exists a constant C > 0 such that for all t ∈ (0, T * ), we have (5.20) Proof. We focus on the case t > 1, as the small time estimates can be handled in a similar manner to the proof of Lemma 5.4. We again split the integral into two pieces, writing Again, we focus on the integral from 0 to t − 1, as the other integral involves only small time estimates similar to those in the proof of Lemma 5.4. For this first integral, we have By the front asymptotics (5.15), we have for a fixed η > 0 sufficiently small. In particular Hence we obtain by Lemma 5.2, as desired.
The estimates on I p,4 (t) are strictly easier than those on I p,i (t), i = 2, 3, since the only difference is that the factor of ωq * is replaced by a factor of p(s), which has the same spatial localization but extra temporal decay: that is, we have ωq * L ∞ 0,−1 ≤ C but p(s) W 1,∞ 0,−1 ≤ C(1 + s) −3/2 Θ(s). We thereby obtain the following lemma. Lemma 5.6 (W 1,∞ 0,−1 estimates on I p,4 (t)). There exists a constant C > 0 such that for all t ∈ (0, T * ), we have Proposition 5.7 (W 1,∞ 0,−1 estimates on p(t)). There exist positive constants C 1 and C 2 such that Proof. Having already handled all the nonlinear terms in Lemmas 5.3 through 5.6, it only remains to estimate the term e Lpt p 0 in the variation of constants formula (5.6). For 0 < t < 1, we have by Lemma 4.6 ( For t > 1, we instead use Proposition 4.1 to estimate so that for all t > 0 we have as desired. We now establish control of p(t) in W 1,1 , which is in turn used in estimating several terms in the nonlinearity.
Proposition 5.8 (W 1,1 estimates on p(t)). There exist positive constants C 1 and C 2 such that for all t ∈ (0, T * ), we have Proof. The proof is exactly the same as that of Proposition 5.7, but with Proposition 4.2 replacing the linear estimate Proposition 4.1.
Estimates on ψ(t)
We first prove estimates on ψ(t) L ∞ −1,0 , since in light of Proposition 4.4, these require measuring the nonlinearities in L 1 0,1 , and so these estimates are strictly harder than estimates on ψ(t) L ∞ , which only require measuring the nonlinearities in the weaker L 1 norm. Lemma 5.9 (L ∞ 0,−1 estimates on I ψ,1 (t)). There exists a constant C > 0 such that for all t ∈ (0, T * ), we have Proof. We again split the integral in I ψ,1 (t) as Throughout this section, we will focus on estimating the integral from 0 to t − 1. The estimates on the other integral are similar, but we replace Proposition 4.4 with Lemma 4.6 to guarantee integrability near t = s. Of course if t < 1, then we only write one integral from 0 to t, and use the small time estimates of Lemma 4.6 in this single integral. Hence for the remainder of this section we will assume t > 1. Using Proposition 4.4, we estimate Taylor expanding the nonlinearity and using that |ω −1 q −1 * | ≤ Cρ 0,−1 and q * /q * is bounded, we have Estimating the integral from t − 1 to t similarly but using the small time estimates from Lemma 4.6, we obtain by Lemma 5.2 and an analogous argument for the second integral, provided ω −1 q −1 * p(s) L ∞ ≤ 1 2 for all s ∈ (0, t), as desired.
Proof. Having estimated the nonlinear terms in the preceding three lemmas, it only remains to estimate the term e L ψ t ψ 0 in the variation of constants formula (5.7). For 0 < t < 1, we have by By similar arguments, we obtain the following control of ψ(t) in L ∞ .
Proof. The proof is exactly the same as that of Proposition 5.12 but with the estimate e L ψ t L 1 →L ∞ ≤ Ct −1/2 of Proposition 4.3 replacing Proposition 4.4. The control of the nonlinearities in the stronger norm L 1 0,1 obtained in the proof of Proposition 5.12 is sufficient to close the argument.
It only remains to estimate ψ x (t) in the spaces encoded in the definition of Θ(t), (5.2). To do this, we differentiate the variation of constants formula (5.7), obtaining ψ x (t) = ∂ x (e L ψ t ψ 0 ) + ∂ x I ψ,1 (t) + ∂ x I ψ,2 (t) + ∂ x I ψ,3 (t). (5.31) Notice that the linear estimates on ∂ x e L ψ t all measure the initial data in the L 1 norm. Hence the nonlinear estimates on derivatives are again strictly easier than those obtained in the proof of Proposition 5.12, which requires measuring the nonlinearities in the stronger norm L 1 0,1 , and so we readily obtain the following nonlinear estimates on derivatives.
Proof. We differentiate the variation of constants formula (5.7) twice, use the estimate from Lemma 4.6, and estimate the nonlinear terms in a similar manner to the proof of Proposition 5.13 for small times.
The desired control of Θ(t), Proposition 5.1, follows readily from the control of the individual terms from Propositions 5.7, 5.12, 5.13, and 5.14 together with Lemma 5.15.
Reverting to (r, ϕ) coordinates and using the fact that there exist constants c, C > 0 so that cρ 0,−1 ≤ |ω −1 q −1 * | ≤ Cρ 0,−1 , we find that these estimates are equivalent to those stated in Theorem 1, and the smallness of Ω 0 translates to the smallness condition on the initial data in the statement there. We note that from the control of Θ(t) we further obtain detailed estimates on p and ψ in stronger norms, as well as decay estimates on derivatives. | 2021-03-22T01:16:21.884Z | 2021-03-18T00:00:00.000 | {
"year": 2021,
"sha1": "32835920d34bc71439bd18f0e83415b4ba422f5d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2103.10458",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "32835920d34bc71439bd18f0e83415b4ba422f5d",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
16594771 | pes2o/s2orc | v3-fos-license | Differences in Arithmetic Performance between Chinese and German Children Are Accompanied by Differences in Processing of Symbolic Numerical Magnitude
Symbolic numerical magnitude processing skills are assumed to be fundamental to arithmetic learning. It is, however, still an open question whether better arithmetic skills are reflected in symbolic numerical magnitude processing skills. To address this issue, Chinese and German third graders were compared regarding their performance in arithmetic tasks and in a symbolic numerical magnitude comparison task. Chinese children performed better in the arithmetic tasks and were faster in deciding which one of two Arabic numbers was numerically larger. The group difference in symbolic numerical magnitude processing was fully mediated by the performance in arithmetic tasks. We assume that a higher degree of familiarity with arithmetic in Chinese compared to German children leads to a higher speed of retrieving symbolic numerical magnitude knowledge.
INTRODUCTION
According to the recently proposed "integrative theory of numerical development", numerical magnitude processing skills are at the core of numerical development and individual differences regarding these skills are assumed to be related to individual differences in arithmetic proficiency and math performance (Siegler and Lortie-Forgues, 2014). Numerical magnitude processing skills are typically assessed using magnitude comparison tasks. While non-symbolic numerical magnitude comparison tasks usually involve the comparison of two dot arrays, symbolic numerical magnitude comparison tasks involve the comparison of two Arabic digits. In either case, task difficulty is manipulated by varying the numerical distance between the stimuli to be compared. Task performance typically decreases in line with a decrease in numerical distance (e.g., Moyer and Landauer, 1967;Van Oeffelen and Vos, 1982).
Recent meta-analyses revealed a significant association between non-symbolic numerical magnitude processing skills and math performance (Chen and Li, 2014;Fazio et al., 2014) as well as between symbolic numerical magnitude processing skills and math performance (Schneider et al., 2015). It could be demonstrated that the association between non-symbolic numerical magnitude processing skills and math performance cannot entirely be attributed to general non-numerical cognitive abilities (e.g., Chen and Li, 2014). Moreover, based on the findings from longitudinal studies, Chen and Li (2014) report that while non-symbolic numerical magnitude processing skills prospectively predict later math performance, they can also be retrospectively predicted by earlier math performance. On the one hand, these findings are in line with the notion that nonsymbolic numerical magnitude processing skills are fundamental to the development of mathematical skills, on the other hand they suggest that mathematical skills shape non-symbolic numerical magnitude processing skills. Schneider et al. (2015) compared the strength of the association between non-symbolic numerical magnitude processing skills and math performance with the strength of the association between symbolic numerical magnitude processing skills and math performance. The effect size was significantly higher for symbolic than for non-symbolic numerical magnitude processing skills. Longitudinal studies indicate that symbolic numerical magnitude processing skills are predictively related to mathematical skills (e.g., De Smedt et al., 2009;Vanbinst et al., 2015). This association cannot be explained by individual differences in children's preschool mathematical abilities, intellectual abilities, processing speed, and verbal as well as visual-spatial short-term memory skills (Vanbinst et al., 2015). To our knowledge, however, the question of whether symbolic numerical magnitude processing skills may also be shaped by mathematical skills has not yet been examined. The association between symbolic numerical magnitude processing skills and more complex mathematical skills such as mental arithmetic is most consistently found for overall average reaction time in symbolic numerical magnitude comparison tasks, suggesting that children's familiarity and fluency in manipulating symbolic numbers serves as the crucial link (Lyons et al., 2015). In addition, arithmetic problem solving is supposed to involve the retrieval of numerical magnitude knowledge (e.g., Siegler and Lortie-Forgues, 2014;Schneider et al., 2015), and thus a higher familiarity and fluency with arithmetic can be assumed to induce a higher familiarity and fluency in symbolic numerical magnitude processing.
Cross-national assessments of mathematical achievement have repeatedly demonstrated that Chinese children outperform their non-Chinese peers at various ages (e.g., Lin, 2009, 2013;Mullis et al., 2012;OECD, 2013). Hence, if a higher familiarity and fluency with arithmetic is reflected in a higher familiarity and fluency in symbolic numerical magnitude processing, a superior Chinese performance should not only exist for arithmetic skills but also for symbolic numerical magnitude processing skills. Recently, Rodic et al. (2014) compared 5 to 7-year old children from China, Kyrgyzstan, Russia, and the UK with regard to simple arithmetic tasks and different precursor skills assumed to be related to the development of arithmetic skills (i.e., non-symbolic numerical magnitude comparison, dot enumeration, number naming, and symbolic numerical magnitude comparison). In line with previous findings, Chinese children significantly outperformed all other groups in the arithmetic tasks. The superior arithmetic performance of Chinese children was, however, not (exactly) mirrored in the precursor skills. Russian children, for example, did not perform significantly worse than Chinese children in any of these measures. Nevertheless, only the understanding of symbolic number explained variation in arithmetic performance in all samples and was therefore regarded as the most important predictor of individual differences in early arithmetic by the authors (see Rodic et al., 2014). Conversely, a potential influence of arithmetic skills on the understanding of symbolic number was not addressed by Rodic et al. (2014). Indeed, the influence of symbolic number processing skills on arithmetic skills might be higher than the opposite direction of influence in the relevant age group because the children were only beginning to develop arithmetic skills.
To further explore the association between arithmetic skills and symbolic number magnitude processing skills, we tested Chinese and German third graders. In their 3rd year of elementary school, children typically possess basic arithmetic skills. If better arithmetic skills are reflected in symbolic numerical magnitude processing skills, a superior Chinese performance should not only exist for arithmetic skills but also for symbolic numerical magnitude processing skills. Moreover, if arithmetic skills shape symbolic number magnitude processing skills, a performance difference between Chinese and German children in symbolic numerical magnitude processing should be mediated by arithmetic skills. As a performance difference between Chinese and German children in the arithmetic tasks as well as in the symbolic numerical magnitude comparison task might be due to the fact that Chinese number words can be verbalized more quickly than German number words (e.g., Lüer et al., 1998), we also measured children's performance in a task assessing speed of number pronunciation and included it as a control measure.
Participants
The German sample consisted of 33 third graders (18 female, mean age 9.1, range 8-10 years) recruited from a public primary school in Mühlheim am Main (Germany). The Chinese sample was the one described by Lonnemann et al. (2011): participants were 33 (18 female) Chinese third graders (mean age 9.3, range 8-10 years) recruited from a public primary school in Shanghai (China). One Chinese child was excluded from further analysis because this child exhibited extreme scores in the addition task as well as in the symbolic numerical comparison task (addition z-score: −3.02, RT symbolic numerical magnitude comparison z-score: −1.75). Written and informed consent was obtained from the parents of all participating children.
Tasks
All children started with the symbolic numerical magnitude comparison task, then proceeded to the arithmetic tasks, and finally worked on the task assessing speed of number pronunciation. All tasks were carried out individually.
Symbolic Numerical Magnitude Comparison
In the symbolic numerical magnitude comparison task, two single-digit Arabic numbers were presented on a screen. The two stimuli were arranged in a horizontal fashion. Children had to indicate the side with the larger numerical magnitude by using the left index finger when it was larger on the left-hand side and by using the right index finger when it was larger on the right-hand side. Responses were given by pressing the 'S' and 'L' keys on a notebook keyboard. Comparison pairs varied along four numerical distances (see Table 1). Each of the 12 comparison pairs was presented eight times, four times with the larger number on the left-hand side and four times with the larger number on the right-hand side. Reaction times (RT) and error rate (ER) were recorded, and the instruction stressed both speed and accuracy. The trials were pseudo-randomized so that there were no consecutive identical comparison pairs and numerical distance was not identical on more than three consecutive trials. The experiment was preceded by six warm-up trials to familiarize participants with the task (data not recorded), and was presented on a notebook with Presentation® software (Neurobehavioral Systems, Inc.). Black-colored Arabic digits were presented in Times 60-point font on a 17″ color screen against a white background. A target stimulus was presented until the response was given, but only up to a maximum duration of 4000 milliseconds (ms), and was followed by a black screen for 700 ms. If no response was given, a trial was classified as erroneous. Correct responses were used for computing mean RT. Response times below 200 ms were excluded from further analysis, as were responses outside an interval of ±3 standard deviations around the individual mean. Trimming resulted in 1.5% of response exclusions for Chinese participants and in 1.3% of response exclusions for German participants. A reciprocal transformation (dividing 1 by each score) was carried out on mean RT to yield more normally distributed data (the Shapiro-Wilk test revealed that the distribution was not significantly different from a normal distribution after transformation; for Chinese participants p = 0.19; for German participants p = 0.25). To estimate the reliability of the symbolic numerical magnitude comparison task, the Pearson correlation coefficient between reciprocal RT in odd and even trials was computed separately for Chinese and German participants (Chinese participants: r = 0.97; German participants: r = 0.97).
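The preprocessing and reliability computations described above could be sketched as follows; this is an illustration only (the software actually used for these steps is not specified here, and all array names are hypothetical).

```python
# Illustrative sketch: RT trimming, reciprocal transform, normality check, and
# odd/even split-half reliability for the comparison task. Hypothetical inputs.
import numpy as np
from scipy import stats

def trimmed_mean_rt(rt_ms):
    """Drop RTs < 200 ms and RTs outside +/- 3 SD of the child's mean; return the mean."""
    rt = np.asarray(rt_ms, dtype=float)
    rt = rt[rt >= 200]
    m, sd = rt.mean(), rt.std(ddof=1)
    return rt[np.abs(rt - m) <= 3 * sd].mean()

def shapiro_on_reciprocal(mean_rt_per_child):
    """Shapiro-Wilk test on reciprocal-transformed mean RTs (one value per child)."""
    return stats.shapiro(1.0 / np.asarray(mean_rt_per_child, dtype=float))

def split_half_reliability(odd_trial_means, even_trial_means):
    """Pearson r between per-child reciprocal mean RTs on odd vs. even trials."""
    r, _ = stats.pearsonr(1.0 / np.asarray(odd_trial_means, dtype=float),
                          1.0 / np.asarray(even_trial_means, dtype=float))
    return r
```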
Arithmetic
The arithmetic tasks consisted of nine blocks of ten problems each (see Lonnemann et al., 2008, 2011); five blocks were addition problems and four blocks were subtraction problems. The addition problems were divided into two blocks in which a single-digit number had to be added to a two-digit number (e.g., 82 + 5), with only one of these blocks requiring carrying (e.g., 43 + 9). Moreover, three blocks contained addition problems in which two two-digit numbers had to be added (e.g., 24 + 65). In only one of these latter blocks, one of the addends was a decade number (e.g., 68 + 30). Among the remaining two blocks without decade numbers, again, only one block required carrying (e.g., 13 + 88). The subtraction problems were structured in a similar way: there were two blocks in which a single-digit number had to be subtracted from a two-digit number (e.g., 25 − 3) and two blocks which required subtraction of a two-digit number from another (e.g., 76 − 23). In both cases, only one block required borrowing (e.g., 54 − 7 or 82 − 45). Children were asked to write down as many solutions as possible in 30 s per block. Total scores ranging from 0 to 90 were used to estimate arithmetic skills. To estimate the reliability of the arithmetic tasks, Cronbach's alpha was computed separately for Chinese and German participants (Chinese participants: Cronbach's α = 0.69; German participants: Cronbach's α = 0.91).
Speed of Number Pronunciation
Children received two sheets of paper, each listing 60 Arabic digits. Stimuli were arranged in six rows of ten items and presented in Times New Roman 48-point font. Children were instructed to correctly name the items as quickly as possible and to proceed from left to right, starting at the top row and continuing to the bottom row. The first sheet contained the numbers 1-3 and the second one the numbers 4-6 with no consecutive identical stimuli. Response time was measured using a stopwatch from a start signal until the child named the last stimulus. The mean response time of both sheets was used to estimate speed of number pronunciation. To yield more normally distributed data, a reciprocal transformation (dividing 1 by each score) was carried out on mean response time (the Shapiro-Wilk test revealed that the distribution was not significantly different from a normal distribution after transformation, for Chinese participants p = 0.26; for German participants p = 0.28). To estimate the reliability of the speed of number pronunciation tasks, the Pearson correlation coefficient between the response times of both sheets was computed separately for Chinese and German participants (Chinese participants: r = 0.86; German participants: r = 0.84).
Analyses
By using two-sample t-tests, Chinese and German children were compared with regard to reciprocal RT in the symbolic numerical magnitude comparison task, arithmetic skills, and reciprocal speed of number pronunciation. The Kolmogorov-Smirnov-Z test was used to compare age and ER in the symbolic numerical magnitude comparison task because the assumption of normality was violated for these variables.
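These group comparisons could be reproduced along the following lines (an illustration only; array names are hypothetical, and the SPSS Kolmogorov-Smirnov-Z test is approximated here by the two-sample KS test).

```python
# Illustrative between-group comparisons for one measure per call.
import numpy as np
from scipy import stats

def compare_groups(chinese_scores, german_scores, assume_normal=True):
    """Return (statistic, p-value) for a Chinese vs. German comparison."""
    a = np.asarray(chinese_scores, dtype=float)
    b = np.asarray(german_scores, dtype=float)
    if assume_normal:
        return stats.ttest_ind(a, b)   # e.g. reciprocal RT, arithmetic total score
    return stats.ks_2samp(a, b)        # e.g. age, error rate (non-normal measures)
```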
To assess effects of the distance between the two to-be-compared Arabic digits in the symbolic numerical magnitude comparison task, we looked for linear trends based on reciprocal RT separately for Chinese and German participants. ER was low in the symbolic numerical magnitude comparison task and did not significantly differ between groups (see Table 2), so it was not further analyzed.
In order to test whether a possible performance difference between Chinese and German children in the symbolic numerical magnitude comparison task was mediated by arithmetic skills, we used mediation analyses. On the one hand, mediation analysis allows to investigate direct associations, used in this study to examine the relation between the factor group (Chinese vs. German) and individual performance in the symbolic numerical magnitude comparison task, while holding constant the performance in the arithmetic tasks. On the other hand, mediation analysis provides estimates of the statistical significance of indirect associations, used in this study to evaluate whether arithmetic skills mediate the association between the factor group and symbolic numerical magnitude processing skills. A second mediation model was tested to check the opposite direction of influence, i.e., to examine whether a possible performance difference between Chinese and German participants in the arithmetic tasks was mediated by the performance in the symbolic numerical magnitude comparison task. The mediation models were tested using the INDIRECT macro in SPSS (Preacher and Hayes, 2008). This macro uses the bootstrapping method with bias-corrected confidence estimates. Confidence intervals (95%) for the indirect associations were obtained using 5000 bootstrap samples. If a confidence interval does not include zero, the indirect effect is deemed statistically different from zero representing evidence for a mediating effect (Hayes and Preacher, 2014). Reciprocal speed of number pronunciation was used as control variable in the mediation analyses.
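A minimal sketch of such a bootstrapped indirect effect is given below, using percentile confidence intervals rather than the bias-corrected estimates of the SPSS INDIRECT macro; all variable names are hypothetical and inputs are assumed to be equal-length NumPy arrays.

```python
# Minimal bootstrap mediation sketch: group -> mediator -> outcome, with a covariate
# entered in both regressions. Percentile CI only (no bias correction).
import numpy as np

def indirect_effect(group, mediator, outcome, covariate):
    """a*b: a from mediator ~ group + covariate, b from outcome ~ group + mediator + covariate."""
    Xa = np.column_stack([np.ones(len(group)), group, covariate])
    a = np.linalg.lstsq(Xa, mediator, rcond=None)[0][1]
    Xb = np.column_stack([np.ones(len(group)), group, mediator, covariate])
    b = np.linalg.lstsq(Xb, outcome, rcond=None)[0][2]
    return a * b

def bootstrap_ci(group, mediator, outcome, covariate, n_boot=5000, seed=0):
    """95% percentile CI for the indirect effect; a CI excluding 0 indicates mediation."""
    rng = np.random.default_rng(seed)
    n = len(group)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample participants with replacement
        est[i] = indirect_effect(group[idx], mediator[idx], outcome[idx], covariate[idx])
    return np.percentile(est, [2.5, 97.5])
```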
Moreover, Pearson correlation coefficients (before and after correction for attenuation) were employed to verify associations between arithmetic skills and reciprocal RT in the symbolic numerical magnitude comparison task as well as between arithmetic skills and reciprocal speed of number pronunciation, separately in both groups. The respective correlation coefficients of both groups were compared directly using the Fisher r-to-z transformation. All effects were tested using a significance level of α = 0.05.
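The Fisher r-to-z comparison of two independent correlation coefficients follows the standard formula; a small sketch (the numbers in the usage comment are purely illustrative):

```python
# Fisher r-to-z test for comparing two independent correlations.
import numpy as np
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Two-tailed p-value for H0: the two population correlations are equal."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher transformation
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of the difference
    return 2 * stats.norm.sf(abs((z1 - z2) / se))

# Usage (illustrative values only): compare_correlations(-0.30, 32, -0.55, 33)
```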
Reaction times in the symbolic numerical magnitude comparison task increased as the numerical distance between the two to-be-compared Arabic digits decreased: significant linear trends were found for Chinese [F(1,31) = 77.51, p < 0.001, ηp² = 0.71] and for German children [F(1,32) = 104.34, p < 0.001, ηp² = 0.77; see Figure 1A]. The first mediation model revealed that the group difference in reciprocal RT in the symbolic numerical magnitude comparison task was no longer significant after controlling for arithmetic skills [direct effect = 0.0000, t(63) = 0.36, p = 0.72] and that it was significantly mediated by the performance in the arithmetic tasks (indirect effect = 0.0002; confidence interval = 0.0001 to 0.0005; see Figure 2). Speed of number pronunciation had no significant partial effect on reciprocal RT in the symbolic numerical magnitude comparison task [t(63) = 1.03, p = 0.31]. The second mediation model showed that the group difference in arithmetic performance was still significant after controlling for reciprocal RT in the symbolic numerical magnitude comparison task [direct effect = 31, t(63) = 10.08, p < 0.001]. However, the group difference in arithmetic performance was significantly mediated by reciprocal RT in the symbolic numerical magnitude comparison task (indirect effect = 3; confidence interval = 0.45-6.31; see Figure 2). Moreover, speed of number pronunciation had a significant partial effect on arithmetic performance.
DISCUSSION
We compared Chinese and German third graders regarding their performance in arithmetic tasks and in a symbolic numerical magnitude comparison task. Chinese children showed better performance in the arithmetic tasks, corresponding to previous findings (e.g., Lin, 2009, 2013;Mullis et al., 2012;OECD, 2013). This superior arithmetic performance of Chinese children was accompanied by a better performance of Chinese children in the symbolic numerical magnitude comparison task: Chinese children were overall faster in comparing two singledigit Arabic numbers with respect to their numerical magnitude without making more errors than German children. Thus, Chinese third graders not only showed a higher fluency in solving arithmetic tasks but were also able to compare Arabic digits at a faster pace than their German peers.
Mediation analysis revealed that the group difference in symbolic numerical magnitude processing was fully mediated by the performance in the arithmetic tasks. After controlling for arithmetic performance, the difference between Chinese and German children's performance in the symbolic numerical magnitude comparison task was no longer significant. The difference between Chinese and German children in arithmetic was partially mediated by symbolic numerical magnitude processing skills. Indeed, the group difference in arithmetic performance was significantly mediated by the performance in the symbolic numerical magnitude comparison task but it was still significant after controlling for the performance in the symbolic numerical magnitude comparison task. Hence, while the group difference in arithmetic performance was only partially mediated by symbolic numerical magnitude processing skills, the group difference in symbolic numerical magnitude processing was fully mediated by the performance in the arithmetic tasks. The influence of arithmetic skills on symbolic numerical magnitude processing skills accordingly seems to be higher than the opposite direction of influence, at least in children who have already developed basic arithmetic skills. These findings might be seen as evidence for the notion that arithmetic skills shape symbolic numerical magnitude processing skills. Based on the assumptions that (a) children's familiarity and fluency of manipulating symbolic numbers serves as the crucial link between symbolic numerical magnitude processing and arithmetic skills (Lyons et al., 2015), and (b) arithmetic problem solving involves the retrieval of numerical magnitude knowledge (e.g., Siegler and Lortie-Forgues, 2014;Schneider et al., 2015), we assume that a higher familiarity and fluency with arithmetic in Chinese compared to German children, most likely caused by a higher frequency of exposure to arithmetic (see e.g., Geary, 1996), leads to a higher speed of retrieving symbolic numerical magnitude knowledge. In addition, symbolic numerical magnitude processing skills seem to play a role in explaining the performance difference between Chinese and German third graders in arithmetic tasks. However, our findings suggest that other factors are also at play. Indeed, the speed of number pronunciation was found to be higher in Chinese children, most likely caused by the short length of Chinese number words, and it had a significant effect on arithmetic performance. This is in line with previous studies showing that the so-called "rapid automatized naming" (RAN) of Arabic digits or "number naming speed" is significantly correlated with arithmetic skills (e.g., Krajewski and Schneider, 2009). Moreover, besides educational practical factors like frequency of exposure to arithmetic, other possible explanations for the performance difference between Chinese and German children in arithmetic tasks might be found in the structure of number naming systems, cultural beliefs and values as well as parental involvement (e.g., Ng and Rao, 2010).
In accordance with previous findings, RT in the symbolic numerical magnitude comparison task correlated with arithmetic skills in German children (see e.g., Schneider et al., 2015). For Chinese children, by contrast, only a marginal correlation between RT in the symbolic numerical magnitude comparison task and arithmetic skills was found. A possible reason for these divergent findings might be that the between-subject variation in arithmetic performance was lower among Chinese participants (see Table 2; Figures 1B,C). However, comparing the respective correlation coefficients of both groups did not reveal any significant differences so that we should not assume any substantial between-group differences in the association between RT in the symbolic numerical magnitude comparison task and arithmetic performance.
It is important to note that the cross-sectional design of the current study does not offer means of assessing cause. Based on the different results of the two mediation models, we assume that a higher degree of familiarity and fluency with arithmetic in Chinese compared to German third graders causes a higher speed of retrieving symbolic numerical magnitude knowledge. To substantiate this notion, however, longitudinal studies are needed. The assessment of both the development of symbolic numerical magnitude processing skills and the development of arithmetic skills in Chinese and German children over time would lead to a better understanding of the interrelationship between these skills. Moreover, it would be possible to examine whether the direction of influence changes in the course of development and determine to what extent the developmental trajectories are culture-specific.
Another limitation of our study is that the two groups under study might have differed with respect to other factors that may account for the group differences in symbolic numerical magnitude processing and in arithmetic skills, but were not assessed in this study. For example, general cognitive abilities of Chinese and German children were not assessed. Instead of controlling for general cognitive abilities, we used a domainspecific control task, allowing us to rule out that our findings can be explained by between-group differences in the speed of number pronunciation. It can, however, not be ruled out that our findings are due to between-group differences in general intellectual abilities. Nonetheless, findings from previous studies do not support this notion but demonstrated that proficiency in comparing symbolic numbers is not related to children's intellectual abilities (De Smedt et al., 2009;Vanbinst et al., 2012Vanbinst et al., , 2015. Furthermore, findings from various studies suggest that the relationship between symbolic numerical magnitude processing and mathematics achievement cannot be explained by recourse to intelligence: first, the relationship was detected in typically developing children after controlling for intelligence (De Smedt et al., 2009;Vanbinst et al., 2012;Linsen et al., 2014). Second, compared to their typically developing peers, children with genetic syndromes that are associated with belowaverage intellectual and mathematical abilities are impaired in symbolic numerical magnitude processing after controlling for intelligence (22q11 deletion syndrome: De Smedt et al., 2007;Simon et al., 2008;Williams syndrome: O'Hearn and Landau, 2007). Third, children with developmental dyscalculia are impaired in symbolic numerical magnitude processing compared to their typically developing peers, after matching the groups on intelligence (Landerl et al., 2004;Ashkenazi et al., 2009;Mussolin et al., 2010;Desoete et al., 2012). Finally, it was demonstrated recently that children with mathematical difficulties and average intellectual abilities as well as children with mathematical difficulties and below-average intellectual abilities show similar impairments in symbolic numerical magnitude processing compared to controls. Furthermore, the difference on the symbolic numerical magnitude comparison task between children with mathematical difficulties and controls could not be explained by individual differences in working memory or general response speed (Brankaer et al., 2014). These findings, taken together, suggest that the present findings cannot be attributed to between-group differences in general cognitive abilities.
To conclude, results from our study revealed that differences in arithmetic performance between Chinese and German children are accompanied by differences in processing of symbolic numerical magnitude. Chinese third graders did not only show a higher fluency in solving arithmetic tasks but were also able to process symbolic numerical magnitude information at a faster pace than their German peers. The group difference in symbolic numerical magnitude processing was fully mediated by the performance in arithmetic tasks, suggesting that arithmetic skills shape symbolic numerical magnitude processing skills. We assume that a higher frequency of exposure to arithmetic leads to a higher degree of familiarity with arithmetic in Chinese compared to German children, in turn leading to a higher speed of retrieving symbolic numerical magnitude knowledge.
AUTHOR CONTRIBUTIONS
JL, JL, MH, and SL substantially contributed to the conception and design of the work, the acquisition, analysis, and interpretation of data for the work. JL, JL, MH, and SL substantially contributed to drafting the work and revising it critically for important intellectual content. JL, JL, MH, and SL substantially contributed to final approval of the version to be published. JL, JL, MH, and SL agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. | 2017-05-05T09:01:10.077Z | 2016-08-31T00:00:00.000 | {
"year": 2016,
"sha1": "c619d9689303b0d5ecf7c8c3df97ddb5aefde9d9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01337/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c619d9689303b0d5ecf7c8c3df97ddb5aefde9d9",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Technical Report: All Principal Congruence Link Groups
This is a technical report accompanying the paper "All Principal Congruence Link Groups" (arXiv:1802.01275) by the same authors, which classifies all principal congruence link complements in S^3. It provides a complete overview of all cases (d, I) that had to be considered and describes the necessary computations and computer programs written for the classification result.
1. Introduction

We follow the notation of [BGR18]; in particular, let $d > 0$ be a square-free integer and $I \subset \mathcal{O}_d$ be an ideal in the ring of integers $\mathcal{O}_d$ of $\mathbb{Q}(\sqrt{-d})$. The goal of this report is to give a complete proof of the following theorem stated in [BGR18]:

Theorem 1.1. The following list of 48 pairs $(d, I)$ describes all principal congruence subgroups $\Gamma(I) < \mathrm{PSL}(2, \mathcal{O}_d)$ such that $\mathbb{H}^3/\Gamma(I)$ is a link complement in $S^3$:
(1) $d = 1$: $I = \langle 2 \rangle, \langle 2 \pm i \rangle, \langle (1 \pm i)^3 \rangle, \langle 3 \rangle, \langle 3 \pm i \rangle, \langle 3 \pm 2i \rangle, \langle 4 \pm i \rangle$.
(2) $d = 2$: $I = \langle 1 \pm \sqrt{-2} \rangle, \langle 2 \rangle, \langle 2 \pm \sqrt{-2} \rangle, \langle 1 \pm 2\sqrt{-2} \rangle, \langle 3 \pm \sqrt{-2} \rangle$.

Recall that Theorems 2.1 and 2.2 in the section "Preliminaries and techniques" of [BGR18] reduce the proof to finitely many cases $(d, I)$ in which we need to decide whether $M = \mathbb{H}^3/\Gamma(I)$ is a link complement. This report is a self-contained treatise of all of these cases documenting the necessary data, computer programs, and computations. We have compiled a separate document [BGR19] containing link diagrams for all cases where a diagram is known and for which the complement in $S^3$ is a principal congruence manifold.
The first and third authors, on the one hand, and the second author, on the other, independently developed methods to decide whether $M = \mathbb{H}^3/\Gamma(I)$ is a link complement. To reflect this, the report is split into a preamble and two parts, each illustrating the method by one set of authors. The preamble contains the three special cases $(1, \langle 4 + 3\sqrt{-1} \rangle)$, $(2, \langle 1 + 3\sqrt{-2} \rangle)$, and (3, 11+ ), where $M$ is homologically but not topologically a link complement. All other cases left by the aforementioned theorems in [BGR18] are each covered by both methods.
Thus, a complete proof of the classification result can be obtained in two ways, namely, by combining Section 3 with either Part 1 or Part 2 of this report.
Of the three special cases, $(2, \langle 1 + 3\sqrt{-2} \rangle)$ was particularly hard and required finding an automatic group structure using the program Monoid Automata Factory (MAF) [Wil17]. The other two special cases had already been addressed in [Gör15]. In that paper, the principal congruence link complements of discriminant $d = 1$ and $d = 3$ were classified using a combinatorial argument which, unlike the proof of Theorem 2.2 in [BGR18], does not rely on the 6-Theorem or a bound on the systole.
The methods in Part 1 and Part 2 both start with information about the Bianchi groups $\mathrm{PSL}(2, \mathcal{O}_d)$ in question. For Part 1, this is in the form of a triangulated Dirichlet domain of $\mathrm{PSL}(2, \mathcal{O}_d)$ together with the face-pairing matrices, see Section 8. The second author computed these using SageMath [Sag18] and they are available as Regina files, see Section 8.1 for details. For Part 2, this is in the form of presentations of $\mathrm{PSL}(2, \mathcal{O}_d)$ given by Swan and Page which we also list in Section 12. That section also includes the peripheral subgroups of $\mathrm{PSL}(2, \mathcal{O}_d)$ which were not given by Page and had to be derived manually by the first and third authors.
Part 1 uses the fundamental domain of $\mathrm{PSL}(2, \mathcal{O}_d)$ to build a triangulation of the principal congruence manifold $M = \mathbb{H}^3/\Gamma(I)$ which can be examined by the 3-manifold software SnapPy [CDGW18] to determine whether $M$ is a link complement. The results are shown in diagrammatic form in Section 5. Part 2 uses the presentation of $\mathrm{PSL}(2, \mathcal{O}_d)$ to obtain a presentation of the quotient group $B(I)$ of $\mathrm{PSL}(2, \mathcal{O}_d)$ by the subgroup $N(I)$ generated by the parabolic elements of $\Gamma(I)$. Using the computer algebra system Magma [BCP97], $B(I)$ and $N(I)$ can be examined, the first step being to decide whether $|B(I)| = |\mathrm{PSL}(2, \mathcal{O}_d/I)|$, which is a necessary condition for $M = \mathbb{H}^3/\Gamma(I)$ to be a link complement (see Section 2). The results are tabulated in Section 14.
Both parts rely on Perelman's resolution of the Poincaré conjecture to show that $M = \mathbb{H}^3/\Gamma(I)$ is a link complement in $S^3$ by finding a peripheral curve for each cusp of $M$ such that the curves kill $\pi_1(M)$. If $|B(I)| = |\mathrm{PSL}(2, \mathcal{O}_d/I)|$, this is equivalent to finding a parabolic element in $N(I)$ for each cusp of $M$ such that the elements kill $N(I)$, since $N(I)$ is isomorphic to $\Gamma(I) \cong \pi_1(M)$.
There is a large difference in the performance of the two methods and the first and third authors needed far less computer time than the second author to arrive at the classification result. The method of the second author requires at least $O(|\mathrm{PSL}(2, \mathcal{O}_d/I)|)$ time to construct the triangulation and the difference in performance becomes more pronounced when $|\mathrm{PSL}(2, \mathcal{O}_d/I)|$ is large. In these cases, we also observed that computing $N(I)$ with Magma's NormalClosure can take much more time than the computations with $B(I)$ to prove it large enough. We are very grateful to the people who helped us with this work and refer the reader to [BGR18] for detailed acknowledgments.
Besides the principal congruence group $\Gamma(I) = \ker(\pi)$, we will also consider another congruence group $\Gamma_1(I) = \pi^{-1}(P)$ where $P$ are the upper unit-triangular matrices in $\mathrm{PSL}(2, \mathcal{O}_d/I)$ and $\pi: \mathrm{PSL}(2, \mathcal{O}_d) \to \mathrm{PSL}(2, \mathcal{O}_d/I)$ is the quotient map. For a cusp $x$, let $P_x$ be the corresponding peripheral subgroup of $\mathrm{PSL}(2, \mathcal{O}_d)$ and $P_x(I) = P_x \cap \Gamma(I)$. Let $N(I)$ be the normal subgroup of $\mathrm{PSL}(2, \mathcal{O}_d)$ obtained as the normal closure of all $P_x(I)$. Let $B(I) = \mathrm{PSL}(2, \mathcal{O}_d)/N(I)$. Note that $N(I) \subset \Gamma(I)$, so there is an epimorphism $B(I) \to \mathrm{PSL}(2, \mathcal{O}_d/I)$. As discussed in [BGR18, §2.2], the following statements are equivalent and necessary for $M$ to be a link complement: $N(I) = \Gamma(I)$, respectively, $|B(I)| = |\mathrm{PSL}(2, \mathcal{O}_d/I)|$. The size of $\mathrm{PSL}(2, \mathcal{O}_d/I)$ is given in [BR14] by
$$|\mathrm{PSL}(2, \mathcal{O}_d/I)| = \frac{N(I)^3}{\tau} \prod_{P \mid I} \left(1 - \frac{1}{N(P)^2}\right) \qquad (1)$$
where $\tau = 1$ when $2 \in I$ and $\tau = 2$ otherwise, and where $P$ runs over the prime ideal divisors of $I$ and $N(P) = |\mathcal{O}_d/P|$ is the norm of $P$ (in particular, $|\mathrm{PSL}(2, \mathcal{O}_d/I)| = 6$ if $N(I) = 2$). The manifolds of the three special cases listed in the introduction are not link complements.
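As a quick illustration, Equation (1) can be evaluated with a few lines of Python; the function below is a minimal sketch (not part of the report's tool chain), assuming the norms of the distinct prime ideal divisors of $I$ are known.

def psl2_quotient_order(norm_I, prime_norms, two_in_I):
    # Equation (1): N(I)^3/tau * product over P | I of (1 - 1/N(P)^2).
    # norm_I: N(I); prime_norms: norms N(P) of the distinct primes P dividing I;
    # two_in_I: True if the element 2 lies in the ideal I.
    tau = 1 if two_in_I else 2
    num, den = norm_I ** 3, tau
    for NP in prime_norms:
        num *= NP * NP - 1
        den *= NP * NP
    return num // den  # a group order, hence an integer

# Sanity check from the text: a prime ideal of norm 2 gives order 6.
assert psl2_quotient_order(2, [2], True) == 6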
We are very grateful to Alun Williams who helped us implement the above proof.
Part 1. Using the method by the second author
Introduction
In order to obtain a triangulation of a principal congruence manifold $M = \mathbb{H}^3/\Gamma(I)$, the second author has written two computer programs available at [Goe18]. As described in Section 8, the first of these programs can compute a Dirichlet domain for $\mathrm{PSL}(2, \mathcal{O}_d)$. This serves as input to the second program, described in Section 9, to construct the triangulation of $M$ given an ideal $I$.
This triangulation can be given to SnapPy [CDGW18] for further examination. To show that a particular $M$ is a link complement, we find Dehn fillings trivializing the fundamental group of $M$, see Section 6. We can use the homology $H_1(M)$ to show that $M$ is not a link complement, see Section 7. Since computing $H_1(M)$ can be prohibitively expensive in some cases, we also enabled the program to construct a triangulation of the congruence manifold $M_1 = \mathbb{H}^3/\Gamma_1(I)$.
We conclude this part with some remarks on how the triangulations were simplified before computing their homology in Section 10. We also want to point out that many of the homology groups could only be determined due to SnapPy's efficient implementation of homology.
Overview diagrams
This section shows the overview diagrams for the finitely many cases we need to consider. The diagrams for class number $h_d = 1$ and for higher class numbers $h_d > 1$ are slightly different. Hence, we split them up into two sections, each beginning with a brief explanation of how to read the diagrams.

5.1. Class number one. Recall that the ideal $\langle x \rangle$ is the same when multiplying $x$ by a unit and that complex conjugation only flips the orientation of the principal congruence manifold. Hence, we only need to consider generators $x$ lying in the first quadrant or a $\pi/4$-, respectively, $\pi/6$-wedge for $d = 1$ or $3$. Furthermore, by [BGR18, Theorem 2.2], we only need to consider those generators lying strictly within the circle of radius 6.
For each generator $x$, the diagram either indicates
• that $M = \mathbb{H}^3/\Gamma(\langle x \rangle)$ is a link complement, or
• gives the reason why $M$ is not a link complement, which can be that $M$ is an orbifold, that the homology $H_1(M)/\imath_*(H_1(\partial M))$ (where $\imath : \partial M \to M$ is the inclusion of the boundary) is non-trivial, e.g., $\mathbb{Z}_5$ for $I = \langle 3 + 3\sqrt{-1} \rangle$, or one of the lemmas in Section 3.

5.2. Higher class numbers. For higher class numbers $h_d > 1$, there are non-principal ideals $I$ requiring two generators $I = \langle x, y \rangle$. For such an ideal $I$, we always pick as primary generator an element $x \in I$ with the smallest absolute value $|x|$. Considerations similar to $h_d = 1$ apply and we can pick $x$ to be in the first quadrant lying strictly within the circle of radius $\sqrt{39}$. The secondary generator is the element $y$ with the smallest absolute value generating $I$ together with $x$, preferably in the first quadrant, but always in the first or second quadrant. In the diagrams, each ideal corresponds to a box. Boxes for ideals having the same primary generator $x$ are grouped together.
Such a box either indicates
• that $M = \mathbb{H}^3/\Gamma(I)$ is a link complement (also giving the number of components), or
• gives the reason why $M$ is not a link complement, which can be that $M$ is an orbifold, that the homology $H_1(M)/\imath_*(H_1(\partial M))$ is non-trivial, or an argument from Section 7 as illustrated by Examples 7.3, 7.5, and 7.6.
The arguments from Section 7 are needed here since, unlike for $h_d = 1$, computing the homology for all cases in question was infeasible.
Link complement certificates
To prove $M = \mathbb{H}^3/\Gamma(I)$ to be a link complement in the 48 necessary cases, we produce a SnapPy triangulation of each $M$ with meridians set in such a way that filling along each meridian trivializes the fundamental group. These peripheral curves were found using techniques similar to [Gör15, Section 7.3.2] and we provide the SnapPy triangulations at [Goe18, prinCong/LinkComplementCertificates/]. Thus, the reader can verify the result by (1, 0)-filling each cusp and checking that SnapPy's simplified presentation of the resulting fundamental group has no generators. The same directory also contains the script proveLinkComplement.py to do this automatically for all 48 cases.
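In SnapPy's Python interface, this check amounts to only a few lines; the sketch below assumes one of the certificate triangulations has been saved locally (the file name is hypothetical).

import snappy

M = snappy.Manifold("certificate.tri")   # one of the provided triangulations
M.dehn_fill([(1, 0)] * M.num_cusps())    # (1,0)-fill every cusp
G = M.fundamental_group(simplify_presentation=True)
# The manifold is certified to be a link complement if the filled group is trivial.
print(G.num_generators() == 0)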
Homology and covering spaces
Let $M$ be a compact, orientable 3-manifold with boundary consisting of disjoint tori. Let $\imath : \partial M \to M$ be the inclusion of the boundary. We often work with the quotient $H_1(M)/\imath_*(H_1(\partial M))$; if it is non-trivial, then $M$ cannot be a link complement. Furthermore, small covers of $M$ often cannot be link complements either:

Proof. Consider the map $\pi_1(M) \to H_1(M)/\imath_*(H_1(\partial M))$.
A cover $N \to M$ corresponds to a subgroup $\Gamma \subset \pi_1(M)$ and we have an analogous map for $N$.
Let $N(I) = |\mathcal{O}_d/I|$ denote the norm of an ideal $I$.
then Γ(I) is not a link group.
Proof. The degree of the cover $M \to M_1$ is $|P| = N(I)$.
is not a principal congruence link complement.
Proof. $M$ is a cover of the Bianchi orbifold $Q_d = \mathbb{H}^3/\mathrm{PSL}(2, \mathcal{O}_d)$ with covering group $\mathrm{PSL}(2, \mathcal{O}_d/I)$. Thus, the degree of the cover $\mathbb{H}^3/\Gamma(J) \to \mathbb{H}^3/\Gamma(I)$ is given by the right-hand side of the inequality.
We can compute the right-hand side of the inequality in Lemma 7.4 using Equation (1) in Section 2.
Computing Dirichlet domains of Bianchi orbifolds
For constructing triangulations of (principal) congruence manifolds in Section 9, we need a triangulated fundamental domain for $\mathrm{PSL}(2, \mathcal{O}_d)$ with the extra information specified in Section 8.1. Computer programs to compute fundamental domains have been written before, notably by Riley [Ril83] and more recently by Page [Pag15]. Unfortunately, the published data (e.g., [Swa71]) do not include all the information we needed, at least not in a form that was easily computer-parsable.
Hence, the second author implemented his own program (using SageMath [Sag18]) to produce a Regina [Bur18] triangulation of a Bianchi orbifold with suitable $\mathrm{PSL}(2, \mathcal{O}_d)$-matrix annotations. The program could, in theory, produce a non-trivial covering space of the Bianchi orbifold. We can, however, easily rule this out since the volume of the Bianchi orbifold $Q_d$ is known for the $d$ we need to consider.
During the implementation, we noticed that the Dirichlet domain in the Klein model can be scaled by $\sqrt{d}$ in one direction such that all coordinates of the vertices become rational. Proving this is the motivation for discussing the computation of Dirichlet domains in the detail we do here.
8.1. Data for a fundamental domain for a Bianchi group. The data we need about the fundamental domain for $\mathrm{PSL}(2, \mathcal{O}_d)$ is the combinatorics of a fundamental polyhedron $P$ for the Bianchi group $\mathrm{PSL}(2, \mathcal{O}_d)$ together with the following information for each face $f$ of $P$:
• another face $f'$ of $P$ called the mate face
• for each edge of $P$, the singular order the edge has in the Bianchi orbifold
For simplicity, we triangulate $P$ by taking the barycentric subdivision. We index the vertices of the resulting simplices such that vertex $i$ of a simplex corresponds to the center of an $i$-cell of $P$. This results in a triangulation where the gluing permutations are always the identity.
Each simplex $\Delta_j$ of the barycentric subdivision has a "mate" simplex and a mating matrix $g_j \in \mathrm{PSL}(2, \mathcal{O}_d)$ that takes face 3 of the simplex to face 3 of the mate simplex. We obtain a triangulation of $Q_d$ by gluing each simplex to its mate along face 3. This is the triangulation we store and the fundamental domain can be easily obtained by just ungluing each face 3. Along with the triangulation, we store the mating matrices $g_j \in \mathrm{PSL}(2, \mathcal{O}_d)$ in a separate array. Note that the singular locus of $Q_d$ falls onto the edges of the faces with index 3 and for face 3 of a simplex $\Delta_j$ we obtain three numbers describing the singular orders of its three edges in $Q_d$. We store these triples of natural numbers for all simplices $\Delta_j$ in a separate array as well.
We provide this information as Regina [Bur18] files.

8.2. Computing the Dirichlet domain. Associate to a matrix $m$ the half space containing $p_0$ that is limited by the plane bisecting $p_0$ and the image of $p_0$ under the action of $m$. From a sample of matrices in $\mathrm{PSL}(2, \mathcal{O}_d)$, we obtain a candidate polyhedron $P$ for the Dirichlet domain by intersecting the half spaces associated to the matrices. Each face $f$ of $P$ comes from the intersection with a half space associated with a matrix $m$ which will become the face-pairing matrix $g_f$. If we can consistently recover the information described in Section 8.1, $P$ is the fundamental domain for a (hopefully trivial) cover of the Bianchi orbifold $Q_d$. In other words, we need to check that for each face $f$ of $P$, there is a face $f'$ with matrix $g_{f'} = g_f^{-1}$ which will become the mate face. Furthermore, for each face $f$ and each vertex $v$ of $f$, we need to find a matching vertex $v'$ of $P$. It is convenient to let $p_0$ be the origin 0 in the Klein or Poincaré ball model. Unfortunately, there are matrices in $\mathrm{PSL}(2, \mathcal{O}_d)$ that fix the origin. But we can pick a suitable matrix $l \in \mathrm{PSL}(2, \mathbb{Q}(\sqrt{-d}))$ and let $m' = l^{-1} m l$ instead of $m \in \mathrm{PSL}(2, \mathcal{O}_d)$ act on $\mathbb{H}^3$ or $B^3$.

8.3. Poincaré extension for the Poincaré ball. Let $\mathbb{H}$ denote Hamilton's quaternions and let $\mathbb{H}^3 = \{z + tj : z \in \mathbb{C}, t > 0\} \subset \mathbb{H}$ and $B^3 = \{x + yj + zk : x^2 + y^2 + z^2 < 1\}$ be the upper half space, respectively, Poincaré ball model of hyperbolic 3-space. There is an action of suitable $2 \times 2$ matrices with quaternions as coefficients on $\mathbb{H} \cup \{\infty\}$ given by $w \mapsto (aw + b)(cw + d)^{-1}$.

Lemma 8.1. Let $p$ be the point in the Poincaré ball model with Euclidean coordinates $(x_p, y_p, z_p)$. The result of taking the hyperbolic midpoint between $p$ and the origin and then converting that midpoint to the Klein model also has coordinates $(x_p, y_p, z_p)$, see Figure 15. Thus, the plane bisecting $p$ and the origin has equation $x_p x + y_p y + z_p z = x_p^2 + y_p^2 + z_p^2$ in the Klein model.
Proof. Let $r_{\mathrm{Klein}}$ and $r_{\mathrm{Poincaré}}$ be the Euclidean distance of the origin to a point in the Klein model, respectively, the corresponding point in the Poincaré ball model. We have
$$r_{\mathrm{Klein}} = \frac{2 r_{\mathrm{Poincaré}}}{1 + r_{\mathrm{Poincaré}}^2}.$$
Note that this is the same relationship we have between the Euclidean distance $r_{\mathrm{Poincaré}}$ of a point in the Poincaré ball model and $r_{\mathrm{mid,Poincaré}}$ of the hyperbolic midpoint between that point and the origin:
$$r_{\mathrm{Poincaré}} = \frac{2 r_{\mathrm{mid,Poincaré}}}{1 + r_{\mathrm{mid,Poincaré}}^2}.$$
Thus, we have $r_{\mathrm{mid,Klein}} = r_{\mathrm{Poincaré}}$.
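The identity in Lemma 8.1 is easy to check numerically; the following Python lines are a small sanity check, not part of the report's code.

import math

def poincare_to_klein(r):
    # Standard radial conversion from the Poincare ball to the Klein model.
    return 2 * r / (1 + r * r)

def midpoint_radius(r):
    # Half of the hyperbolic distance 2*artanh(r) from the origin.
    return math.tanh(math.atanh(r) / 2)

for r in (0.1, 0.5, 0.9, 0.999):
    assert abs(poincare_to_klein(midpoint_radius(r)) - r) < 1e-12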
The origin in $B^3$ corresponds to $j$ in $\mathbb{H}^3$, and a standard calculation gives the image of the origin under such a matrix. Unfortunately, we do need to deal with a further quadratic extension of $\mathbb{Q}(\sqrt{d})$ when verifying the correspondences between the vertices $v$ and $v'$ of a face $f$ and its mate face $f'$.
Triangulations of (principal) congruence manifolds
Let $I$ be an ideal in $\mathcal{O}_d$. In this section we describe how to construct a triangulation of $M = \mathbb{H}^3/\Gamma(I)$, respectively, $M_1 = \mathbb{H}^3/\Gamma_1(I)$ using copies of the triangulated fundamental polyhedron $P$ of $\mathrm{PSL}(2, \mathcal{O}_d)$ from the data in Section 8.1.

9.1. Principal congruence manifolds. We label each copy of $P$ by a matrix $m \in \mathrm{PSL}(2, \mathcal{O}_d/I)$. We use the following algorithm:
(1) Start with a "base" copy $P_{\mathrm{Id}}$.
(2) While there is a copy $P_m$ with an unglued face $f$: compute $m' = m g_f \in \mathrm{PSL}(2, \mathcal{O}_d/I)$; if there is no copy $P_{m'}$ yet, create it; then glue face $f$ of $P_m$ to the mate face $f'$ of $P_{m'}$. In the implementation, we store the $P_m$ in an array and use a dictionary mapping matrices $m \in \mathrm{PSL}(2, \mathcal{O}_d/I)$ to an index in the array for fast lookups; a sketch of this loop is given below.
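The following Python fragment sketches the gluing loop under stated assumptions: faces lists the faces of $P$, pairing(f) returns $g_f$ reduced modulo $I$, mate(f) returns the mate face, and group elements are hashable and multiply in $\mathrm{PSL}(2, \mathcal{O}_d/I)$; all names are illustrative, not from the actual program.

def build_triangulation(identity, faces, pairing, mate):
    copies = [identity]            # array of copies P_m, labeled by m
    index = {identity: 0}          # dictionary for fast lookups
    gluings = {}                   # (copy, face) -> (copy, face)
    todo = [0]
    while todo:
        i = todo.pop()
        for f in faces:
            if (i, f) in gluings:
                continue
            m2 = copies[i] * pairing(f)   # label of the neighboring copy
            if m2 not in index:
                index[m2] = len(copies)
                copies.append(m2)
                todo.append(index[m2])
            j = index[m2]
            gluings[(i, f)] = (j, mate(f))
            gluings[(j, mate(f))] = (i, f)
    return copies, gluings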
To determine whether $\mathbb{H}^3/\Gamma(I)$ is an orbifold, we can compare the degrees of the edges of the resulting triangulation with the degrees of the edges of the triangulation of $Q_d$ multiplied by the respective orbifold orders stored with the triangulation of $Q_d$.
9.2. Congruence manifolds $\mathbb{H}^3/\Gamma_1(I)$. Note that $\mathbb{H}^3/\Gamma_1(I)$ is obtained from $\mathbb{H}^3/\Gamma(I)$ as quotient by the action of upper, respectively, lower unit-triangular matrices. In other words, each copy of $P$ in the triangulation of $\mathbb{H}^3/\Gamma_1(I)$ is labeled by a vector $v \in (\mathcal{O}_d/I)^2/\pm 1$ which corresponds to the first row of a matrix $m \in \mathrm{PSL}(2, \mathcal{O}_d/I)$. When computing the label $v' \in (\mathcal{O}_d/I)^2/\pm 1$ for a neighboring copy, we need a lift of $v$ to $m \in \mathrm{PSL}(2, \mathcal{O}_d/I)$ so that $v'$ is given as the first row of $m g_f$. For efficiency, we remember such a matrix $m \in \mathrm{PSL}(2, \mathcal{O}_d/I)$ for each copy $P_v$ of $P$. In other words, we store pairs $(P_v, m)$ in an array and use a dictionary mapping $v \in (\mathcal{O}_d/I)^2/\pm 1$ to an index in the array for fast lookups.
(2) While there is a copy $P_v$ with an unglued face $f$: (a) Let $m$ be the matrix stored with $P_v$. Compute $m' = m g_f \in \mathrm{PSL}(2, \mathcal{O}_d/I)$ and let $v'$ be the first row of $m'$. (b) If there is no copy $P_{v'}$ yet, create a copy $P_{v'}$ and store $m'$ with it.
(c) Glue face $f$ of $P_v$ to the mate face $f'$ of $P_{v'}$ such that the vertices are matching as described in the information about the fundamental polyhedron.

Given a 2-vector $v$, the reduced form of $v$ with respect to the vectors $v_1$ and $v_2$ is the element in $v + \mathbb{Z}v_1 + \mathbb{Z}v_2$ in the parallelogram spanned by $v_1$ and $v_2$. In other words, $v$ is reduced with respect to $v_1$ and $v_2$ if it lies in that parallelogram. Let us associate the vector $(a, b)$ to an element $a + b\sqrt{-d} \in \mathcal{O}_d$. Let us fix two vectors $v_1$ and $v_2$ that span $I$ as a lattice. We can then reduce a representative in $\mathcal{O}_d$ by reducing the associated vector $(a, b)$ by $v_1$ and $v_2$.
It remains to find such $v_1$ and $v_2$ given generators $x_1, \ldots, x_k \in \mathcal{O}_d$ of the ideal. As a lattice, $I$ is spanned by the vectors associated to $x_1, x_1\omega_d, \ldots, x_k, x_k\omega_d$. We need a procedure that takes such a set of vectors and returns two vectors spanning the same lattice as the input vectors. By iterating, it suffices to have a method that produces two vectors spanning the same lattice as three given vectors. This can be done by repeatedly reducing one vector by the other two vectors until one of them is zero; a generic variant is sketched below.
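As an illustration, a two-vector basis can also be obtained by a standard Euclidean column reduction on integer 2-vectors; this Python sketch is a generic alternative to the report's iteration, assuming the input vectors span a full-rank lattice.

from math import gcd

def lattice_basis(vectors):
    rows = [list(v) for v in vectors if any(v)]
    # Euclid on the first coordinates until only one row has nonzero x;
    # each step replaces a row by itself minus an integer multiple of another,
    # which preserves the lattice spanned by the rows.
    while sum(1 for r in rows if r[0] != 0) > 1:
        rows.sort(key=lambda r: (r[0] == 0, abs(r[0])))
        a, b = rows[0], rows[1]
        q = b[0] // a[0]
        rows[1] = [b[0] - q * a[0], b[1] - q * a[1]]
        rows = [r for r in rows if any(r)]
    v1 = next(r for r in rows if r[0] != 0)
    g = 0
    for r in rows:
        if r[0] == 0:
            g = gcd(g, abs(r[1]))
    return v1, [0, g]   # these two rows span the same lattice as the input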
Technical remarks about computing homologies
The triangulation of a (principal) congruence manifold produced in Section 9 has both finite and ideal vertices and can be quite large. Even though the result is the same, computing the homology of the unsimplified triangulation with finite and ideal vertices is much slower than first simplifying the triangulation and then computing the homology of the simplified triangulation. For example, the largest triangulation we encountered has 1843200 simplices (in the case $(31, \langle 5 \rangle)$). Removing finite vertices reduced the number of tetrahedra to 122704 simplices.
SnapPy [CDGW18] has a procedure to remove all finite vertices of a triangulation of a cusped manifold. However, this procedure does not scale to large triangulations. Thus, we implemented our own method to simplify the triangulation:
(1) Perform a coarsening of the barycentric subdivision: there is a group of four simplices about each edge from vertex 1 to 2; collapse all these groups to a single simplex each, simultaneously.
(2) Collapse edges (preferring edges with high order) for as long as there is an edge which can be collapsed without changing the topology, similar to what the method collapseEdge in Regina [Bur18] does.
Even though this procedure might not in general remove all finite vertices, it does so for all the triangulations we needed to consider here.
Unfortunately, Regina's collapseEdge invalidates and recomputes the entire skeleton of the triangulation each time an edge is collapsed. Hence, the above procedure would not scale to large triangulations using Regina's implementation. Therefore, we reimplemented Regina's method so that it performs a more targeted invalidation: only the edge classes near the collapsed edge are recomputed and the two vertices at the ends of the collapsed edge are merged.
We use SnapPy to compute the homology of the triangulation simplified this way. For large triangulations, Dunfield and Culler implemented the homology computation as follows: (1) Using a sparse-matrix representation, simplify the matrix by performing row operations as long as there is a ±1 entry in the matrix. (2) Compute the Smith normal form using algorithms described in [Coh93, Chapter 2.4].
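Step (1) can be pictured with a small dense toy version in Python (SnapPy's actual implementation is sparse and more careful); the function below strips ±1 pivots from an integer matrix before a Smith normal form computation.

def strip_unit_pivots(M):
    # Eliminate a row and column at each +-1 pivot; the Smith normal form of
    # the returned matrix, padded with 1s, equals that of the input matrix.
    M = [row[:] for row in M]
    found = True
    while found and M and M[0]:
        found = False
        for i in range(len(M)):
            for j in range(len(M[0])):
                x = M[i][j]
                if x in (1, -1):
                    # Clear column j using row i, then delete row i and column j.
                    for k in range(len(M)):
                        if k != i and M[k][j] != 0:
                            q = M[k][j] * x          # works since x*x == 1
                            M[k] = [a - q * b for a, b in zip(M[k], M[i])]
                    M = [r[:j] + r[j+1:] for k, r in enumerate(M) if k != i]
                    found = True
                    break
            if found:
                break
    return M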
Part 2. Using the method by the first and third authors
Introduction
The main computational tool used by the first and third authors to establish when a candidate principal congruence manifold is, or is not, homeomorphic to a link complement in $S^3$ is Magma [BCP97]. We refer the reader to the papers [BR14] and [BR18] for more on the background to these methods. However, we note that, thanks to the second author, many of the Magma routines have now been automated, and this is what is included in this report. In addition, we take the opportunity to correct some mistakes in some of the entries in the tables of [BR18] that were uncovered whilst checking our calculations. This did not affect the outcome of whether the principal congruence manifold was a link complement.
All files needed to reproduce the results in this part are available at [Goe18, prinCong/magma/].
Presentations for the Bianchi groups
12.1. Class number one. The following presentations and matrices are from [Swa71]. In each case, the parabolic group fixing $\infty$, corresponding to the peripheral subgroup of the one-cusped Bianchi orbifold $Q_d = \mathbb{H}^3/\mathrm{PSL}(2, \mathcal{O}_d)$, is given by $P_\infty = \langle t, u \rangle$.
12.2. Swan's presentations for higher class numbers (d = 5, 6, 15). The following presentations and matrices are from [Swa71]; up to conjugacy, all parabolic subgroups $P_x$ fixing a cusp $x$ are listed with them.

12.3. Page's presentations for higher class numbers. The presentations for the remaining class numbers were done by A. Page using a suite of computer packages he recently developed (see [Pag15]) to study arithmetic Kleinian groups. The presentations given here were communicated to the first and third authors by A. Page.
Note that for $d = 3$, we need to pick the triple $(n, k, l)$ such that $I = n\mathbb{Z} + (k + l\omega_3^2)\mathbb{Z}$. For the cases $(d, I)$ that needed to be considered, the tables in Section 14.1 show the arguments $n, k, l$ passed to the Magma functions to generate the $B(I)$.
13.2. Higher class numbers $h_d$ with $d \not\equiv 3 \bmod 4$. Let us denote the parabolic subgroups and their generators given in Section 12 by $P_{(1)} = \langle p_{(1),1}, p_{(1),2} \rangle, \ldots, P_{(h_d)} = \langle p_{(h_d),1}, p_{(h_d),2} \rangle$. For $d = 5$ and $d = 6$, the generators were chosen so that $P_{(i)}(I)$ is generated by $p_{(i),1}^n$ and $p_{(i),1}^k p_{(i),2}^l$. Thus, the Magma functions to generate $N(I)$ and $B(I)$ similarly only need the triple $(n, k, l)$ as input, see Tables 14 and 16.

13.3. Higher class numbers $h_d$ with $d \equiv 3 \bmod 4$. Unfortunately, we cannot find generators $p_{(i),j}$ for the parabolic subgroups of $\mathrm{PSL}(2, \mathcal{O}_d)$ such that each $P_{(i)}(I)$ is again always generated by $p_{(i),1}^n$ and $p_{(i),1}^k p_{(i),2}^l$. Instead, the Magma functions (e.g., in Tables 18 and 24) generating $N(I)$ and $B(I)$ take triples $(n_1, k_1, l_1), \ldots, (n_{h_d}, k_{h_d}, l_{h_d})$ that must be chosen such that $\langle p_{(i),1}^{n_i}, p_{(i),1}^{k_i} p_{(i),2}^{l_i} \rangle = P_{(i)}(I)$. These triples are shown in, e.g., Tables 19 and 25. These triples were determined by computing the matrices for the $p_{(i),j}$. In [BR14] and [BR18] these were computed manually. However, this has been automated, and we provide code for SageMath [Sag18] to verify that the triples given in these tables are valid. Given a triple $(n_i, k_i, l_i)$ with $n_i, l_i > 0$ and an ideal $I$, the code will check that (1) $p_{(i),1}^{n_i} \equiv \pm\mathrm{Id} \bmod I$ and $p_{(i),1}^{k_i} p_{(i),2}^{l_i} \equiv \pm\mathrm{Id} \bmod I$ and (2) for any $(s, t) \in \mathbb{Z}^2 \setminus \{(0, 0)\}$ with $0 \le t < l_i$ and $t k_i/l_i \le s < n_i + t k_i/l_i$, we have $p_{(i),1}^s p_{(i),2}^t \not\equiv \pm\mathrm{Id} \bmod I$. For a triple $(n_i, k_i, l_i)$ with $n_i > 0$ and $l_i < 0$, the code will flip the signs of $k_i$ and $l_i$ before performing the above checks.
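Concretely, the check verifies that the exponent pairs $(s, t)$ with $p_{(i),1}^s p_{(i),2}^t \equiv \pm\mathrm{Id} \bmod I$ form exactly the lattice spanned by $(n_i, 0)$ and $(k_i, l_i)$. The Python sketch below illustrates the logic; in_gamma(s, t) is a hypothetical predicate performing the congruence test on the actual matrices.

def check_triple(n, k, l, in_gamma):
    if l < 0:                       # flip signs as described above
        k, l = -k, -l
    if not (in_gamma(n, 0) and in_gamma(k, l)):
        return False
    for t in range(l):
        s0 = -((-t * k) // l)       # integer ceiling of t*k/l
        for s in range(s0, s0 + n):
            if (s, t) != (0, 0) and in_gamma(s, t):
                return False        # an unexpected lattice point was found
    return True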
Example 13.1. Running the command
sage -python checkMatricesAndPeripherals15.py
will output True 15 times to indicate that the triples in Table 19 for the 15 relevant ideals $I \subset \mathcal{O}_{15}$ are valid.
Tables
Recall from Section 2 that it is sufficient to show $|B(I)| > |\mathrm{PSL}(2, \mathcal{O}_d/I)|$ to rule out $M = \mathbb{H}^3/\Gamma(I)$ as a link complement. The same section also contained Equation (1) allowing us to compute $|\mathrm{PSL}(2, \mathcal{O}_d/I)|$. To obtain $|B(I)|$ or a lower bound for it, we use Magma's built-in Order or one of the functions in Table 1. This works for all cases except the special ones treated in Section 3.

Table 1. Magma functions to give a lower bound for the size of a group. Given a subgroup $H$ of a given group $G$, a lower bound on $|G|$ is given by the product of the index $[G : H]$ and the order of $H$'s Abelianization. LowerBound1 returns the best lower bound obtained this way when considering all normal subgroups $H$ up to a given index. LowerBound2 considers the subgroup $H$ generated by $g$ and the normal subgroup of given index that Magma found first. Here, $g$ is a generator of $G$ specified by its index.
In Table 6, the corrections concern the first peripheral subgroup for the entry $\langle 2, \omega_{39} \rangle^2$ and the second, third and fourth peripheral subgroups for the entry $\langle 2, \omega_{39} \rangle^3$. Also note that in this latter case the order of $B_d(I)$ should be recorded as $\gg 1$.
Finally, in Appendix B, the cases where 5 splits were not recorded for $d = 39$ and $d = 71$.
Link complement proofs
In this section, we explain how the functions defined in the Magma file LinkComplementHelpers.m work and can be used to prove that a principal congruence manifold is a link complement. We provide Magma files to show this for all the 48 principal congruence manifolds in question at [Goe18, prinCong/magma/]. For example, to check all $d = 2$ cases, one can run the following shell command:
magma LinkComplementHelpers.m LinkComplement2.m
which produces the following output:
<1+sqrt(-2)> 4-Link
<2> 12-Link
...
16.1. Class number one. We use $(2, \langle 1 + \sqrt{-2} \rangle)$ as an example of how to verify a principal congruence link complement $M = \mathbb{H}^3/\Gamma(I)$ using the function VerifyLink:
// Presentation of the Bianchi group
Bianchi2<a,t,u> := Group<a,t,u|a^2,(t*a)^3,(a*u^-1*a*u)^2,(t,u)>;
// Parabolic elements fixing the cusp of the Bianchi orbifold.
This code will output 4-Link confirming that all necessary tests have passed and $M$ is a link complement. The first two arguments to the function VerifyLink are the finitely presented Bianchi group and the elements generating $P_\infty$ from Section 12. The next argument is the triple $(n, k, l)$ from Section 13.1 and the size of $\mathrm{PSL}(2, \mathcal{O}_d/I)$. Next is a list of pairs $(g, (p, q))$ of an element $g$ in the Bianchi group specifying a cusp and Dehn-filling coefficients $(p, q)$.
The function will use these arguments to perform three tests:
(1) VerifyLink checks whether $|B(I)| = |\mathrm{PSL}(2, \mathcal{O}_d/I)|$ using a presentation of $B(I)$ similar to the one in, e.g., Table 4.
Recall from Section 2 that this implies that $N(I)$ is the fundamental group of $M = \mathbb{H}^3/\Gamma(I)$. By Perelman's Theorem, it suffices to find Dehn fillings of $M$ trivializing the fundamental group. This is equivalent to finding a set of primitive elements in $N(I)$ generating $N(I)$ such that each element is conjugate to an element in $P_\infty$ and corresponds to a different cusp of $M$. These primitive elements are specified as the last argument to VerifyLink, namely, a pair $(g, (p, q))$ represents the element $g (t^n)^p (t^k u^l)^q g^{-1}$. For example, <a, [ 0, 1]> yields a*(t*u)*a^-1, an element in Bianchi2 which is also in $N(I) \subset \mathrm{PSL}(2, \mathcal{O}_d)$.
(2) VerifyLink checks that the elements specified this way are indeed generating $N(I)$ or, equivalently, that the quotient of $N(I)$ by the subgroup generated by these elements is trivial. In our example, the function would perform a test equivalent to checking that the following code returns 1:
N := NormalClosure(Bianchi2, sub<Bianchi2 | t^3, t*u>);
Q := quo<N | (t*u), a*(t*u)*a^-1, t*a*(t*u)*a^-1*t^-1, t^-1*a*(t^-2*u)*a^-1*t>;
Order(Q);
To abbreviate the data we need to give, we can use the Expand helper, which requires specifying the list of g's only once as its first argument.
This means that every symmetry of the principal congruence manifold extends to the link (in fact, since the ideal has norm $N(I) = 2$, $B(I) \cong \mathrm{PSL}(2, \mathcal{O}_d/I)$ is isomorphic to $S_3$ and acts as such on the three cusps of $M$). In cases where there is a link with this many symmetries, we can use the function VerifyLink2 to verify a principal congruence link complement:
VerifyLink2(Bianchi7, Bianchi7P, [[ 0, 1]], 6);
This function checks that the quotient group Q given by
Q := quo<Bianchi7 | t^0 * u^1>;
Order(Q);
has order 6, which is equal to $|\mathrm{PSL}(2, \mathcal{O}_d/I)|$. Note that the quotient Q has fewer relations than $B(I)$, so $|Q| = |\mathrm{PSL}(2, \mathcal{O}_d/I)|$ implies $|B(I)| = |\mathrm{PSL}(2, \mathcal{O}_d/I)|$ and that $N(I)$ is the fundamental group of $M = \mathbb{H}^3/\Gamma(I)$. Furthermore, it shows that $N(I)$ is generated by conjugates of a single parabolic element by $\mathrm{PSL}(2, \mathcal{O}_d)$. Unless $d = 1$ or $3$, these conjugates give a well-defined peripheral curve for each cusp of $M$ such that Dehn filling along them trivializes the fundamental group of $M$.
For higher class numbers $h_d > 1$, we need to give a list of pairs $(p_1, q_1), \ldots, (p_{h_d}, q_{h_d})$ and the function checks the size of the quotient of $\mathrm{PSL}(2, \mathcal{O}_d)$ by the normal subgroup generated by $p_{(1),1}^{p_1} p_{(1),2}^{q_1}, \ldots, p_{(h_d),1}^{p_{h_d}} p_{(h_d),2}^{q_{h_d}}$. Note that this technique does not work for $d = 1$ and $d = 3$ since the stabilizer of $\infty$ in $\mathrm{PSL}(2, \mathcal{O}_1)$ and $\mathrm{PSL}(2, \mathcal{O}_3)$ is larger than $P_\infty$ and contains torsion elements.
"year": 2019,
"sha1": "da3fdd60bfb93523a0befecf44f6e7986c552344",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "139fe3877dbd8d1ef8a14b136755cb1a1bf67a5f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
DCE-MRI and DWI Integration for Breast Lesions Assessment and Heterogeneity Quantification
In order to better predict and follow treatment responses in cancer patients, there is growing interest in noninvasively characterizing tumor heterogeneity based on MR images possessing different contrast and quantitative information. This requires mechanisms for integrating such data and reducing the data dimensionality to levels amenable to interpretation by human readers. Here we propose a two-step pipeline for integrating diffusion and perfusion MRI that we demonstrate in the quantification of breast lesion heterogeneity. First, the images acquired with the two modalities are aligned using an intermodal registration. Dissimilarity-based clustering is then performed exploiting the information coming from both modalities. To this end an ad hoc distance metric is developed and tested for tuning the weighting for the two modalities. The distributions of the diffusion parameter values in subregions identified by the algorithm are extracted and compared through nonparametric testing for posterior evaluation of the tissue heterogeneity. Results show that the joint exploitation of the information brought by DCE and DWI leads to consistent results accounting for both perfusion and microstructural information yielding a greater refinement of the segmentation than the separate processing of the two modalities, consistent with that drawn manually by a radiologist with access to the same data.
Introduction
Responses to cancer treatment are increasingly differentiated based not only on tumor type, but also on genetic and histochemical biomarkers. Exemplifying the progress in this respect is breast cancer. Biopsy-derived histological biomarkers offer high biological specificity and play an important role in determining the choice of chemotherapeutic agent. As different parts of a tumor often show different histological signatures or have evolved to different stages of tumor progression that may impact on their response to a given therapy, it is important to obtain a complete coverage of the tumor. Biopsies, however, are difficult to localize within the breast, are subject to sampling errors, and can seldom be repeated. Thus, there is growing clinical interest in the possible role of imaging to describe anatomical and physiological heterogeneity of tumors [1,2].
Magnetic resonance imaging (MRI) methods such as dynamic contrast enhanced (DCE) and diffusion weighted (DW) MRI methods are amongst those of interest as they provide noninvasive digital biomarkers with good spatial coverage and repeatability [3]. DCE-MRI uses serial acquisition of images during and after the injection of intravenous contrast agent and has been shown to reflect tumor vascularity [4,5]. DWI, on the other hand, generates images that are sensitized to water displacement at the diffusion scale and can be used to calculate a quantitative index reflecting the apparent freedom of diffusion (apparent diffusion coefficient (ADC)). Preclinical and clinical data show that ADC reflects regional cellularity [6][7][8].
DCE-MRI has a high sensitivity for breast cancer detection (89-100%), while DWI has shown utility in predicting suitable therapies and monitoring response [9]. A recognized weakness of DCE and DW-MRI is their lack of specificity between tumor types as overlap between the findings of benign and malignant lesions results in variable specificity (37-86%) [9]. This is not entirely surprising given that across cancer types the common features tend to include such processes as cell proliferation, angiogenesis, and necrosis. The ability of DCE- and DW-MRI to provide a spatial depiction of these anatomical and physiological conditions within a tumor makes them natural tools for probing tumor heterogeneity. The reporting of MRI has long relied on visual assessment of several scans having different contrasts, but in relation to breast cancer, few studies have exploited this inherently multiparametric data in a unified manner [10][11][12]. Moreover, the most recent works mainly address the problem of comparing and retrospectively integrating the contributions from the different modalities, without exploiting the conjunct information. Nevertheless, these works have highlighted the potential of combining DCE-MRI and DWI to differentiate the core of the tumor from peritumoral tissues and normal tissues and thus provide an indication of lesion heterogeneity [13].
In this work, we propose the multimodal integration of the information provided by DCE-MRI and DWI of breast cancer lesions for evaluating their heterogeneity, that is, dividing the lesion into zones that share a certain similarity when using combined information coming from different imaging domains. The ultimate intention of this protocol is to allow a more extensive, reproducible characterization of heterogeneity in tumors that have been previously identified by a clinician.
In all previous reports on breast lesion segmentation the representation of DCE curves and ADC maps has been that of features in a vector space defined by the image values [14][15][16][17]. In this work a different approach is followed exploiting dissimilarity-based representations (DBR) [18]. The concept of dissimilarity-based representation consists of focusing on the contrast, or distance, between objects and of measuring it by a suitable criterion. The term object refers, in the present context, to the information represented by each particular voxel. This information need not be of a single type and in this case consists of both signal intensities (i.e., the time-intensity enhancement curve for DCE-MRI) and the ADC parameter value (derived from DW-MRI). A key concept in DBR is that of a proximity relation between two objects, which does not need to be explicitly represented in a feature space. Objects are characterized through pairwise dissimilarities; instead of using an absolute characterization of the objects by a set of features, problem-centric knowledge is used to define a measure that estimates the dissimilarity between objects. Here, both DCE and DWI contribute to such a measure leading to a novel multimodal approach to tissue characterization. This paper is organized as follows. Section 2 describes the pipeline including the clustering and registration processing steps. Section 3 presents the results, which are then discussed in Section 4, and Section 5 derives conclusions.
Materials and Methods
This section provides an overview of the pipeline shown in Figure 1 and details the methodological choices with respect to both clustering and registration. The DCE-MRI data are first visually inspected to identify a time point where the lesion has the highest contrast with respect to the surrounding tissue. Multimodal registration is carried out between DW-MRI and DCE-MRI images, allowing a spatial mapping of both volumes. Dissimilarity-based clustering is then performed integrating information from both acquisition modalities. Statistical analysis, consisting of nonparametric tests, was applied to the ADC distributions defined by the obtained clusters. An assessment of the results was carried out by clinical experts, and, for the sake of completeness, an evaluation of the tightness and separation of the clusters was also performed.
Multimodal Registration.
In order to perform voxelwise dissimilarity-based clustering that incorporates both DCE-MRI and DWI data, it is necessary to first spatially align the two datasets. Registering DCE-MRI and DWI is a particularly difficult task in highly compressible and elastic tissue like the breast, with its inhomogeneous anisotropic soft tissue, inherent nonrigid behavior, and lack of solid landmarks to guide the registration as fixed references. A standard registration protocol was used. Due to the highly distinct contrast and intensity characteristics of the two modalities as well as the low resolution of the DWI volumes, the registration process was divided into two steps, each following a standard multiresolution strategy. In the first step, rigid and affine transformations were performed successively in order to align and match the features of the fixed (DCE-MRI) and moving (DWI) images following a 5-level Gaussian scale space. In the second step a multiresolution cubic B-spline transformation with a regularization penalty was performed to elastically refine the alignment. Lesion-specific masks based on regions delineated by clinical experts were used in order to assign a greater weight to the voxels in the lesion area [19]. Normalized mutual information (NMI) was used as the registration metric. In order to regularize the deformation, we used a bending energy penalty which is based on the spatial derivatives of the transformation [20]. The methodology used for registration was implemented in Elastix [19], and all the steps have been widely validated in the literature [20,21].
The registration protocol was applied to the b0 images from the DWI dataset and their transformation to the DCE-MRI space validated for each subject through visual inspection by an expert. The resulting transformation was applied to the remaining b-values, and the ADC was estimated on the transformed DWI images. in a vector space constructed by the dissimilarities to each other voxel. Usually, such a space can be safely treated as an Euclidean space equipped with the standard inner product definition. Let X = {x 1 , . . . , x n } be a voxel-based dataset. Given a dissimilarity function, a data-dependent mapping D is defined as D(·, R) : X → D n linking X to the socalled dissimilarity space [22]. The complete dissimilarity representation yields a square matrix consisting of the dissimilarities between all pairs of objects, such that every object is described by an n-dimensional dissimilarity vector
Dissimilarity-Based Clustering.
A distance function $D_{DCE}$ based on the adaptive dissimilarity index first proposed in [23] has been exploited in a previous work [24] for calculating the pairwise proximity between DCE-MRI perfusion curves. There are two main approaches to quantifiably compare two time series: one makes use of the distances between the absolute values of their elements while the other focuses on the similarity of their behavior along time. Unlike conventional time-series distance functions, which focus only on the closeness of the values observed at corresponding points in time, ignoring the interdependence relationship between elements that characterizes the time-series behavior, the proposed distance function takes into account the proximity with respect to values as well as the temporal correlation for the proximity with respect to behavior. For two voxel-derived perfusion curves $S_1 = (u_1, \ldots, u_p)$ and $S_2 = (v_1, \ldots, v_p)$, sampled at time instants $(t_1, \ldots, t_p)$, closeness with respect to behavior is defined as the combination of their monotonicity, that is, whether both curves increase or decrease simultaneously, and the closeness of their growth rate over a determined period [23]. Both criteria are quantified by the temporal correlation present in the first term of the distance function $D_{DCE}$, Equation (1). The complete distance function $D_{DCE}$ for DCE-MRI derived perfusion curves is defined as
$$D_{DCE}(S_1, S_2) = \frac{2}{1 + e^{k \cdot \mathrm{Cort}(S_1, S_2)}} \, dH(S_1, S_2), \qquad (1)$$
where $\mathrm{Cort}$ is the temporal correlation
$$\mathrm{Cort}(S_1, S_2) = \frac{\sum_{i=1}^{p-1} (u_{i+1} - u_i)(v_{i+1} - v_i)}{\sqrt{\sum_{i=1}^{p-1} (u_{i+1} - u_i)^2} \sqrt{\sum_{i=1}^{p-1} (v_{i+1} - v_i)^2}}, \qquad (2)$$
and $dH$ is the Hausdorff distance
$$dH(S_1, S_2) = \max\left\{ \sup_{u \in S_1} \inf_{v \in S_2} d(u, v), \; \sup_{v \in S_2} \inf_{u \in S_1} d(u, v) \right\}, \qquad (3)$$
which is used to measure the distance between both voxelwise perfusion curves. The integration of the diffusion information into the dissimilarity function is accomplished through the addition of an ADC-dependent term $D_{ADC}$, Equation (4), defined as a sigmoid function of the normalized difference between the ADCs ($ADC_{S_1}$ and $ADC_{S_2}$) of the voxels under consideration, which ranges from 0 to 1. The tuning parameter $k_{ADC}$ weights the contribution of $D_{ADC}$ to the complete dissimilarity measure $D$ by modulating the shape of the sigmoid function. When the value of the normalized difference between ADCs is low, denoting similar ADC values between voxels, the dissimilarity term $D_{ADC}$ approaches zero. On the contrary, when the value of the normalized difference between ADCs is high, denoting a large dissimilarity between the ADC values of the voxels, $D_{ADC}$ approaches one, making the overall dissimilarity measure approach the value of $D_{DCE}$. The impact of the different values of $k_{ADC}$ is illustrated in Figure 2.
The complete dissimilarity function $D$ is then the product of $D_{ADC}$ and $D_{DCE}$:
$$D(S_1, S_2) = D_{ADC}(S_1, S_2) \cdot D_{DCE}(S_1, S_2). \qquad (5)$$
This global measure enables the monitoring of the performance as a function of the relative weight given to the ADC, as well as of different values of $k_{ADC}$.
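To make the construction concrete, the following Python sketch implements the combined measure; the exact sigmoid used for $D_{ADC}$ and the constants are illustrative assumptions, and the Hausdorff term is computed on the sampled curve values for simplicity.

import numpy as np

def cort(u, v):
    # Temporal correlation of Equation (2), computed on first differences.
    du, dv = np.diff(u), np.diff(v)
    return du @ dv / (np.linalg.norm(du) * np.linalg.norm(dv))

def hausdorff(u, v):
    # Hausdorff distance between the sampled curve values, Equation (3).
    d = np.abs(u[:, None] - v[None, :])
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def d_dce(u, v, k=2.0):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return 2.0 / (1.0 + np.exp(k * cort(u, v))) * hausdorff(u, v)

def d_adc(adc1, adc2, adc_range, k_adc=3.0):
    # Sigmoid in the normalized ADC difference (assumed form): near 0 for
    # equal ADCs, approaching 1 for maximally different ADCs.
    delta = abs(adc1 - adc2) / adc_range
    return 2.0 / (1.0 + np.exp(-k_adc * delta)) - 1.0

def dissimilarity(u, v, adc1, adc2, adc_range, k=2.0, k_adc=3.0):
    # Equation (5): D = D_ADC * D_DCE.
    return d_adc(adc1, adc2, adc_range, k_adc) * d_dce(u, v, k)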
Performance Assessment.
In each of the patients, a ROI was delineated by an expert around the lesion in the motion-corrected DCE-MRI volumes. Since unsupervised classification is sensitive to the general structure and distribution of the data, the ROI was drawn just exceeding the area of the enhancing lesion, allowing for a clear delineation of the heterogeneity of the lesion inside the ROI. The time-intensity curves normalized to the baseline at $t = 0$ and the corresponding ADC values from the voxels inside the ROI were treated as independent objects on a voxel by voxel basis. Using $D$ from Equation (5), a dissimilarity matrix was derived on a slicewise basis from the pairwise dissimilarities of the elements in the corresponding ROI. In such a space, each element was represented by a row vector whose dimensionality was defined by the cardinality of the ROI. Once the dissimilarity space was constructed, the K-means algorithm [26] was used to group the voxels in the ROI into clusters. The initial centroids were calculated automatically following a preliminary clustering step with a random 10% subsample, as a strategy to improve the algorithm initialization, avoiding a misplacement of the initial seeds. K-means minimizes the sum over all clusters of the within-cluster sums of point-to-cluster-centroid distances using, in this case, the squared Euclidean distance.
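A compact version of this clustering step, using scikit-learn for illustration (the library choice and seeding details are assumptions), could look as follows.

import numpy as np
from sklearn.cluster import KMeans

def cluster_roi(dissimilarity_matrix, n_clusters=4, seed=0):
    X = np.asarray(dissimilarity_matrix)   # row i = dissimilarity vector of voxel i
    rng = np.random.default_rng(seed)
    # Preliminary clustering on a random 10% subsample to place initial seeds.
    m = min(len(X), max(n_clusters, len(X) // 10))
    sub = X[rng.choice(len(X), m, replace=False)]
    init = KMeans(n_clusters=n_clusters, n_init=10).fit(sub).cluster_centers_
    return KMeans(n_clusters=n_clusters, init=init, n_init=1).fit_predict(X)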
For selecting the number of clusters K, the standard clinical assessment protocol has been taken into consideration. It considers only three classes (persistent, plateau, and washout). An additional class has been included for the surrounding tissue, considering that the ROI exceeds the estimated limits of the enhancing lesion.
In order to perform a comparison with established methods, the clustering procedure was also performed following a morphologic feature-based approach. This method relies on descriptors derived from the voxelwise time-intensity curves, comprising mainly specific characteristics of the shape of each curve. The features extracted from the DCE-MRI voxelwise time-intensity curves are baseline, maximum signal difference, time to peak, area under curve, maximum enhancement, wash-in rate, maximum slope of increase, wash-out rate, and the intercept of the line fitting the tail of the time-intensity curve with the axis $t = 0$. The use and definition of these morphologic features to describe the contrast agent intake can be found in the related literature [14,17,27]. Further, the clustering procedure was repeated incorporating the ADC of each voxel as an additional feature to the morphologic descriptor vectors calculated previously. The ADC and the morphologic features were standardized by subtracting their mean and dividing by their standard deviation. The results of these two procedures were compared with our method in order to assess the clustering and data representation outcome.
Patient Population. Data were acquired from 21 patients (age 50 ± 13.8 years). All the patients had been diagnosed with primary ductal carcinoma.
DWI was acquired with a single-shot spin-echo (SE) echo planar imaging (EPI) sequence in three orthogonal diffusion encoding directions (x, y, and z) using 4 b-values (0, 250, 500 and 1000 s/mm²) with parallel imaging (acceleration factor 2). Subjects were breathing freely, with no gating applied.
Results
The regions resulting from dissimilarity-based clustering were rendered as colored overlays on the morphological images on each slice. The results from a representative patient are displayed in Figure 3. After clustering was performed on the normalized curves, the resulting clusters were assessed by the radiologists to validate the segmentation of both the central tumoral and surrounding regions. Figure 3(b) shows examples of the clusters obtained, while Figures 3(c) and 3(d) represent the plots of the average time-intensity perfusion curves calculated on the raw and normalized data, respectively. The plots show the impact that the normalization step has in highlighting the intercluster differences. The central region exhibits a characteristic pattern in the DCE-MRI of a high early enhancement followed by a rapid washout, indicative of angiogenesis (Figure 3(d), red line). Typically, surrounding this central region lays a cluster featuring a pattern of rapid enhancement followed by a signal plateau (Figure 3(d), orange line). The outermost cluster surrounding these two central regions features a slow enhancement behavior (Figure 3(d), yellow line).
The voxels corresponding to each cluster were extracted from the spatially registered 3D ADC maps in order to perform statistical analysis. The analysis was carried out in the whole 3D ROI, that is, taking into account the ADC values corresponding to all the clustered slices as a single volume. Normality tests (Jarque-Bera) revealed that the ADC values for the different clusters analyzed were not normally distributed. Accordingly, a nonparametric test (Wilcoxon signed-rank test) was used (P = 0.05) to evaluate whether the tumor's subregions corresponded to regions in the ADC maps with statistically different PDFs. In this way we found that the distributions of the ADC values in the DCE-MRI defined regions were statistically different, in each one of the two conditions, in 19 out of 21 patients. The radiologist reviewed the overlays in comparison to the DCE seen as a dynamic loop, the DWI images, and the ADC maps derived from them, as well as T2 STIR images. Criteria for the review were whether or not any of the subregions obtained by the method corresponded to a zone of necrosis based on the complete set of images and whether one or more regions that would be classified as either benign or malignant had been subdivided. Figure 4 illustrates a typical case setting $k_{ADC}$ to 1, 3, and 5. From the obtained results, the experts highlighted the usefulness of varying the parameter $k_{ADC}$ to emphasize different characteristics of the lesion. A high $k_{ADC}$ allows the discrimination between the core tumor and the surrounding regions by giving a higher weight to the difference between ADCs. This is mainly due to the fact that there is a progressive increase in ADC from the core of the tumor to peritumoral tissues to normal tissues, which leads to the possibility of using the ADC for locoregional staging [28]. Lowering $k_{ADC}$ allows the subdivision of the core based on DCE-MRI dissimilarity and the evaluation of the heterogeneity of the tumor thanks to the balanced contribution of DCE and DWI in the distance function $D$.
For the sake of cluster comparison and validation among different methods, silhouette analysis was used on all the clustering results. The silhouette analysis measures how close each point in one cluster is to points in the same cluster and how far away it is from points in the neighboring clusters. This is performed by quantitatively comparing the clusters by their tightness and separation, and the average silhouette width provides an evaluation of cluster validity [29]. The silhouette analysis highlighted an improved performance of 31% for the clustering performed using $k_{ADC} = 1$ with respect to the established approach that employs morphologic features derived from the DCE-MRI time-intensity curves and the ADC as an additional feature (Table 1).
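For reference, the mean silhouette width used for this comparison is directly available in scikit-learn; the stand-in data below are only there to make the snippet runnable, as an illustration rather than the paper's actual tooling.

import numpy as np
from sklearn.metrics import silhouette_score

X = np.random.rand(100, 5)             # stand-in for the voxel representations
labels = np.random.randint(0, 4, 100)  # stand-in for the cluster assignments
score = silhouette_score(X, labels)    # mean silhouette width, in [-1, 1]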
Discussion
As a general strategy, we have demonstrated a dissimilarity clustering based on multidimensional data derived from diffusion and perfusion MRI. Extension of the algorithm to additional data is straightforward, though the computational demand rises, and the similarity metric will likely need to incorporate further context-specific knowledge. As examination of tumor heterogeneity is carried out on a tumor by tumor basis, the data space can be restricted to areas containing lesions already located, but not necessarily segmented. For the specific use of DCE and DW-MRI, the lower resolution of the DWI data presents an issue of partial volume effects that affects the clustering of small lesions, but this issue is not specific to any one characterization strategy.
The two free parameters of the protocol, the number of clusters (K) and the relative weighting of the diffusion data ($k_{ADC}$), warrant discussion, as the present work provides only a starting approximation to their choice, and the values may well be pathology dependent. For an unsupervised classification as used herein, the number of clusters should follow the actual structure and separation of the data into natural groups.
For breast tumors such as ductal carcinoma, the reporting of DCE-MRI data is currently based on a three-way division, while DWI is binary between normal and abnormal. The three DCE curve types (a rise and fall, a rise to a plateau, and a steady rise) have established clinical utility in predicting tumor malignancy [30]. This is not to say, however, that only three subgroups are possible, nor that these subgroupings are predictive of treatment response, which is the motive for examining tumor heterogeneity. In fact, works such as [14] have demonstrated that as the temporal resolution increases, a higher number of curve archetypes can be naturally identified and can be used for classification of voxelwise perfusion curves. We consider it noteworthy, therefore, that when K was reduced to just three or four groups, these were identifiable with the 3 enhancement patterns (or these three and nonenhancement) used in clinical practice for the assessment of breast cancer. As well, the confines of the groups with DCE-MRI time-course patterns consistent with malignant and benign tumors coincided very closely with the tumor margin drawn by a radiologist. Increasing the K value showed the expected progressive splitting of these groups, with $k_{ADC}$ providing a distinction in the way this splitting proceeded based on the relative weight given to the diffusion data. The benefits of increasing the number of clusters are evident for understanding the heterogeneity of the lesion and the distribution of voxels that share certain similarities; however, the increase of the number of clusters should go hand in hand with cluster and data analysis techniques in order to avoid false or meaningless divisions. The overall protocol would also benefit from an integrating methodology such as cluster ensembles, in order to combine the multiple base clusterings done with different $k_{ADC}$ values into a unified consolidated clustering, reaching with this a consensus solution.
The primary criteria for noninvasive assessment of tumors based on DCE-MRI involve three enhancement patterns (four including necrosis/nonenhancement). In the clinical data used for this study, these assessment criteria have limited the validation to the visual interpretation of enhancement patterns based on the conventional interpretation of DCE curves, with a reader-dependent incorporation of ADC information. Ultimately, the envisaged application is in anticipating and evaluating treatment response. If tumor heterogeneity in terms of both perfusion and diffusion is to be encompassed, the conventional 3-way categorization may not be adequate or appropriate, and indeed for other organs this rating is less common. We are now looking into robust methods for further validation of the processing pipeline that would enable a clinical exploitation of the multimodal analysis. Access to ground truth beyond radiological and biopsy evaluation is needed and likely requires voxelwise comparison with the histology of resections, a process that requires modifications to the surgical procedure that were not justified for this first demonstration of the method. Even were histology image data available, a significant task remains in the spatial correlation of individual MRI voxels with the histological results in order to get the requisite voxel-scale validation.
Conclusions
In this paper, we presented a general methodology for heterogeneity quantification that integrates information from diffusion (an indicator of cellularity) and perfusion (reflecting blood volume, flow, and vascular permeability) MRI images and illustrated its use in application to ductal carcinoma. The demonstration illustrated that multimodal clustering leads to improved selectivity and yields a greater refinement of the segmentation of tissues within the lesion than the separate processing of the two modalities.
By demonstrating that statistically consistent subgroups can be defined within tumors based on a combination of DCE-MRI and DWI-MRI data, we have indicated a means for objectively segmenting tumors that can be used for larger studies to examine clinical impact. Moreover, the appearance of statistically distinct perfusion regions within the tumor at moderate and low ADC weightings that in turn have statistically distinct ADC distributions suggests there is a useable distinction present that is not capitalized upon in present clinical practice.
"year": 2012,
"sha1": "66ef33a7e9834be053e8f7965326546567be685e",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijbi/2012/676808.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "66cf3e1d8f1c963dc0735f8861ea63703059d6a2",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
235504362 | pes2o/s2orc | v3-fos-license | Successful Treatment of Myxedema Coma Using Liothyronine in the Setting of Adrenal Crisis
Abstract Background: Myxedema coma (MC) represents severe decompensated hypothyroidism and is associated with mortality rates up to 50%. It is precipitated by an acute event which disrupts compensatory mechanisms present in severe hypothyroidism. Treatment includes IV levothyroxine (LT4) with consideration of liothyronine (LT3) therapy and management of any underlying stressor. Here we present a case of MC and adrenal crisis due to pituitary dysfunction successfully treated with IV LT4 and LT3. Case: A 55-year-old female with no pertinent medical history presented with two weeks of shortness of breath, anorexia, fatigue, and unexplained falls. Her initial vital signs were notable for a blood pressure of 86/60, temperature of 36.3 °C, heart rate of 59, SpO2 of 86 on room air, and respiratory rate of 21. Exam was notable for altered mentation, respiratory distress, decreased bowel sounds, and edematous facies. Initial serum studies were notable for sodium of 133 mmol/L (135-145 mmol/L), blood glucose of 50 mg/dL (74-99 mg/dL), TSH of 2.1 mIU/L (0.45-4.50 mIU/L), and blood gas showed pH of 7.27 and PaCO2 of 84.7 mmHg. She was intubated, started on vasopressors, and IV hydrocortisone 100 mg was administered. Her pretreatment serum cortisol was unmeasurable (below 0.5 mcg/dL) and ACTH resulted at 2.0 pg/mL (7-63 pg/mL). Despite hydrocortisone 50 mg q8h, her vitals worsened with HR to 40 bpm and temperature to 34.4 °C. Given concerns for MC, free and total T4 tests were obtained and both were undetectable (below 0.4 ng/dL and 4.0 ug/dL, respectively), so 300 mcg IV LT4 was administered. The next day, the patient’s vasopressor requirement increased, so 5 mcg IV LT3 q8h was added and IV LT4 was maintained at 100 mcg/day. Total T4 and T3 were measured daily and increased into the reference range over the course of 8 days and 2 days, respectively. LT3 was discontinued after 8 days and LT4 was converted to oral regimen of 125 mcg LT4 (weight expected 115 mcg) on day 14 after extubation; her hydrocortisone was tapered to a daily total of 30 mg PO. An MRI of pituitary showed an empty sella with a thin rim of normal appearing tissue without other lesions. The patient later denied any history of post-partum hemorrhage, idiopathic intracranial hypertension, pituitary surgery, radiation, or trauma. She is currently doing well on LT4 and hydrocortisone replacement. Conclusion: This case highlights the successful use of combined LT4 and LT3 in the treatment of MC with concomitant adrenal crisis. LT3 therapy may have been particularly beneficial in this case as conversion of T4 to T3 may be limited in setting of severe illness and high-dose glucocorticoid administration. Limited observational literature suggests that LT3 use can have clinical benefit, although excessive LT3 dosing may be associated with increased mortality. Further research is required to elucidate the benefits of empiric LT3 use in MC with concurrent adrenal insufficiency.
Background: Pregnancy is unusual in patients with acromegaly due to somatotropinomas or somatoprolactinomas. Fertility is impaired because of hormonal hypersecretion, pituitary damage by tumor compression, or both. Managing somatoprolactinomas and fertility issues is often challenging.
Clinical Case: A 20-year-old woman with primary amenorrhea and headache was diagnosed with hypogonadotrophic hypogonadism secondary to hyperprolactinemia (2500 µg/L, n<23 µg/L). No other abnormalities were found on the pituitary function screening tests. MRI revealed an intra- and suprasellar adenoma (2.5x1.8x1.8 cm) with optic chiasm compression. The onset of menses occurred after 11 months of dopaminergic treatment, and tumor size diminished (1.9x1.5x1.5 cm), bringing about optic chiasm decompression. She remained under dopamine agonist treatment for 6 years, when she noticed enlargement of the extremities and a height increase of 3 cm. Acromegaly was confirmed by blood levels of IGF-1 (3.37xULN), GH (8 µg/L, n<8 µg/L), and GH nadir (4.3 µg/L, n<1 µg/L) during OGTT. Then, octreotide LAR was added to cabergoline treatment while waiting for elective surgical treatment. She underwent a transsphenoidal endonasal microscopic neurosurgical approach guided by neuronavigation, with removal of a large portion of the tumor. However, it was not possible to extract the part of the invasive adenoma close to the right carotid artery due to the risk of vascular and intracavernous cranial nerve injury. Immunohistochemistry analysis of the adenoma was positive only for GH cells with a low Ki67 index (<1%). Due to poor biochemical control (unsuppressed post-OGTT GH, IGF-1 1.66xULN and PRL 301 µg/L) and the presence of a small stable tumor residue, treatment with cabergoline and somatostatin analogues was maintained (3-year octreotide LAR, transitioned to lanreotide in an attempt to achieve a better biochemical response). Fourteen years after the initial diagnosis and 5 years post-surgery, the patient expressed the desire to get pregnant and all medications in use were suspended. In the following 3 years, she had two uneventful gestations without complications or worsening of acromegaly; she breastfed only for a few months after her first pregnancy. The second one was a twin pregnancy. After one year, MRI revealed no increase of the tumor mass (1.0x0.3x1.0 cm), PRL levels were within the normal range, and IGF-1 was slightly elevated, but GH was not suppressed by OGTT. Cabergoline was reintroduced and biochemical control of acromegaly was achieved. Conclusion: We report the very unusual spontaneous conception and normal course of pregnancies in a woman with acromegaly who underwent a successful transsphenoidal microscopic neurosurgical approach in which a large part of the tumor was removed and the normal pituitary tissue was preserved, allowing fertility restoration.
Background: Acromegaly is known to cause insulin resistance through increased gluconeogenesis and reduction in peripheral glucose use; however, hypoglycemia related to acromegaly has not been reported. Clinical Case: A 58-year-old man presented for evaluation of several elevated serum IGF1 levels. The patient had reported years of increased body heat but no changes in his hands or feet and no voice deepening. He recently needed 15 dental crowns due to gaps in his teeth. He also had difficult-to-manage OSA and weight gain. The patient reported neuroglycopenia after a high-glycemic meal or drink, although he was never able to objectively measure any low blood glucose values when they occurred; these symptoms improved but did not resolve despite adhering to a low-carbohydrate diet. He also had decreased libido and erectile dysfunction. Exam was significant for coarse facial features.
Prior testing revealed several elevated IGF1 serum levels, the most recent being 227 ng/mL. One year prior, OGTT resulted in an initial GH level of 0.1 ng/mL with a decrease to <0.1 ng/mL after two hours. Repeat OGTT had an initial GH of 2.98 ng/mL which paradoxically rose to 12 ng/mL. Fasting BG was 90 mg/dL and peaked at 171 mg/dL. Pituitary MRI showed a 5 mm microadenoma, consistent with acromegaly from a GH-secreting adenoma. He underwent a TSSC, and his heat intolerance, low libido, and symptoms of hypoglycemia resolved completely. Subsequent IGF1 levels and MRI imaging normalized. Postoperatively, OGTT showed a peak GH of 0.23 ng/mL with a peak glucose of 134 mg/dL. There was no paradoxical rise in GH. Discussion: Acromegaly is commonly associated with insulin resistance in ~30% of cases; however, there are no reports of associated neuroglycopenia after a carbohydrate-rich meal or OGTT, which in our patient resolved after successful removal of the pituitary microadenoma. His low glucose symptoms could have been a result of reactive hypoglycemia, which is often seen in patients with diabetes or even prediabetes. However, this patient had no history of either. He did not have evidence of any tumors causing hypoglycemia and no gastric surgery to suggest a related etiology (e.g., dumping syndrome or nesidioblastosis). Conversely, since GH is normally anabolic and stimulates insulin release, the patient's elevated GH may have caused an abnormal increase in insulin, leading to his hypoglycemia symptoms. Indeed, GIP, which stimulates insulin, is thought to be the cause of the paradoxical rise in GH seen in 30% of acromegaly cases. Remarkably, the patient's hypoglycemia symptoms disappeared after treatment of the acromegaly, which leads us to consider that excess GH was the culprit.
The Biggest Man in the Room
Sarah Tariq, MD. University of Arizona, Tucson, AZ, USA.
A 42-year-old gentleman who had been healthy all his life began to develop new clinical symptoms, including acid reflux. He was tested for H. pylori by his PCP, and | 2021-06-22T17:54:58.323Z | 2021-05-03T00:00:00.000 | {
"year": 2021,
"sha1": "c4912c0d4b2c38f0e8434325c3d64f01e80d318f",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1210/jendso/bvab048.1249",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0d6917abcc51b7cc8e14c19376a5f7920277d04c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
155324182 | pes2o/s2orc | v3-fos-license | China-US Relations Today: Some Aspects
The biggest issue in world affairs in the first decade of the 21st century is the People's Republic of China's fast-paced economic growth, which has become a key driver of international economic growth. Today the whole world is keeping an eye on China. China's stunning economic growth and more active diplomacy have attracted attention not only in Asia but also in Africa, the Pacific region, Latin America, and the whole of Eurasia. According to some analysts in leading countries, the American age is coming to its end. According to Albert Keitel (Carnegie Foundation), China's nominal Gross Domestic Product will equal that of the United States of America by 2020, and by 2050 it will be more than twice the size of America's. Within 10-15 years, America's great economic power will certainly slow down. This inevitably raises important questions: how will the international system develop? Will China abolish or change the present international system, try to build a new one, remain one of the members of the system, or become the leading power in the world? Can the US keep its present position? What will happen if the United States and China compete at the same level? What will the future world look like under Chinese leadership if the United States cannot keep its present position? The answers to these questions are of enormous importance, and we will try to find them by analyzing United States-China relations at the present time.
1. Introduction
China will play the main role in running the world economy. The Western world order will be replaced by a new Eastern structural system. China is well positioned as the sole representative from Asia and from the developing world among the permanent members of the United Nations Security Council. Beijing pushes for reforms in international financial institutions that would give China a much more prominent role in setting their policies. China expresses dissatisfaction with US-led management of the global economy.
Soon after taking office, President Barack Obama's administration had to make changes in some aspects of its foreign policy in both eastern and western directions. The former American administration's policy toward the East had been criticized by China. Recently, some American politicians have been putting forward new ideas on North-East Asian policy. At the same time, former US national security adviser Z. Brzezinski and H. Kissinger suggested that the US and China take the leading role in global politics by forming a "G-2" alignment to deal with salient international problems. The Financial Times of London published Z. Brzezinski's article, in which he wrote that the US and China, by forming a "G-2", have to shape the development of the world. According to him, it is already time for China to "shake the world" and take part in solving the strategic problems of the region.
The same ideas have also been expressed by Chinese-American scholars and statesmen. According to them, the relationship between the US and China is not just another bilateral tie in world politics; steady US-China relations will help safeguard world peace and stability. Chinese authors say that the two countries have important common interests and that the US together with China will run the future global order. In general, Chinese specialists believe that US-China relations are developing.
The US should treat China as an emergent global power with increased economic weight, because US economic growth is getting slower (Joseph S. Nye Jr, 2010).
2. The New Steps of the Bilateral Relations
The new US administration of Barack Obama has taken steps toward changes in US-China relations and in Asian policy, searching for a new way to frame the relationship.
After the dissolution of the world socialist system, the US became the sole great power and is trying to keep this position, rebuilding the geostrategic global space to preserve its hegemony. For this purpose, the most attention is given to those states with the most important political, economic, and strategic positions in the world. The US still remains the "sole superpower" and is jealously trying to guard the unipolar world, not only in the regions where it already operates but also in the Asia-Pacific, searching for new impulses for further development.
Washington officials consider that the Asia-Pacific region was shaped in line with American interests. It goes without saying that the US played a great role in the Far East during the Cold War. Nevertheless, China is trying to overtake the US, not only becoming its rival but also shaking Washington's authority. The most important thing for the US is to create an economic and strategic base to counter a great China. Some experts believe that the Asia-Pacific region will, in the near future, turn into the scene of a contest between the US and China. The region's share of world trade and economic development is increasing rapidly year by year: 50% of world trade and 60% of GDP belong to the Asia-Pacific region. So the US is searching for other methods of strengthening its influence in the region. The problem of Taiwan is a real example (Reilly, J., 2012).
For more than half a century, the military-political situation in the Far East and the Pacific has been at the center of world attention. The creation of US military-political blocs during the Cold War caused several regional conflicts, especially the Taiwan problem. Since then, Washington has been supporting Taiwan and engaging the whole Asia-Pacific region. The US ability to defend Taiwan in the event of an attack by China is increasingly challenged. The US and China have to take account of the problem so as not to complicate the situation in the Asia-Pacific region.
The Taiwan issue
The dispute over Taiwan is the most sensitive issue between the US and China. Seeking comprehensive cooperation and partnership with each other will be beneficial for US-China relations.
After the Second World War, the Cairo and Potsdam declarations returned Taiwan to China. But the contradictions between the West and the socialist world led the US government to support the Gomindan (Kuomintang) government in the civil war and denied any chance of uniting the island with the mainland. In June 1950, after the Korean War had begun, US President H. Truman stated that the future of Taiwan would be determined when there was stability in the Pacific, a peace settlement with Japan, and consideration of the problem by the UN. After Truman's statement, the American government sent its Seventh Fleet to Taiwan to protect it from any kind of aggression. Since that time, the problem of Taiwan has become one of the biggest disputes and contradictions between the US and China.
To improve US-China relations, three communiqués have been signed: 1) the first communiqué (February 28, 1972), known as the Shanghai Communiqué, in which the United States acknowledges that all Chinese on either side of the Taiwan Strait maintain there is but one China and that Taiwan is a part of China; 2) the 1978 communiqué, under which the US had to break its formal diplomatic relations with Taiwan and affirmed the ultimate objective of the withdrawal of all U.S. forces and military installations from Taiwan as tension in the area diminishes; 3) the Joint Communiqué on the Establishment of Diplomatic Relations, signed by Deng Xiaoping and Jimmy Carter, reestablishing official ties between the two countries.
However, although Jimmy Carter granted China full diplomatic recognition, Washington soon openly declared a new geopolitical game. Jimmy Carter signed the "Taiwan Relations Act". The Act stated that the US had taken a step to establish diplomatic relations with China and to settle Taiwan's future peacefully, but that any attempt to resolve the Taiwan problem by other means would pose a great danger to the Pacific region and would be of concern to the US. In addition, for the security of the Taiwanese people and the defense of the island's social-economic system against any force, the US would provide Taiwan with defensive arms to enable it to maintain a sufficient self-defense capability, without violating the "One-China policy".
According to the Act, the US undertook two main tasks in the field of military affairs and defense: first, to provide Taiwan with arms to build up the island's defenses; second, to resist force and threats and to strengthen the capacity to keep peace and a quiet life on the island. Today the US is still continuing the tradition of selling arms and military technology to Taiwan on the basis of the Act. The Act also effectively gives Taiwan the chance to secure de facto independence and develop its defensive capabilities. In September 1992, the US decided to sell F-16 A/D fighter planes to Taiwan in spite of China's objections. This action by Washington constituted a new intervention in the Taiwan problem and, at the same time, had negative consequences for building friendly ties between the two great powers.
Washington continued its policy of keeping distance between the island and the mainland. In September 1994, the US began to take measures to develop American-Taiwanese relations. For instance, the US allowed the Minister of Economy and Trade to pay an official visit to Taiwan and changed the department of West American Affairs into the Taiwan Economic and Cultural Representatives in America. In addition, the US allowed Taiwanese representatives to visit the American government on diplomatic issues and supported Taiwan's participation in the work of international organizations. This US-Taiwanese way of solving problems together complicated American-Chinese relations and strengthened the Taipei government's efforts to build "two Chinas" or "one China, one Taiwan".
In May 1995, the US allowed the former head of Taiwan, Li Danhuei, to pay a visit to America. Departing from its promise to acknowledge the "one China" conception, the American government supported Taiwan's "two Chinas" or "one China, one Taiwan" line. In March 1996, the US dispatched two aircraft carrier battle groups to the south-east coast during the PRC's military exercises, which complicated the situation in the Taiwan Strait. The Chinese side is against any kind of relations between the US and Taiwan. The representative of the Foreign Affairs Ministry of the PRC, Keying Xian, stated that "the Taiwan problem is the most sensitive and complex issue in the bilateral US-China relationship." Beijing considers the Taiwan problem through its "one, united China" conception, defending its own independence and territorial integrity. In turn, Beijing's position worries not only the US but also Japan and other countries, which fear that China may "swallow" the island.
After Chen Shui-bian was elected president of the island in 2000, Washington began to support Taiwanese "pragmatic diplomacy". Forming "two Chinas" or "one China, one Taiwan" and achieving "recognition of both sides" became the main strategy of Taipei's diplomatic relations with other countries (Kurlantzick, J., 2002).
Taiwan's democratization process and the Tiananmen incident had a negative impact on American public opinion about mainland China. Several business groups pointed to the PRC's advantages, while others supported the island. In addition, the powerful Taiwan lobby became one of the most important factors in American policy.
The Clinton administration considered China one of its most important partners, but its policy treated China more as an enemy than an ally. The strict policy toward China was continued under G. Bush. During the presidential election, G. Bush described China as a power producing strategic contest, and Secretary of State Condoleezza Rice later stated in a speech that China is not, first of all, a state trying to hold the status quo, but a state trying to change the balance, which makes it a strategic rival.
The new cooperation after September 11, 2001
American-Chinese relations changed radically following the September 11, 2001 attacks. China strongly supported the US in the war on terrorism and extremism, and the two countries worked closely on regional issues. It was a good chance for China to act more freely. New relations among the three capitals, Washington, Taipei, and Beijing, were established.
From the US side, a real step regarding the status quo was made at the end of 2003. Against the background of the war in Iraq, Washington officially called on Taiwan to give up its claims to sovereignty, thus supporting Beijing. For the US, trade and economic ties with China are more important than Taiwan: trade between Beijing and Washington amounts to billions of dollars.
To strengthen its strategic position around the Taiwan Strait, the US presented the Japanese-American treaty as a means of establishing security on Taiwan issues. This develops military-strategic cooperation in US-Taiwan relations, provides the island with advanced arms and technology, and helps it stand against any aggression or danger (first of all from the Chinese side). As a result, Taiwan has become one of the US clients buying military technologies from America, spending billions of dollars for this purpose.
Washington's double-standard policy has always been criticized by Beijing, which strongly declares that providing Taiwan with new arms does not maintain peace and stability in the Strait and, at the same time, damages US-China relations. Accordingly, in Beijing's view, the US must completely stop selling arms to Taiwan.
In 2005, China passed the "Anti-Secession Law". Under it, China would de facto prepare to resort to "non-peaceful means" if the authorities of the island tried to separate it from the mainland and declare formal independence (Lomanov, A., 2008, Fewsmith, J., 2008).
Washington's official representative Condoleezza Rice sharply criticized China's new Anti-Secession Law, which could cause further conflicts in the region. But the US did not take any action against China's policy or restrict cooperation in the way it did with Taiwan. Still, the US administration and the Pentagon regard China as a competitor, constantly studying China's growth in economic and military power and its greater role in world affairs. American military specialists point out that China is buying advanced arms from Washington, Russia, France, and Germany, and that today's Chinese military is a professional force that carefully analyzes the American military to identify its weaknesses.
Nevertheless, China will not use military power. According to experts of the Asia-Pacific Center for Security Studies (APCSS), conflicts between the great powers over Taiwan issues are out of the question: none of the sides would resort to military power because of the danger involved.
In practice, China, as one of the oldest civilizations, has always been willing to take on greater international roles that enhance Chinese national pride and status but do not require costs and risk. From the very beginning, the Chinese Empire was founded on realism and real strategy. Chinese civilization has its own psychology of thinking, and China is prepared to work in areas of mutual understanding. The 2008 Olympic Games showed the world that China had no intention of absorbing the island: Taiwan was referred to during the games as "Chinese Taipei," rather than "China's Taipei," as the mainland Chinese government had proposed. Both Chinese Communist Party General Secretary Hu Jintao and the new President of China, Xi Jinping, have declared that they seek peaceful means of uniting the island with the mainland (Jacques, M., 2009, James C. Hsiung, 2009).
In world politics, conflicts among countries based on national interests are considered a natural phenomenon, and the highest peak of tensions among great powers could take the form of global wars. The consequences of this scenario are well understood by all the sides involved. Despite their disagreements, they are making every effort to keep peace and stability in the Asia-Pacific region. This is clearly seen in the statements made by state officials of the two great powers. US Secretary of State Hillary Clinton described American-Chinese relations as "the strategic pivot of the 21st century". American statesman Henry Kissinger stated that he had been studying Chinese-American relations for several decades and had become aware that cooperation between the two countries had never been as close as it is now. Vice Chairman of the PRC Foreign Ministry Xui Tiakai stated that China and the US are in the same boat and have done more than any other economies to find solutions to pull the Asia-Pacific and world economy out of recession.
On April 1, 2009, PRC President Hu Jintao, during his meeting with his US counterpart, said that fruitful, friendly relations between the two countries serve not only their own interests but also the prosperity of the whole world. According to Hu Jintao, after Barack Obama took office, China and the US determined a new way to frame their relations and were able to establish a structural system of economic-strategic cooperation. In addition, Hu Jintao stated that China would demand adherence to the "one China" principle and would stand against an "independent Taiwan" or a "two Chinas" principle.
US President Barack Obama reassured that the US would continue relations with Taiwan in order to uphold peace and stability on the island and hoped for a peaceful settlement. In addition, the American President declared that bilateral relations between America and China concern not only the interests of the two countries: the future of the whole world directly depends on them (Guoxin Xing, 2009).
The state of relations between the US and China will affect the future world economy, and it is time to build a new cooperative relationship between them. Some global strategic issues affect all of us: for example, the global recession, nuclear proliferation, and the energy and climate challenges that confront us demand stronger relations between the two countries.
3.
On January 14, 2011, US Secretary of State Hillary Clinton made a statement about American-Chinese relations. She stated that the US-China relationship was facing its most difficult times and that both sides needed to form habits of cooperation and respect more effectively. Over the previous two years, Barack Obama's administration had established a deep, broad, stable, and positive relationship with China. The two sides achieved some good results, experiencing both success and failure. If they each meet their responsibilities as great nations, the relationship of the two great powers can be strong (Lyle J. Goldstein, 2012).
Several factors affect the US-China strategic rapprochement; the first is that the two countries are financially interdependent: the PRC buys a large share of American Treasury (savings) bonds.
Chinese officials complain about the large American deployment of air forces, nuclear submarines, and aircraft carriers, including strategic arms, and about military exercises in the Pacific region. The American anti-missile defense system against nuclear weapons being developed in the region (aimed first of all at the Chinese nuclear missile force) also worries China, as do military forces capable of intervening in a conflict and preventing China from accomplishing its military objectives. There is no hope that the new American administration will deviate from these measures. Nevertheless, Beijing is patient with the American military presence, against its will.
As for the US, it is concerned about cyber espionage apparently originating from China. The former director of the American Special Service Center, Allen Dallas, says: "Recently, we began to talk about red Chinese cyber espionage activities. In the future, Chinese espionage will bring great danger to Europe. If we give a realistic estimation, today China has become a great power in the Asia-Pacific region." He called on colleagues to take measures against these actions immediately (Halper, S., 2010, Glazunov, O.N., 2010).
A similar opinion was given by the Director of the US Reconnaissance Center, J. Klepper: "China has every opportunity of becoming a rival to America. US military ships operate over the South China Sea, where China is keeping an eye on them. The PRC side's attempts to push American ships out of the region are full of aggression and danger," he added. In his opinion, America will face all of these dangers within twenty years (Inkster, N., 2013).
Given these factors, it is very difficult to believe that the relationship between the US and China, or their rivalry, will ever be put fully in order. It goes without saying that collaboration, cooperation, and mutually profitable negotiations will continue as relations develop, but positively solving the global problems confronting humanity will take much time.
There are contradictions in American actions toward China. On the one hand, the US tries to hinder the Chinese economy; on the other hand, it remembers its economic dependence on China. Today the PRC is an important trade partner of America, and hindering the Chinese economy to some degree narrows America's own economic space. In addition, America aims to support Taiwanese independence and to defend Taiwan from Chinese force in the event of an attempt at unification. As for China, it aims to improve its authority in the world as well as in the region. It aims to develop cooperation actively with America as an important part of its fast modernization and therefore tries to avoid complicating bilateral relations.
It is not efficient for US aims to come to confrontation with China or to become too close to it. In connection with this, in our opinion, official Washington will take into account the following purposes in its dealings with China:
- In case of violations of WTO rules, to impose embargoes against Chinese firms and companies, hinder the PRC's economic growth, obstruct the transportation of energy resources exported to China, and send American investors to the regions where Chinese companies are working;
- To maintain US superiority in the field of science and technology, not to let China purchase such projects from the US or from other developed countries (including the RF), not to let it obtain them with the help of spying, and to fight against all means of espionage;
- To increase the capabilities of the US Army relative to the Chinese People's Liberation Army in terms of quality as well as quantity, to improve the operation of military bases in neighboring countries and allied forces in the East Asian region, and to take countermeasures against China's developing military power;
- To state officially and constantly that the US is always ready to support Taiwan with military aid in any situation, and to prolong the determination of Taiwan's national status.
Under such circumstances, "to provide regional security", American military forces should always be based in the region. In that case, America has the opportunity to control the situation there. In addition, by adapting bilateral and multilateral relations and cooperation to its interests, the US aims to influence the world situation from the standpoint of its own interests. US military bases in the East Asian region will become one of the important parts of American regional policy. As a result, America will defend its interests in the region by allocating American capital in great amounts and maintaining security.
With the help of its military authority and the support of its allies, the US controls the world economically, politically, and militarily, and influences all strategic regions of the world. All these priorities are used as means of security.
In one part of his address to the people of the Republic of Kazakhstan, President N. Nazarbayev highlighted the importance of building a nuclear-weapon-free world. New types of nuclear weapons, which are used by America in the name of security and stability, could, on the contrary, lead to the end of the world. Leading statesmen and politicians fully agree with this opinion, and the proposal of the President of the RK is supported around the world. Nevertheless, in April 2011 a new nuclear strategy was published in America, underlining that the nuclear arsenal will remain one of the main means of preserving security. Despite creating new types of weapons as a means of security, the US cannot fully renounce nuclear weapons (this is true not only of the US but also of other countries which have them); on the contrary, new types of nuclear weapons are being created in order to accomplish definite tasks.
Given the global situation as it is now (China might have acted differently in other circumstances), China is continuing to improve its military technologies with the help of its powerful economy. America constantly reminds the world community that China is becoming the number one threat to the USA. Documents issued by the White House state that China is maintaining close strategic ties with Asian countries.
The military contest between the two powers has negatively influenced relations between China and the USA. As a result of the meeting of B. Obama and Hu Jintao in London in April 2009, the two sides signed an agreement on American-Chinese strategic and economic ties. These agreements include issues concerning the improvement of cooperation between the military departments of the USA and China and the problem of the world crisis (Davydov, A.S., 2011).
During the negotiations, the American administration promised to cut its fiscal deficit in half once its economic situation improved. In its turn, China promised to widen the sphere of internal demand and strengthen macro-controls. After the London meeting, many American, Chinese, and other politicians started discussing new forms of partnership and alliance.
America was watching how, by the second decade of the 21st century, China had changed its concept of the 'peaceful civilization' to that of a 'harmonious world'. This was perceived by the American community as a striving for economic partnership during the world crisis. After B. Obama's visit to China, the Chinese government, which supported the harmonious-world direction, agreed to improve military cooperation. Military cooperation between the two countries had halted after the visit of Bush junior to Taiwan.
After the meeting of the leaders in London, many published articles discussed the "G-2". On 18 November 2009, the US President met Chinese Vice President Ban Xebio in Beijing. During the meeting, Ban Xebio announced that China was against a "G-2". He stressed that China is still a developing country with a population of more than one and a half billion; that in order to reach the position of a modernized country, China has a long way to go; that China will pursue its own peaceful foreign policy and has no intention of becoming an ally of any country; and that global problems should be solved not by the advice of one or two countries alone, but with the participation of all countries (Dobbins, J., 2012).
Cooperation between the two countries stalled in 2010. The dispute involving the American company Google; the American obligation to provide Taiwan with arms, as a result of which official China cut military cooperation with Washington; the Dalai Lama's official visit to America; China's human rights record and US criticism of it, long a major source of friction in the US-China relationship; cyber espionage apparently originating from China, a growing issue in the relationship; China's refusal to impose its own bilateral sanctions and its criticism of other countries for doing so; the sanctions against Chinese businesses; the crisis on the Korean peninsula; and the quarrels about Liu Xiaobo, a political dissident, writer, and 2010 Nobel Peace Prize winner: all these episodes represent setbacks in American-Chinese relations.
Despite the contradictions and conflicts in relations between the two countries, and the mutual distrust that has escalated in recent years, trade-economic, scientific-technological, and cultural relations have continued to develop, even as the two sides blame each other for growing tensions. Economic interdependence, China's economic weight, strategic intentions, and military capabilities will increasingly affect US policy choices. China holds US debt: it holds US Treasury securities worth more than $900 billion (according to some estimates, $1 trillion), and these resources fund America's weak balance of payments (Savinski S.P. 2010, Etzioni, A., 2013).
The chief scientific worker of the Russian Far East Scientific Academy, Yakov Berger, gave his own opinion in his article: China's nominal GDP has made it the world's second-largest economy, close to the largest. If China continues these trends, it will overtake America not only in the economic sphere but in other spheres as well. The American economy has slowed and unemployment has grown. These trends reflect the PRC's aim to become the first great power. The US should welcome China, even if it does not like the scenario (Yakov, B., 2012).
China is a major creditor nation and the largest foreign holder of American debt; recently it has been lending more than the World Bank. China is also opening regional markets for its own goods and thereby strengthening its influence in Europe. Today China has become 'the heart' of the world economy, and its continuous operation is of great importance. China has appealed for the safeguarding of Chinese investments in US Treasuries and called for policies that maintain the purchasing value of the dollar. In conclusion, both sides are interested in thawing the chill that formed between the two countries in 2010 (Bolutko, A.V., 2011, Kuznetsov, I., 2010).
Nevertheless, China's leaders have agreed to ease the entry of American companies into China and to let them compete on an equal level. The US, in its turn, agreed not to prevent China from exporting advanced technology products to the region. B. Obama stated: "Chinese economic growth gives great opportunities to increase jobs. We are ready to provide you with planes, cars, and computer programs".
In addition to economic issues, the leaders discussed the situation on the Korean peninsula, cooperation on sanctions against Iran's nuclear program, and the Sudanese referendum issue. The US President tried to develop the theme of human rights. Hu Jintao agreed that he was ready to talk on the issue, but warned that the strategy of non-interference in domestic affairs should be kept. Hu Jintao admitted that the issue of human rights still had to be worked on.
In the 21st century, China has come to play a vital role in international relations, strongly influencing the international order. The economic growth through which the PRC took one of the leading places among the top 10 most powerful countries has a lasting effect in both long- and short-term forecasts. But China's GDP refers mostly to the country as a whole, while GDP per capita is still relatively low, at about 6,100 USD, roughly eight times lower than that of the USA (Brown, K., 2012, Yong Deng, 2014). Thus, while being one of the leading countries in the world, China has huge gaps in domestic welfare, which means that the quality of life in the PRC still needs to be improved.
Geopolitically, China's interests lie mostly in the Asia-Pacific region, and the prospective international markets of South-East Asia are very interested in China's financial investments. But the USA, in its turn, has a very strong position in the region: strategic alliances with the Philippines, Australia, South Korea, Thailand, and Japan provide a sustained military presence. Generally, bilateral relations between China and the USA can be described as "reserved opposition". Neither of the two countries wants to push forward existing issues such as Taiwanese independence or military rivalry in the region. The US administration understands the importance of China's economy and potential in the region, and the same is true of China's leaders. Both countries are primarily interested in creating a mutually profitable approach, especially to questions of trade and economy. Moreover, being the biggest creditor of the USA, China is directly dependent on fluctuations of world trade and markets, and the USA has all the tools to influence and interfere with these processes, which are vital to the PRC's economic growth.
4. Conclusion
The development of U.S.-China relations at the present stage allows some conclusions. Firstly, the change in the relationship is mainly due to the political independence of China, its rapid economic development, its improved status in the international arena, and its strengthened strategic role in relations between the two countries. Secondly, bilateral friendly cooperation and common development not only provide benefits to the peoples of China and the U.S. but are also conducive to stability, peace, and prosperity in the Asia-Pacific region and the world. Thirdly, the stability, improvement, and development of China-US relations directly depend on the settlement of the Taiwan issue in accordance with the three communiqués. Fourthly, differences should be resolved in a spirit of mutual respect, equal negotiation, and commitment to common goals, despite the existing disagreements.
Despite the political challenges, China and the United States are interested in developing trade and economic relations, which is beneficial to the two peoples and contributes to stable development of bilateral relations.
In 2012, Hu Jintao was succeeded by Xi Jinping as China's leader, and B. Obama will be replaced after the election in 2016. All the contradictions and new issues, as well as the internal developments of the PRC and the US, depend first of all on the new leaders' actions. Nevertheless, world relations and the future of humanity depend on the relations of these two countries; living in peaceful Kazakhstan, we wish the two countries good relations with each other. | 2018-10-15T00:13:42.733Z | 2015-11-03T00:00:00.000 | {
"year": 2015,
"sha1": "4e9e3e742dc1f2f3a59baf11e358d035b86c8751",
"oa_license": "CCBY",
"oa_url": "https://www.mcser.org/journal/index.php/mjss/article/download/8029/7694",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4e9e3e742dc1f2f3a59baf11e358d035b86c8751",
"s2fieldsofstudy": [
"Political Science",
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
261351907 | pes2o/s2orc | v3-fos-license | Dual cognitive pathways to voice quality: Frequent voicers improvise, infrequent voicers elaborate
We investigate the involvement of Working Memory Capacity (WMC, the cognitive resource necessary for controlled elaborate thinking) in voice behavior (speaking up with suggestions, problems, and opinions to change the organization). While scholars assume voice requires elaborate thinking, some empirical evidence suggests voice might be more automatic. To explain this discrepancy, we distinguish between voice quantity (frequency of voice) and voice quality (novelty and value of voiced information) and propose that WMC is important for voice quality, but less for voice quantity. Furthermore, we propose that frequent voicers rely less on WMC to reach high voice quality than people who voice rarely. To test our ideas, we conducted three studies: a between-participant lab-study, a within-participant experiment, and a multi-source field-study. All studies supported our expectation that voice quantity is unrelated to WMC, and that voice quality is positively related to WMC, but only for those who rarely voice. This indicates that the decision to voice (quantity) might be more automatic and intuitive than often assumed, whereas its value to the organization (quality), relies more on the degree of cognitive elaboration of the voicer. It also suggests that frequent and infrequent voicers use distinct cognitive pathways to voice high-quality information: frequent voicers improvise, while infrequent voicers elaborate.
Introduction
We all know people in our work context who speak up frequently, fast, and fluently and individuals who are vocally more frugal. But which of the two speaks up with the best ideas, and which cognitive processes underlie whether a high-quality message is communicated? This paper aims to explore how much cognitive elaboration employees need for communicating high-quality voice, and whether the frequent and the frugal voicer reach high quality voice through different cognitive pathways.
Voice is defined as speaking up through the constructive communication of problems, challenging opinions, and suggestions for improvement [1,2]. Voice is proactive, discretionary, not make employees voice more [30]. Here, we propose a way to reconcile both views, by arguing that cognitive elaboration is more important for the quality aspect of voice, than for the quantity.
Looking closely at the arguments of authors in favor of an elaborate processing view of voice [11,12], it becomes clear that the reasons why elaborate thinking is necessary for voice pertain to innovativeness of the content of voice, whereas arguments in favor of a more automatic processing view of voice pertain more to why and how people decide whether they want to voice at all, thus to the voice act, not the content. In our opinion, both views are thus correct. Which one is correct, however, depends on what voice aspect we focus on: quality, or quantity. While the mere decision to voice may be taken more or less automatically, cognitive elaboration may be important to select and communicate messages that add originality and value to the organization. Studying these aspects together but independently in relation to WMC, is thus necessary to unravel whether indeed, elaborate cognition may be less important for the quantity of voice, than for the quality of voice.
Next to its potential value to organizational practice, voice quality is thus also an important aspect of voice behavior in the area of cognitive research that has not been studied before. We present 3 studies using different methods to investigate how voice quantity and quality are related, and how they relate to individual differences in WMC and are causally affected by WMC load-manipulations. Specifically, Study 1 (lab study, between-person individual differences) and Study 2 (experiment, within-person situational differences) were designed to test the involvement of WMC in objective, behavioral measures of quantity and quality of voice. In Study 3 (multi-source survey field study), we investigated voice quantity and quality towards both colleagues and supervisors to replicate the lab-results in an organizational sample.
Theoretical framework
To express voice is to challenge the status quo with the intent of improving the situation. Employee voice is an informal, voluntary way of constructively communicating ideas, suggestions, concerns, information about problems or opinions on work-related issues. Voice is proactive and constructive in the sense that employees who voice try to anticipate future problems in the organization as opposed to merely criticizing the current state of affairs [5]. Scholars have argued that this type of dissenting behavior helps organizations to learn from their mistakes [31], to avoid crises like those of the NASA space shuttles Challenger and Columbia [32], and to increase team innovation and effectiveness [33].
However, voice is not always appreciated by the individuals that the 'voicer' attempts to influence. For example, research shows that employees who voice do not always get favorable performance evaluations [34]. Evaluations by others play a crucial role because voice is inherently social. It always involves a target. Employees can voice towards different targets including colleagues (speaking out) or supervisors (speaking up), depending upon whom they feel might be able to set the desired change in motion [35]. However, even though voice behaviors are improvement-and socially oriented [1], the challenging nature of voice makes it interpersonally risky [36]. Since voice entails social risk and is not always effective in initiating change [37], one might assume that people think carefully before they speak out or up.
Voice quantity. Although it seems logical that people 'think' before (or while) they voice, it is not that clear yet how the voice decision process develops in the mind. Until recently, the predominant focus in the empirical voice literature has been on situational and personality factors that influence the act of voicing. Early voice measures tended to focus on whether or not people voice ideas, problems, or suggestions in organizations (e.g., 'this employee communicates his/her opinion . . . even if others disagree' [1]). We refer to this as voice quantity since it assesses whether or not people speak up. More recent measures of voice do include more aspects pertaining to the content of voice, yet use items that possibly combine both quality and quantity, thus making it difficult to disentangle the two [29,38]. Studies show several contextual and trait variables to be related to voice, such as safety and self-esteem [39,40], leadership [35,41], personality [4], and job autonomy [42]. Cognitive and experimental research on voice, however, is scarce and insight into the cognitive processes underlying voice decisions is limited to date. Several theoretical ideas exist on the involvement of elaborate cognition in the voice process. For example, some authors assume that voice requires elaborate cognitive processing, because it is intentional, planned, and future focused [28,29]. Others argue that voice requires a risk-reward calculus: when judgements of efficacy and safety increase, people voice; when they decrease, voice involves risk, and thus people remain silent. Factors that influence evaluations of risk and efficacy include voice climate in groups, felt psychological safety and the perceived probability of success, organizational support, and trusting relationships with voice targets [43][44][45]. However, even though there are many elements to consider when deciding to voice ('how to voice, what to voice, will someone punish me, am I able to get the message across?'), that does not necessarily mean that everyone always elaborately and consciously considers all those elements. Even complex decisions such as buying a house presumably depend largely on automatic processing [46]. So even though decisions seem highly complex, they are often taken based on automated social scripts [47].
Extensive research has shown that risky decisions are often based on heuristics and biases, reflecting automatic cognitive processing [48]. Studies show that people judge whether they feel safe or at risk based on affective properties [49], which forms a much more efficient way to navigate in a complex, uncertain world than elaborate processing. People often remain unaware of their motivations for engaging in or refraining from risk taking [50][51][52]. Voice research indeed shows that implicit theories about the effectiveness of voice predict whether people withhold voice that might be useful for management [53][54][55]. Similarly, other authors suggest that decisions to voice 'depend on general as opposed to specific effortful cognitive appraisals of context' [56] and might 'operate below the level of conscious, rational decision making' [2].
When people voice frequently, it would not be efficient to use elaborative information processing every time one decides to voice. Individuals likely simply choose to engage in the behavior that felt good in the past. Speaking up or staying silent may thus be the result of learned scripts for decisions under risk, which often involve little elaborate cognitive processing. In line with this, the only study that we know of that investigated the relationship between more general cognitive resources (GMA) and voice [30] found no association. This suggests that an ability to engage in complex thinking does not influence whether people engage in voice or not. Thus, we propose that voice decisions (voice quantity) are not the result of careful cognitive elaboration, but are rather made using fast and automatic cognitive processes.
Voice quality. A more likely part of the voice process to involve elaborate cognitive processing is the quality of voiced content. Although pioneering voice research has focused primarily on voice quantity (frequency), more recently, scholars have started to discriminate between types of voice on the basis of content. For example, people who feel obligated to change things at work voice more suggestions (i.e., promotive voice), people who feel safe voice more potential problems (i.e., prohibitive voice) [2,3], and people who self-monitor voice fewer deviating opinions [57]. Yet, we do not know what makes voiced information, whether it be a suggestion, a problem, or an opinion, of actual importance to the organization. While more recent content-specific measures such as promotive voice [3], constructive voice [38], or prosocial voice [58] presumably also capture quality to a certain extent, they do not explicitly disentangle quality from quantity, which is, in our opinion, necessary for studying the role of cognitive elaboration. We expect voice quality and quantity to be distinct (quantity does not always lead to quality) but inter-related components of voice behavior (e.g., frequent voicers more often experience what is valued and thus learn to voice high quality over time).
We define voice quality as 'the discretionary constructive communication of suggestions, problems, or opinions that add novel and useful information to the organization'. This relates to creativity, which is the generation of original yet appropriate ideas in any domain [59,60]. Thus, originality or novelty plays a role in both voice quality and creativity. One other way to conceptualize voice quality would thus be to refer to creative voice, or proactive creativity [61]. However, we refer to voice quality for the purpose of contrasting the quantity and quality aspects of voice in this paper. This means that compared to organizational creativity measures [62,63], the voice quality concept has a broader focus. For example, besides sharing solutions and suggestions, it is also about pointing out important problems, and besides sharing novel ideas or challenging opinions, voice quality should always be useful. Even though usefulness is in the definition of creativity, it is usually underrepresented in creativity measures. Furthermore, in the currently available measures of creativity in organizations, there is no clear differentiation between proactive creativity (self-initiated), and reactive creativity (a response to a request from the supervisor). Since we are interested in the quality of voice as a proactive behavior, such a distinction is important.
The same proactive-reactive distinction applies to a clear differentiation of voice quality from creativity measures that are used in experimental research. Experimental research on the influence of elaborate cognition and attention on creativity uses tasks [64][65][66] that are reactive in nature: people are requested to generate creative ideas. They are not assumed to initiate their own creative responses and goals, and they do not socially interact with others. Investigating more proactive forms of creativity, in this case the quality of voice, is thus important because highly creative people may generate great ideas in experiments, but never share them in a social environment. We want to study the involvement of cognitive processes not only in the generation phase, as has been done in previous work, but also within the process of deciding what to share with other people. Voice quality is thus a proactive, extra-role behavior: it refers to a voluntarily offered opinion, idea, or solution to a self-discovered problem. Voice quality is thus more social, more proactive, and contextually broader than the creativity that is typically studied in previous research.
While creativity and voice quality are thus different, we assume that they are related because both require the generation of novel information, and are thus facilitated by elaborative cognitive processing. Research suggests that Working Memory Capacity (WMC) facilitates the idea generation process [65,67]. Here, we argue that WMC is likely to influence the quality of voluntarily voiced suggestions, problems, and opinions not only because it allows for original idea generation (creativity), but also because it allows for developing, selecting, and communicating those suggestions, problems, and opinions that matter to the organization (voice quality).
WMC: Cognitive elaboration resource. 'We constantly overestimate the power of consciousness in making decisions, but in truth, our capacity for conscious control is limited' [30].
The capacity for conscious control is reflected in the capacity of working memory [26], which is limited by the quantity of information it can contain. Imagine being in a secluded park with 3 children. Two of them are playing in the grass, but the third one attracts your attention because she gets in trouble climbing up a tree. Before you attend to the tree-climber, you glance at the other two, remembering their position. You keep this visual-spatial information in mind when you climb up the tree to get the third one, which allows you to quickly locate and check on the others while you are in the tree. Now imagine the same scenario, but in a petting zoo with 8 children during high-season. Most people would fail to remember the location of all remaining 7 children, especially considering the distraction of animals and other kids. The number of children to be remembered (storage capacity) and the ability to rule out irrelevant children (distractor interference), varies greatly between people: there is a capacity limit to the memory-system. Working memory holds limited information 'online' to serve cognition [68]. How much we can keep attentively online at a time (Working Memory Capacity or WMC) ultimately defines the possible depth of our thoughts as well as our capacity for conscious control [26,69].
Since WMC is central to effortful and elaborate cognitive processing [25] and attention control [70][71][72][73], it is important for performing any task that requires focused and elaborate thinking, such as planning, reasoning, and persistent future goal pursuit. More specifically, the number of items (storage capacity) one can retain in the visual working memory is assumed to be equal to the number of interrelationships between elements that can be kept active while reasoning [74]. The storage component of (visual) working memory is thus a key limiting factor in our ability to understand abstract relationships between novel items [75,76]. This means that WMC is related to fluid intelligence, but is nevertheless a different construct [77,78]. WMC is a more basic, specific measure of elaborative processing capacity, which is not distorted by domain-specific, cultural, or knowledge-dependent skills. Visual working memory tasks are only about the number of items one can retain within the visual attention scope, not about reading ability, mathematical ability, or knowledge from long-term memory (crystallized intelligence).
Another way to measure the involvement of WMC in any behavior, one that specifically targets the part of WMC that controls attention, is to put load on working memory such that it cannot be used to execute other tasks [70]. Because WMC is limited, once it is 'full', it is occupied and cannot be used to cognitively elaborate on other issues. In our studies, we therefore use both visual WMC measures as well as attentional load manipulations to study the involvement of elaborative processing in the quantity and quality of voice. We thus investigate WMC's involvement in voice both in terms of individual differences in evolutionary basic, visual WMC, and in terms of causality. As argued above, we expect that with regard to the quantity of voice, there is little elaborate cognitive processing involved, and we therefore propose the following hypothesis: H1: WMC is unrelated to voice quantity.

In contrast to voice quantity, we propose that the role of WMC is more important for the quality of the voiced message. In support of a link between elaborate cognitive processing and idea development, experimental research has shown that elaborate processing is important for the generation of original ideas and the execution of musical improvisation. De Dreu and colleagues [65] showed that WMC facilitates idea generation because it allows for a persistent and perseverant focus of attention on the creative task. People high on WMC are able to stay focused on the generation process for longer periods of time and generate original ideas because they consider more possible alternatives within contextual categories.
Similarly, Benedek and colleagues [64] found effects of both WM storage capacity and WM distractor interference on creative performance, indicating that idea generation is fostered by the ability to filter out distracting information and the number of items one can retain in working memory. If elaborate processing thus facilitates the generation of original ideas, it may also be important for the evaluation, selection, and communication of high-quality ideas, problems, and opinions. Being able to assess whether a message contains novel information presumably requires understanding of the status quo and a thorough thought process to check whether, indeed, the idea, problem, or opinion is not a mere repetition of issues that were previously raised by others.
In addition to fostering the generation and communication of original ideas, we propose that cognitive processing is likely to help people select and communicate those ideas that matter in their specific context (usefulness). Sawyer [79] describes the importance of understanding and mastering 'the context' in order for group improvisation to lead to innovations, instead of mere chaos. According to Sawyer, the innovation process is about selecting those ideas with potential, and making incremental changes in multiple rounds of improvisations. This incremental, implementation-focused usefulness element is important for voice because voice is social and needs to help the organization, as opposed to disrupting organizational functioning. Selection and communication of ideas, problems, and opinions that are not only original, novel, and challenging, but also useful, valuable, and implementable, requires understanding and shaping of the content. This makes the ability to keep a large quantity and variety of information active in the mind (WM storage capacity) important. Furthermore, as voice is social, the ability to filter out distracting environmental noise (WM distractor interference) while focusing attention on what to say and how to say it might be needed for selecting the best idea for communication.
The arguments above suggest that WMC is relevant to voice quality because the generation, selection, and communication of messages that are both novel and useful requires elaborate cognitive processing. We thus hypothesize: H2: WMC is positively related to voice quality. Study 1 tests our hypotheses by investigating individual differences in WMC, whereas in Study 2, we manipulated WMC by distracting people during the selection of voice messages.
Methods study 1

Ethics, design and sample
Study 1 was conducted in 2013, according to the principles expressed in the Declaration of Helsinki. Informed consent (written) was obtained from all participants. All data was examined anonymously. Participants were aware that they could terminate their participation at any time, and retract their responses within 7 days after participation. Participants were first paid, and then debriefed. The data were collected over a period of 4 weeks. The same experimenter was responsible for data collection during the full data collection process. Below, we provide a short summary of our methods; a more detailed methods section is available at dx.doi.org/10.17504/protocols.io.tubensn.
This between-subjects lab study tested whether individual differences in cognitive resources (visual WMC) predict whether people voice (voice quantity, H1), and whether the content of people's voice is useful and takes new information into account (voice quality, H2). Participants were 72 business students (57.7% male, 42.3% female) in all years of enrollment, with an average age of 21.07 (SD = .21, range = 18-26) and a part-time job of 11.84 hours a week on average (SD = 7.39). They were tested for visual WMC and worked in a dyad on a fictional business case together with a team-leader. Participants had the chance, but were not required or requested, to voice opinions and suggestions (voice quantity) to solve the case. The quality of the solution reached served as a measure of voice quality.
Procedure, tasks, and measures
Independent variable WMC. The independent variable WMC was assessed by means of a delayed serial recognition task [56][57][58][59]. Participants performed a series of 96 trials. On each trial, participants were presented with eight randomly selected pictures, appearing sequentially in the center of a laptop screen. Each stimulus remained on screen for 250 milliseconds (ms), extending the complete trial over a period of 2000 ms. After a series of eight stimuli, the screen went blank for 1000 ms, during which participants had to keep all information in memory. For each set, we assessed response accuracy, with a perfect WMC score of 96 [80].
Working Memory Capacity was calculated with a linear transformation of accuracy into capacity as used by Sligte and colleagues [64,65]. WMC was calculated by transforming the percentage of correct responses into a capacity estimate between 0 and 8, correcting for chance level (the percentage of correct responses if participants randomly select a response, which is 0.5 or 50%). We used the following formula: WMC = (% correct responses − chance level) × n items in task × 1/(1 − chance level). This ultimately results in: WMC = (% correct − 0.5) × (8/0.5) = (% correct − 0.5) × 16. Performing at chance level would thus result in a capacity estimate of 0, and perfect performance would result in a capacity of 8. The number of items participants could retain in working memory ranged from 2.33 to 6.83 (M = 5.31, SD = .89).
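To make the transformation concrete, the following minimal sketch (our illustration, not the original analysis script) implements the chance-corrected capacity estimate for a two-alternative recognition task with eight items:

def wmc_estimate(prop_correct, n_items=8, chance=0.5):
    # Linear transformation of accuracy into a capacity estimate in [0, n_items].
    return (prop_correct - chance) * n_items * (1.0 / (1.0 - chance))

# Performing at chance yields 0; perfect accuracy yields the full 8 items.
assert wmc_estimate(0.5) == 0.0
assert wmc_estimate(1.0) == 8.0
print(wmc_estimate(88 / 96))  # e.g., 88 of 96 correct -> about 6.67 items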
Dependent variables voice quality and quantity. Following this test, participants were introduced to their team-leader, with whom they performed a filler task to get acquainted. Leaders were confederates who were specifically instructed and trained for the study (namely a male and a female faculty employee who were blind to the study's hypotheses and uninvolved in the study as authors). Gender of the leaders was counterbalanced between groups. To measure voice, we provided the team (participant and leader) with an information sharing task adapted from [81]. This task is specifically useful for measuring voice and initiative behaviors in the social environment, because team-members need to voice in order to get to the best solution. We specifically created a leader-follower situation because we wanted any voice from participants to be upward, discretionary, risky, challenging, and extra-role: the responsibility for the task execution was completely in the hands of the leader, and leaders had the power to distribute payment at the end of the experiment (or so we told the participants). Participants had the chance to voice, but were never specifically requested for their opinion or help during the task.
The team-leaders' task was to find the best new dean for the faculty. There were three candidates (A, B, and C) who all had different properties, which were selected via a pre-test of 283 students on favorable and unfavorable dean-attributes. The candidate profiles were composed in such a way that based on all available information, C would be the best candidate for the job, and A and B would both be unfavorable. However, the information that was provided to the individual team-members was (as it often is in practical situations of decision making), not identical. Based on only the leaders' information, B would be the best option, and based on the participants' information, A would be best. The team thus needed to share and integrate all information to get to the optimal decision, C (high quality).
After carefully reading all information on the sheet, the leader said: 'based on my information here, it is quite obvious that we should go for option B. Here is my information, confirming my decision'. If, at that point, the participant had proactively tried to find the best candidate for the job, s/he would have found out that from his/her perspective, A was the best choice. If the participant noticed this and wanted to mention it, s/he needed to challenge the leader and voice another option. The participant could now choose (1) not to voice and comply with the leader, or (2) oppose the leader and suggest another option (voice). Whether participants voiced or not was our measure of voice quantity. Voice quality was assessed by the relative quality of the option participants voiced. As explained previously, the participant could either choose the less optimal option A, thus staying with his or her previous preference, or derive a new perspective from the leader's information and voice option C, the best option from the combined viewpoints. The outcome difference between these suggestions is assumed to be the result of the evaluation and selection process for the voice message that integrates novel information and is most valuable to the organization (or in this case, the team). Voice quality was thus measured as a dichotomous variable with low (option A) and high (option C) quality.
Results study 1
Our data thus consisted of two groups: participants who voiced, and participants who did not voice. Because we hypothesized that WMC is unrelated to voice quantity (H1), we expected no WMC differences between participants who voiced and who did not voice. Indeed, we found no WMC difference between these groups, F(71) = .83, p = .367, η² = .012. Within the group who voiced, there were participants who voiced low and high quality. Because we hypothesized a positive relationship between WMC and voice quality (H2), we expected that WMC would be highest in the group who voiced high quality. As predicted, WMC was higher in the group who voiced high quality (N = 22, M = 5.65, se = .16, CI95 = [5.32, 5.98]) than in the group who voiced low quality (N = 27, M = 5.14, se = .16, CI95 = [4.81, 5.47]), F(1,47) = 4.92, p = .031, η² = .095. To visualize these effects, we split the sample into a Low and a High WMC group (using the median: 5.33). The Low WMC group did not voice more than the High WMC group (Fig 1A, χ² = .58, p = .448), whereas the High WMC group voiced more high- than low-quality solutions, and the Low WMC group voiced more low- than high-quality solutions (Fig 1B, χ² = 6.20, p = .01). This supports the idea that WMC is related to voice quality (H2), but not to voice quantity (H1).
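To illustrate the analysis logic (with simulated placeholder data, not the study's raw scores), the group comparison and the median-split contingency test could be run as follows:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wmc_voiced = rng.normal(5.4, 0.9, 49)   # hypothetical WMC of voicers
wmc_silent = rng.normal(5.2, 0.9, 23)   # hypothetical WMC of non-voicers

# H1: no WMC difference between voicers and non-voicers (one-way ANOVA).
f, p = stats.f_oneway(wmc_voiced, wmc_silent)

# Median-split contingency test, as in Fig 1B (hypothetical counts:
# rows = Low/High WMC group, columns = low/high voice quality).
table = np.array([[16, 8], [11, 14]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table, correction=False)
print(f, p, chi2, p_chi)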
Methods study 2
Ethics, design and sample

Study 2 was conducted in 2016, according to the principles expressed in the Declaration of Helsinki. Informed consent (written) was obtained from all participants. All data was examined anonymously. Participants were aware that they could terminate their participation at any time, and retract their responses within 7 days after participation. Participants were first paid, and then debriefed. The data were collected over a period of 4 weeks. The same experimenter was responsible for data collection during the full data collection process. Below, we provide a short summary of our methods; a more detailed methods section is available at dx.doi.org/10.17504/protocols.io.tubensn.
In order to show that our findings are not limited to between-person differences, but are similar within persons when elaborate cognitive processing is either possible or limited, we designed another experiment using a within-participant design. One of the ways to study causal effects of WMC is to put load on working memory. Taxing working memory with another activity (such as, for example, counting backwards) reduces the capacity that remains available for other activities (for example, voice). We thus designed two within-person conditions in this experiment, one with, and one without, distraction of working memory. All participants got the opportunity to voice in both conditions. Further, to assess voice quality, after the experiment, all voice was rated by three raters in terms of whether or not it provided novel solutions or useful implementations superior to the status quo.
Participants were 77 (we aimed for 80, but lost 2 participants due to incomplete data, and 1 because average idea generation was more than 2.5 SD below the mean) primarily business (55%) and psychology (36%) students (67% female, Age M = 21.20, SD = 3.08) who participated in exchange for money. They were welcomed by the experiment leader and seated behind a computer. The experimenter explained that payment depended on task performance and that performance would be monitored and evaluated at distinct moments in the experiment.
Procedure, manipulations, tasks, and measures
Participants could engage in two types of payment-dependent behaviors. Their primary job was the working memory load task (counting backwards), always resulting in a €2.50 salary if performed correctly. Secondary to this task, there were (extra-role) opportunities to voice, which could result in a €1 bonus (for high quality) or a €1 loss (for low quality) for every voiced suggestion, problem, or opinion, depending on the leader's evaluation of voice quality. The experiment started with an exercise of the working memory load task (counting backwards from 100, in steps of 2) followed by performance feedback. We explained that this task would be repeated several times during the experiment, and that this would always be their primary task with a standard reward of €2.50.
Idea generation phase. Our main aim was to find out whether WMC is only important in the idea generation phase, as found in previous research [65], or also important in the 'selection for communication' phase of voice. To make sure participants would have a pool of messages to choose from, they first anonymously generated 10 problems, opinions, or suggestions for improvement of their faculty [82] in a period of 10 minutes (for the remainder of the text, we call these: ideas). We told them explicitly that these ideas would not be evaluated or used for analysis, but that this was just an anonymous practice round. We needed 10 ideas to be able to fill both conditions (No WMC Load vs. High WMC Load) with 5 of the participants' own ideas to potentially voice. During generation, the experimenter digitally classified the ideas (low/high quality). The computer used these classifications for random-weighted message distribution across the WMC Load conditions, so both conditions contained an equal amount of high- and low-quality messages.
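A quality-balanced random split of this kind could be implemented as in the following sketch (a hypothetical illustration of the procedure described, not the software used in the experiment):

import random

def distribute_ideas(ideas):
    # `ideas` is a list of (text, quality) pairs, quality in {'low', 'high'}.
    high = [i for i in ideas if i[1] == 'high']
    low = [i for i in ideas if i[1] == 'low']
    random.shuffle(high)
    random.shuffle(low)
    # Alternate assignment so both load conditions receive (as close as
    # possible to) the same number of high- and low-quality ideas.
    no_load = high[::2] + low[::2]
    high_load = high[1::2] + low[1::2]
    return no_load, high_load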
Practice phase with simultaneous load manipulation. The generation phase was followed by two combined load and selection practice tasks. We designed two tasks (one simple, one more complex) in which participants needed to select items from a pool of 10 words (the 3 largest animals) or 10 sentences (5 true statements about animals) while they performed a working memory load task (counting backwards). The purpose of these tasks was to familiarize participants with the combination of counting while selecting messages for voice, limiting possibilities for technical confusion during the real opportunity for voice.
Communication (voice) phase and simultaneous load manipulation. Following the practice phase, participants got the instruction that they would get to see a random selection of their own ideas, as well as a random selection of other people's ideas. We added 5 ideas from a database of relevant ideas to each Load condition because voice takes place in a social context. In the social environment, the evaluation of the quality of one's own ideas also depends on the evaluation of the ideas of others. These other-ideas were kept constant between participants: each Load condition contained the full quality (low-high) range of ideas as coded by two independent judges. During the 'window for voice', participants thus saw 10 messages from which they could select those ideas they found 'worthy' to voice: 5 of their own, and 5 generated by others.
We told them that they would get the opportunity to voice any idea to the experiment leader, who would evaluate it and pay them accordingly. Voice (as opposed to the generation phase) was thus no longer anonymous and, moreover, financially risky: voicing a good idea would result in a €1 gain, voicing a bad idea in a €1 loss. Participants were not pushed to voice any ideas, because their primary and most profitable task was to count backwards, either from 80 in steps of 2 (High Load) or from 100 in steps of 0, thus repeating the same number vocally (No Load). Participants performed two trials with 10 messages each and a random order of High vs. No Load. Thus, participants had the opportunity to voice their previously generated ideas while performing a WMC load task. Voicing ideas was done digitally, by selecting and sending the message to the experimenter, who would evaluate the idea and calculate payment accordingly.
Dependent variables voice quality and quantity. Voice quantity was measured by counting the number of voiced ideas in each Load condition. Specifically, we counted how many suggestions, problems, or opinions participants sent to the experimenter (quantity) to be evaluated. Across conditions, participants voiced 2.87 ideas on average (range = 0-10, SD = 2.29). Further, whereas Study 1 measured voice quality as an objective operationalization of value and insight, we now used expert ratings of voice quality. After the experiment, all ideas from the generation phase were evaluated by three independent judges, who rated whether the ideas provided novel solutions or useful implementations superior to the status quo, on a scale from 0 (not at all) to 6 (highly) (adapted from [83]). Inter-rater reliability was reasonable for the generated quality (ICC = .659). Voice quality was the average quality of those ideas that were voiced by the participants (M voiced = 3.06, se = .036), which was higher than the average quality of the ideas they did not voice. Within-subjects 95% confidence intervals were computed following [84], as revised by Cousineau [85]; the same method is used to report all within-subjects 95% confidence intervals throughout the rest of the paper.
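For readers who want to reproduce this interval type, the following sketch implements the Cousineau normalization (we include the widely used Morey correction as an option; this is our illustration, not the paper's original code):

import numpy as np
from scipy import stats

def within_subject_ci(data, confidence=0.95, morey=True):
    # `data` is a participants x conditions array of scores.
    data = np.asarray(data, dtype=float)
    n, c = data.shape
    # Remove between-subject variability: center each participant on the grand mean.
    normed = data - data.mean(axis=1, keepdims=True) + data.mean()
    if morey:  # rescale the condition-wise spread (Morey's correction)
        m = normed.mean(axis=0)
        normed = np.sqrt(c / (c - 1)) * (normed - m) + m
    sem = normed.std(axis=0, ddof=1) / np.sqrt(n)
    half_width = sem * stats.t.ppf((1 + confidence) / 2, n - 1)
    return data.mean(axis=0), half_width  # condition means, CI half-widths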
Results study 2
We ran all main analyses using 2-level (No-Load vs. High-Load) repeated measures ANCOVAs, controlling for the order of the load tasks (No-Load or High-Load first). Furthermore, as both parts of voice are intertwined (voice quality cannot be evaluated without voice quantity), we controlled for voice quantity when testing WMC's effect on voice quality, and for voice quality when testing WMC's effect on voice quantity. Voice quality was entered as a continuous standardized covariate when testing for load effects on voice quantity (H1). Please note that since some participants never voiced, we could not compute their voice quality, which is modeled as a covariate; this influences the degrees of freedom. Similarly, we controlled for voice quantity, entered as a continuous covariate, when testing for load effects on voice quality (H2).

We expected that load would not influence voice quantity, and thus that average voice quantity would be the same across the Load conditions. Indeed, we found little difference in voice quantity between the Load conditions: in the No Load condition, participants voiced only slightly more (M No = 1.54, se = .082) than in the High Load condition. To increase power, we repeated the analysis using the average generated quality as a covariate. Power increases because people who did not voice (but did generate ideas) can then also be included in the analysis. Further, the fact that idea generation happens before voice is of importance because it might be that people with good ideas just have better ideas to choose from, and therefore voice more. This idea is supported by the finding that idea generation quality was positively related to overall voice quantity, r(78).

In contrast, we did expect that load would influence voice quality, and thus anticipated a difference in voice quality between Load conditions. We expected higher quality of voiced ideas in the No Load condition compared to the High Load condition, because load on working memory should impair elaborate processing and should thus impair effective selection of high-quality ideas. As expected, we found the predicted load effect on voice quality. Again, we controlled for voice quantity because quantity and quality are theoretically intertwined (no quality can be assessed without quantity). Participants voiced lower quality when under High Load (M High = 3.03, se = .054) than when under No Load. Similar to Study 1, this suggests that WMC is important for deciding what to voice.

However, the fact that the main effect was only significant when controlling for quantity suggested that there might be an interaction effect. Looking at the correlations between voice quantity and voice quality in the different Load conditions, we found a strong relationship between overall voice quantity and voice quality in the High Load condition (r(52) = .40, p = .004), but not in the No Load condition (r(60) = .01, p = .968). Also, we found a significant interaction between WMC Load and overall voice quantity on voice quality, F(1,41) = 6.88, p = .012, η² = .144. To shed light on the direction of this effect, we split (based on the median: 3) our sample into participants who voiced little (3 times or less, N = 21) and participants who voiced a lot (4 times or more, N = 24). We observed no effect of load in the group who voiced a lot (MΔ No-High = .09), but a clear effect in the group who voiced little. It thus appears that WMC is particularly important for the voice quality of those who voice little (Fig 2).
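The repeated-measures ANCOVA logic can equivalently be expressed as a linear mixed model with a random intercept per participant. The sketch below (simulated data and hypothetical column names) illustrates the design rather than reproducing the reported analysis:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40  # hypothetical number of participants
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "load": ["no", "high"] * n,                    # within-subject factor
    "order": np.repeat(rng.integers(0, 2, n), 2),  # which condition came first
})
df["quantity"] = rng.poisson(1.5, 2 * n)           # voiced ideas per condition
df["quality"] = (3.2 - 0.25 * (df["load"] == "high")
                 + 0.05 * df["quantity"] + rng.normal(0, 0.4, 2 * n))

# H2 test: load effect on voice quality, controlling for quantity and order.
fit = smf.mixedlm("quality ~ load + quantity + order", df, groups="subject").fit()
print(fit.summary())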
Please note that the degrees of freedom differ in these analyses because there were people who did not voice at all in either of the two conditions. The results of this within-participant experiment support the idea that WMC is uninvolved in voice quantity (H1), but is involved in voice quality (H2). The findings suggest that WMC is not only related to voice quality because it fosters idea generation (as found in previous research), but also because WMC helps individuals to select the best ideas, solutions, problems, and opinions to voice. The interaction effect between voice quantity and working memory load suggests that there might be differences in the involvement of WMC between people who voice often and people who voice little. Presumably, people who voice often do so more automatically and frequently in general. They might be more familiarized with cognitively evaluating and selecting ideas due to practice, and therefore need less WMC to do so. In contrast, people who do not voice often are not experienced at the evaluation and selection process, and therefore need more space in working memory to do so. In the third study, we set out to replicate our findings in a field setting and to test the idea that the role of WMC differs for those who voice often and those who voice rarely.
Study 3
Our next step in investigating (WMC) differences between voice quantity and quality was to test our hypotheses in a field setting. The first challenge was to create a survey measure that would clearly distinguish between voice quantity and quality. Voice measures such as the often-used one developed by Van Dyne and LePine [1] seem to tap more into the frequency of voice than its quality. We tested this assumption in a pilot study among 53 employees and their supervisors, by comparing the voice measure by Van Dyne and LePine [1] to several newly developed voice quantity and quality items. We found that the traditional voice scale was only significantly related to voice quantity but not to voice quality (please see the supplementary materials for a full report on this pilot study).
In the following phase, we developed the measures further in order to be able to clearly differentiate between voice quality and quantity in the field. We propose that this distinction is relevant because these two components are (in part) driven by different antecedents, and in particular show differential relationships with WMC. We also explored the relationships of voice quality and quantity with other proactivity constructs, such as proactive personality [86], personal initiative [87], and taking charge [88], and, more generally, with performance. The results of this multi-source pilot field study are reported in the supplementary materials (Table F in S1 Appendix), which presents the nomological network of voice quality and quantity. In summary, the results indicate that voice quality is more strongly related to performance (rated by managers), personal initiative (rated by colleagues), and taking charge (rated by managers) than voice quantity. Further, our Studies 1 and 2 already confirmed the relevance of this distinction for more objective measures of voice quantity and quality, specifically concerning WMC.
The first goal of our field study (Study 3) was therefore to replicate those findings in organizations with our further developed voice quantity and quality scales using evaluations of supervisors and coworkers. Our second goal was to investigate whether the reliance on WMC and thus the involvement of cognitive processing in voice quality, might differ between people. Our previous study (Study 2) showed that infrequent voicers were more dependent on WMC to reach high-quality voice than those who voiced a lot. There could thus be individual differences in the way people cognitively approach the act of speaking up. Some people may think elaborately before they voice, organizing thoughts beforehand. Others may be more prone to speak while they think, improvising on the way. Presumably, people get better at selecting and communicating good ideas through practice, and practice (voice quantity) should thus decrease the involvement of WMC in voice quality.
To test this idea, we draw on examples from the literature on creativity in musical improvisation. WMC and other measures of controlled cognitive elaboration seem to be important for creative performance only when musicians are not skilled at improvisation (classical musicians). Those who are highly skilled at improvisation (jazz musicians) are able to perform creatively without it. In classical music undergraduates, high WMC predicted creative performance over time [48]. In contrast, in jazz undergraduates, WMC was related to idea generation (something they were presumably not trained at), but not related to the creativity of the improvisational performance [89]. Brain research shows that the right dorsolateral prefrontal cortex, which has been linked to working memory processes in earlier research [90][91], is activated during improvisations of professional classical concert pianists [92]. In contrast, jazz experts show decreased activity in brain areas that are associated with consciously monitoring goal-directed behaviors [93]. These findings support the idea that practice reduces the role of WMC in creative performance, and that WMC might serve to overcome a lack of experience in improvising by increasing the creative quality of the improvisation through cognitive elaboration.
Extensive training in a creative domain thus seems to be associated with different patterns of brain activation [94]. In sum, creating novel musical patterns may either be achieved through persistent cognitive elaboration (supported by WMC), or through more relaxed, flexible, and intuitive cognitive processing (supported by improvisational practice). We propose that these findings also generalize to other innovative behaviors, and that voicing works in a similar vein. Thus, people who voice often may be less reliant on their cognitive resources to reach high-quality voice, as they may have learned how to voice well and usefully in a given context. Those who do not voice often, or who have little experience in a given context, need to organize all information in their mind and focus attention on the task at hand to be able to speak up with a good comment, and they need WMC to do that. This implies that in practice, there may be two routes to voice quality: a cognitive elaboration route facilitated by WMC, and an intuitive improvisation route that is paved by voice experience (and independent of WMC). We thus hypothesize: H3: Voice quantity moderates the positive relationship between WMC and voice quality, such that the relationship between WMC and voice quality is strongest if employees voice rarely.
Ethics, design and sample
Study 3 was conducted in January and February of 2014, according to the principles expressed in the Declaration of Helsinki. Informed consent (written) was obtained from all participants. All data was examined anonymously. Participants were aware that they could terminate their participation at any time, and retract their responses within 7 days after participation. Participants were not paid but participated voluntarily. We offered them a visual presentation of the full research project, and the opportunity to request their personal scores compared to the full sample in a personal visual representation after the study was finished. The data were collected over a period of 8 weeks. Participants were 158 employees, ranging in age from 16 years upward.
Procedure, tasks, and measures
We collected visual WMC and voice data from 158 employees in 79 teams of unique matched triads. Each employee was tested for visual WMC, after which they completed a questionnaire to rate one of their colleagues' voice quantity and quality (and the colleague did the same about the focal employee). The team supervisor (N = 79) rated both employees on voice quantity and quality. All tests and questionnaires were administered with a researcher present. Researchers made an appointment with each team, personally explained and distributed all measures, debriefed participants, and left after all data were collected.

Independent variable WMC. Participants first took a visual WMC test (programmed in Presentation version 17.3), which was a combination of change localization and storage tasks that has been used in previous research on visual WMC [85]. There were three different basic tasks: change localization, serial spatial memory, and simultaneous spatial memory. All basic tasks were executed with pure storage attributes as well as distractor interference, resulting in 8 different types of trials. In all trial types, participants had to store objects (circles and rectangles) in their working memory (remember them). During this target phase, objects were displayed for 250 ms in a 5x5 grid. Objects were green (targets) and yellow (distractors) rectangles (120x40 pixels, 0.89 visual degree) and circles (diameter 120 pixels, 2.68 visual degree). After the objects disappeared, their locations had to be remembered during a delay period of 500 ms, after which participants either had to recall the location of all circles or localize the one rectangle that changed. The test started easy (1 object), adding objects gradually over trials, and only ended when the individual capacity limit within all trial types was reached. The test was reliable (α = .73). WMC was calculated using Cowan's K, a capacity estimate for tasks that require participants to remember multiple items in one visual field, as opposed to the serial task we used in Study 1 [95,96].
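As an illustration of the estimator, the classic Cowan's K for change-detection data is sketched below; the localization and spatial-memory trial types used here may require task-specific variants of this formula:

def cowans_k(set_size, hit_rate, false_alarm_rate):
    # Classic Cowan's K for change detection: K = N * (H - FA).
    return set_size * (hit_rate - false_alarm_rate)

print(cowans_k(6, 0.85, 0.15))  # e.g., about 4.2 items retained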
Dependent variables voice quality and quantity. Colleagues and supervisors rated focal employee voice quantity (15 items) and quality (15 items) with a questionnaire (Tables C and D in S1 Appendix). To check that raters understood that the behavior we intended to measure was (a) discretionary, proactive, and constructive, and (b) distinct, in the sense that quantity does not imply quality and vice versa, we first explained this, followed by control questions. Voice quality was meant to reflect the novelty and usefulness of voiced information (suggestions, problems, and opinions). Voice quantity was reflected in the frequency of voiced information (suggestions, problems, and opinions). One voice quantity item was deleted due to poor factor loadings. Cronbach's alphas were high for colleague (α quality = .93, α quantity = .89) and supervisor ratings (α quality = .96, α quantity = .91). Descriptives and correlations can be found in Table E in S1 Appendix.
Measurement model. Our first aim was to replicate our pilot-study findings (Tables A and B in S1 Appendix) that voice quality and quantity are separate constructs. If voice quality and quantity are indeed distinct, a 4-factor (2 sources x 2 types) measurement model should fit the data better than a 2-factor (2 sources) model, which represents voice as one construct. We tested this with a CFA in Mplus. As expected, the 4-factor model with a separation of quantity and quality provided a good fit (χ² = 167.07, df = 101, N = 144, p < .001, CFI = .97, TLI = .96, RMSEA = .07, 90% CI [.05, .08], AIC = 4055, BIC = 4206). We compared this model to a 2-factor model, with quantity and quality as one factor and the source (colleague vs. supervisor) as separate factors, which showed a significantly poorer fit (Δχ² = 397.53, df = 2, p < .001; χ² = 564.60, df = 103, N = 144, p < .001, CFI = .78, TLI = .74, RMSEA = .18, 90% CI [.16, .19], AIC = 4448, BIC = 4594). This again suggests that voice quality and quantity are distinct constructs, not only when rated by supervisors, but also by colleagues. More details and in-depth analyses can be found in the supplementary materials (S1 Appendix); factor loadings (range = .74-.95) and scale details can be found in Tables C and D in S1 Appendix.
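The nested-model comparison above boils down to a chi-square difference test, which can be verified directly from the reported statistics:

from scipy.stats import chi2

delta_chi2 = 564.60 - 167.07   # chi-square difference = 397.53
delta_df = 103 - 101           # df difference = 2
p = chi2.sf(delta_chi2, delta_df)
print(delta_chi2, p)           # p << .001, favoring the 4-factor model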
Results study 3
In the previous studies, we showed that WMC relates to the quality of voice (H2), but not the quantity of voice (H1). Although our hypotheses are at the individual level, our study has a two-level data structure, with employees nested in 3-person teams with one supervisor. Employees rated each other, whereas supervisors rated both employees. To take between-group and within-group variance into account when using combined ratings of colleagues and supervisors, we applied multilevel structural equation modeling (Mplus) to control for team-level variance. Since our measurement model indicated that colleague and supervisor ratings are distinct factors, and the literature implies the same, we first tested our hypotheses in separate multiple regressions.
The regression analysis of our first model (colleague ratings) showed no significant effect of WMC on the quantity of voice (β controlled for quality = .02, p = .832, se = .07; β simple effect = .10, p = .225, se = .08), whereas the effect of WMC on voice quality was positive (β controlled for quantity = .14, p = .04, se = .07; β simple effect = .18, p = .016, se = .08). Employees with high WMC thus voiced better suggestions, opinions, and problems to their colleagues than employees with low WMC (H2). However, they did not voice more often (H1). As expected, we found the same pattern of results for supervisor ratings. Employee WMC did not significantly predict voice quantity (β controlled for quality = -.03, p = .622, se = .05; β simple effect = .12, p = .095, se = .07), but did positively predict voice quality towards supervisors (β controlled for quantity = .13, p = .033, se = .06; β simple effect = .21, p = .011, se = .08). Results for the combined sources show the same pattern: WMC is unrelated to voice quantity (β controlled for quality = -.04, p = .418, se = .05) and positively related to voice quality (β controlled for quantity = .15, p = .005, se = .05). The capacity of working memory thus seems to be important for the generation, selection, and communication of high-quality voice, regardless of the voice target.
We hypothesized that due to practice, people who typically voice often rely less on WMC than people who tend to voice rarely (H3). Consequently, we expected negative interactions between voice quantity and WMC when predicting voice quality: WMC should have the strongest positive effect on voice quality when voice quantity is low. We first tested this assumption concerning all ratings of voice (supervisor and colleague ratings combined) in Mplus using a robust maximum likelihood estimator. We started with the total model (both sources), and continued with robustness tests with any possible combination of sources in the model. If practice indeed increases the automaticity of voice and in turn decreases the influence of cognitive resources (WMC), this pattern should exist in every model, regardless of the source rating quantity and quality. As expected, we found a negative interaction effect (β = -.17, p = .003, se = .06). Simple slope analysis (based on a split through the mean) showed that the relationship between WMC and voice quality was positive in the group who voices little (low quantity: β = .40, t = 3.33, p = .001) and non-significant in the group who voices often (high quantity: β = .15, t = 1.30, p = .199). This interaction (Fig 3) is in line with our findings in Study 2 (Fig 2), where WMC load affected voice quality only for participants who voiced little. Next, we proceeded with the same analyses for every possible combination of sources. We consistently found the same pattern of results in all models, which can be seen in Table 1.
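A sketch of the interaction and mean-split simple-slope logic (with simulated placeholder data and ordinary least squares, rather than the study's multilevel estimator) could look as follows:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 158  # hypothetical employee-level sample
df = pd.DataFrame({"wmc": rng.normal(0, 1, n), "quantity": rng.normal(0, 1, n)})
# Build in a WMC effect on quality only when voice quantity is low.
df["quality"] = (0.4 * df["wmc"] * (df["quantity"] < 0)
                 + 0.3 * df["quantity"] + rng.normal(0, 1, n))

# Interaction model, then simple slopes within mean-split quantity groups.
print(smf.ols("quality ~ wmc * quantity", df).fit().params)
for is_high, grp in df.groupby(df["quantity"] >= df["quantity"].mean()):
    slope = smf.ols("quality ~ wmc", grp).fit().params["wmc"]
    print("high quantity" if is_high else "low quantity", round(slope, 2))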
To summarize all results of Study 3, in Fig 4, we provide a visual overview of all hypothesized direct and interaction effects. This model shows that WMC relates to voice quality (H2), but not voice quantity (H1), and that quantity of voice (which is positively related to quality), moderates the relationship between WMC and voice quality (H3).
General discussion
This paper aimed to incorporate knowledge from experimental psychology and cognitive science to further our understanding of the cognitive processes underlying voice behavior in organizations. We proposed and found that voice quality and quantity differ with respect to their relationship with elaborate cognitive processing in terms of WMC. As hypothesized, the results show that voice behavior in and of itself (i.e., voice quantity or frequency) does not require either the ability (Working Memory Capacity) or the opportunity (no load on Working Memory) to engage in elaborative thinking. Generating, selecting, and communicating high-quality messages to voice (voice quality), however, does require cognitive elaboration, especially for people who rarely voice.
First, our results showed that voice quantity required less cognitive elaboration than voice quality. Voice is socially risky behavior, and previous research has shown that the assessment of risk is often not rational, but rather based on affect and heuristics [52,97,98]. When people decide to voice, they might just do what felt good in the past, relying on social schemas and affect, thereby making cognitive elaboration unnecessary for deciding whether to voice or not. Even though voice is a proactive behavior, and proactive behavior is assumed to be discretionary and agentic, the existence of constructs like proactive personality [86] and personal initiative [99] indicates that there is a trait-like, automatized component to proactivity that is relatively stable across situations. If people are consistently proactive (or un-proactive), they might not always carefully deliberate whether they want to be proactive or not. Our findings indicate that when it comes to proactive voice behavior, this could be the case. Whether (Study 1) and how often (Study 3) people voice opinions, problems, or suggestions to colleagues or to supervisors was not related to individual differences in WMC, and was unaffected by WMC load (Study 2).
Our findings add to previous research showing that silence is often the result of fast, implicit, and affective reactions, not of careful cognitive elaboration [53,54]. Silence, however, is not the exact opposite of voice [100]: where silence means that the employee knowingly withholds information, not voicing could mean that there simply is no information to share. People high in WMC may pick up more information and therefore theoretically have more to share, which could result in more voice. Our first lab study, however, in which participants had to detect and voice information inconsistencies, showed that WMC was equal between the group who voiced and the group who did not voice. So even if high WMC increases the capacity to detect and solve problems and inconsistencies, this does not mean that this capacity is used, at least not in a short time frame such as our experiment. Our results thus indicate that not only the decision to remain silent, but also the decision to voice might be more automatic than elaborate and cognitively complex.

Second, based on the relationship between WMC and creativity [27,64], we hypothesized that WMC plays a role in reaching high-quality voice content through facilitating persistent, elaborate thought. The more information one can actively maintain in awareness (storage capacity), and the better one's capacity to control attentional resources (distractor interference), the more elaborate and complex thinking is possible. This should increase chances to come up with new and useful ideas (creativity), but should also help to evaluate, select, and communicate the best information (voice quality). Indeed, we found positive relationships between the cognitive resource WMC and objective (Study 1) as well as multi-source perceptual measures (Study 3) of voice quality. Furthermore, hindering elaborate systematic thinking (through WMC load, Study 2) impaired the selection of high-quality ideas for voice. People presumably need space in their working memory for voice, but mostly when they try to evaluate and select high-quality messages to convey.

Third, our results also point at individual differences in using elaborate cognitive processing for voice behavior. More specifically, the involvement of cognitive resources in voice quality seems to differ between people who voice often and people who voice little. The data in Studies 2 and 3 indicate that the involvement of WMC in voice quality is strong for people who voice little, but nonexistent for people who voice often. There are several theoretical explanations for these findings, which we discuss below. We start with our main explanation, namely that those who voice rarely rely on cognitive elaboration (and thus WMC), whereas those who voice frequently have learned to improvise high voice quality over time.
Cognitive elaboration versus improvisation
We argued that voicing often would result in experience with the evaluation and selection of messages and with typical reactions from others in a given context. People who voice more than others should thus decrease their reliance on cognitive resources for high-quality voice over time, allowing them to become faster and more cognitively effortless at voicing high quality. People who voice little, however, need time and cognitive elaboration to reach high voice quality. These results mirror the findings from creativity research that high-WMC musicians only become more creative when they spend longer periods of time on an improvisation [65]. WMC is thus beneficial to those who spend time on elaboration. Frequent voicers might be better able to anticipate reactions of others due to previous experience, and may therefore act more intuitively during conversations, making the advantages of WMC less important for reaching high-quality voice.
The fact that voice is more novel to infrequent voicers than to frequent voicers is important because of the storage capacity function of WMC. People who voice often may not need the same degree of WMC to form complex connections between concepts, because these already exist in long-term memory and automatized patterns. Because they have experience with voice (and thus with thinking about what to say), they can chunk more information, making it easier to keep track of previously mentioned information and to maintain quality over time. People who voice little, however, cannot rely on long-term memory because to them, message evaluation and selection is novel, and thus their storage capacity determines whether or not they are able to voice high quality.
Alternative explanations
Another, more trait-based interpretation of our results is that people differ in their use of working memory resources because of personality (as opposed to learning). For example, introverts voice less than extraverts [101], but they are not necessarily less creative [102]. Because WMC enhances creativity, high-WMC introverts may voice rarely due to their personality, but voice high quality nonetheless because they have a high-quality pool of ideas to begin with. Extraverts may use a quality-through-quantity route: if one voices a lot of ideas, the chance that one of those ideas will be of high quality increases. It could also be that some people are generally more creative than others [103], and just have a better-quality idea pool to begin with because they spend more time generating ideas in general. These explanations are supported by the fact that the quality of ideas in the generation phase of Study 2 was related to the number of voiced ideas in both WMC load conditions. There might thus be a path from creativity, through voice quantity, to voice quality, or from voice quantity, to creativity, to voice quality. However, our data does not allow for testing the order of the variables in this chain, and longitudinal research is needed here to better explain this process.
Strengths, limitations, and future research
Sample and experimental design. An obvious limitation of this study is the use of a student sample and of simulated organizational tasks. Although we tried our best to simulate the organizational environment carefully by using an actual social interaction, leaders, evaluations, and (financial) riskiness of voice, the external validity of our two experiments may have suffered from the highly controlled settings. The findings from the two laboratory studies thus require replication in samples of employees (which we did in Study 3), but also in field experiments and in real-time business environments (e.g., a study with recordings of actual voice behavior during meetings [104]).

Voice measures. Because we wanted to show that our hypotheses hold across different types of research methods, we also used different types of voice measures, which we feel is a strength. We measured the quantity and quality of voice differently in all studies with the intention of showing that the distinction between voice quantity and quality is not measurement (and thus method) specific. The convergence of the results across all three studies provides evidence for the validity of our theoretical distinction. Yet, we acknowledge that our measure of voice quality in Study 1 is closer to decision-making quality than our measures of voice quality in Studies 2 and 3. However, we aimed to explore what mentally happens when people decide to voice or not, and what they decide to voice (and what the quality of that 'what' is), and our voice measure in Study 1 covers the core aspects of voice even though it incorporates the originality and usefulness elements of voice differently than the measures in Studies 2 and 3.
One other point of note is that the correlation between supervisor ratings of voice quantity and quality was rather high (higher than for colleague ratings). This could indicate that supervisors are less able or inclined to separate the two. It could also imply that when employees choose to voice, they may be more selective in what they voice to supervisors compared to what they voice towards colleagues. First, if an employee feels that s/he has something valuable or useful to say, the likelihood of voicing to a supervisor presumably increases. Second, if voice quality towards supervisors is low, negative feedback might influence the quantity of voice more severely than in the case of voice towards colleagues. Furthermore, even though the relationship between supervisor-rated quantity and quality of voice was relatively strong, WMC was still only related to voice quality, not to voice quantity. So even though voice towards supervisors might be riskier than voice towards colleagues, WMC is not more strongly related to how often employees voice towards supervisors than to how often they voice to colleagues. This is in line with our idea that deciding to voice is not necessarily elaborated, because risky decisions rely on automaticity rather than on cognitive elaboration.
WMC measures and manipulations. Although our results show the same pattern across three studies, we have to point out that we used different types of WMC measures in each study and that the theoretical implications might be slightly different for each study. In our first experiment, we used a visual WMC measure that may capture more than working memory capacity alone. Because the task requires participants to recognize whether or not an object was present in the previous series, part of the score may reflect long-term memory effects on top of WMC. This might explain why the mean scores were slightly higher than in Study 3, in which we used another measure of WMC based on recalling objects' locations, as opposed to recognizing whether an object was previously presented. This mean difference, however, could also be explained by the fact that Study 3 used an employee sample whereas Study 1 used a student sample (which is supported by the fact that standard deviations were larger in Study 3). Still, it might be the case that in Study 1, better long-term memory played a role in participants with high WMC scores voicing better-quality ideas.
Ideally, in Study 2, we should have measured individual differences in WMC on top of our load manipulation. The complexity of the design forced us to keep the additional tasks as short as possible and thus prevented us from adding individual differences in WMC as a covariate. Furthermore, although the literature supports the idea that our manipulation in Study 2 put load on the capacity of working memory, this remains an assumption. Since the use of WMC requires attention control [71][72][73], distracting attention with another task should (partly) limit the use of WMC [65,70,105]. However, a more conservative conclusion from the results of Study 2 would be that distracting attention from voice limits the quality of the voice of those who voice rarely. Nevertheless, we found the same interaction in Study 3, indicating that this effect is likely driven not only by attention and distractor interference, but also by the storage component of WMC.
Future research. We tested our core assumptions about the relationship between WMC and voice quality and quantity in three different designs: a between-subjects lab design, a within-subjects experimental design, and a multi-source field study. The proposed (un)involvement of WMC in quantity and quality of voice was found and replicated in all three studies. Our study design only allowed us to test (and confirm) the negative interaction between WMC and voice quantity in Studies 2 and 3. It would be interesting to replicate and extend this using multiple indicators of experience and personality. Investigating differences in WMC involvement during voice between employees whose core task involves innovation and employees from other (non-innovation-trained) departments, between teams that have or have not had proactivity training [106], between individuals from teams with differing climates for voice [45], or between people high and low in personal initiative would deepen insight into this interaction. Furthermore, over time, WMC might influence voice quantity through quality. Even for those who voice rarely, positive responses to high-quality voice will likely enhance voice quantity in the future. Investigating mediation processes between WMC, voice quality, and voice quantity, and moderating effects of different leadership styles, would benefit our understanding of what the ideal climate for voice quality is.
Finally, it is not yet clear whether WMC facilitates voice quality through storage capacity, distractor interference, attention, or all of them. Further research on these separate functions of WMC might be beneficial to better understand how WMC aids voice. Furthermore, there are other cognitive resources that are likely related to WMC, such as inhibitory control [107]. The ability to inhibit or control automatically triggered responses could be important for voice quality in other ways than through focusing attention on voicing. For example, previous research has shown that voice is more effective when employees have the ability to regulate their (negative) emotions [108]. Because voice may sometimes occur as a response to a problem or dissatisfaction, the ability to inhibit responding until one has developed something truly useful to say may improve the chances that voice will be valued.
Implications for practice. The findings of our studies offer important implications for practice. Cognitive neuroscience has shown that visual WMC might be a fundamental basis for complex thinking [24,109]. Visual working memory tests are faster, cheaper, evolutionarily more basic and less culturally biased than the more general mental ability tests that are common in the management literature and in selection procedures. They do not require domain-specific knowledge and thus do not disadvantage people with, for example, dyslexia or language barriers stemming from a different cultural background. In terms of enhancing diversity in organizations, this might thus be a better way to assess whether people are able to engage in elaborate thinking.
Regarding our dual pathways to voice quality, there are two important issues to consider for managers (if they wish to enhance quality voice). First, some people (infrequent voicers) need cognitive resources to produce high-quality voice. Measuring WMC before hiring people may thus be a way to select them. However, these people also elaborate to reach high-quality voice, and elaboration takes time and effort. Merely hiring high-WMC people is thus not enough; there must also be time and space to elaborate. Second, some people (frequent voicers) need to be able to practice voice in order to learn how to improvise. Since quantity of voice seems to be more automatic (not well elaborated), trying to influence this process with rational arguments might not be very effective. People voice often either because it is in their personality, or because in their team there is an implicit feeling that voice is safe. If people in a team generally voice rarely, it may be more useful to try to enhance voice by focusing on the implicit cues the team leader is sending than by explicitly telling people they can always share their concerns.
Theoretical contributions. The first contribution of this study is the distinction between voice quality and quantity. It opens up new avenues for studying the multidimensionality of voice behavior and emphasizes the difference between a decision to say something and the quality of what is said. This focus on content relates to both the work on creativity, such as the combination of originality and usefulness for truly innovative ideas [110], and to voice research, such as the (complexity) differences between suggestions, opinions, and problems [2], and the differences between promotive versus prohibitive voice [29]. This paper strengthens the claims of those studies that it is important to consider the content of voice if we want to know more about how effective voice is generated, selected and communicated.
Our second contribution is the investigation of the involvement of cognitive resources in voice behavior. Morrison [2] called for more research in the cognitive elaboration versus automaticity area, which we answered by challenging the idea that voice behavior in and of itself is the result of a complex cognitive calculation. We attempted to sharpen the hypotheses about the involvement of elaborate cognitive processing in voice by showing that the arguments of Chiaburu and colleagues [28] about cognitive elaboration hold, but only for the quality of voice and especially when employees do not voice often. Our results are also in line with the idea that choosing (not) to voice depends on affect and implicit theories about voice [53]. Our findings thus reconcile both views by showing that indeed, voice quality (sometimes) requires elaborate processing, whereas voice quantity relies on automatic processing. The results of this paper emphasize the importance of a cognitive-processes stream in the voice literature. Automaticity in a self-initiated, discretionary proactive behavior such as voice may seem like a paradox, but often, cognitive elaboration might be less important than we think. Third, we contributed two different experimental designs (within- and between-subjects), which makes a wide range of causal research questions about individual differences as well as situational differences in voice and initiative possible. Although we did not use our first lab setting for causal purposes, the design does offer that possibility [111]. Since experimental research in the voice and proactivity literature is still scarce, we hope that this helps other researchers to tackle causal research questions concerning proactive behavior.
Supporting information S1 Appendix. Supplementary materials dual pathways to voice quality. (DOCX) | 2019-03-11T17:22:39.988Z | 2019-02-27T00:00:00.000 | {
"year": 2019,
"sha1": "f194021ab61733f9210453be14b99df82129d59f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0212608&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f52249ceb01068cbae7b5be208061d23078a51a4",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
54581865 | pes2o/s2orc | v3-fos-license | Water as the primary beverage: A predictor of pediatric obesity
Aims: The aims of this study were to assess parental awareness of their own children's weight status and healthy habits, as well as to determine whether the daily use of water as a child's primary beverage is a predictor of pediatric obesity or overweight. Methods: A cross-sectional study was conducted at two primary pediatric clinics and one urgent care clinic from September 2014 to November 2014. Data were collected from children's medical records as well as from parents of children aged 2-18 years. Chi-square tests, bivariate correlations, and multivariate logistic regression procedures were performed. Results: Two-thirds of parents with obese or overweight children were not aware of their children's weight status. Male gender emerged as a positive predictor of pediatric obesity or overweight (Odds Ratio [OR], 9.86; 95% Confidence Interval [CI], 1.21-80.4; p = .033), whereas using water as the primary beverage throughout the day, with low-fat/skim milk at mealtimes, was a negative predictor (OR, 0.019; 95% CI, 0.001-0.24; p = .002). Conclusions: There were nearly 50-fold lower odds of being obese or overweight for children who use water as the primary beverage throughout the day than for those who do not. Rather than focusing on the negative impacts of sugar-sweetened beverages or 100% fruit juice, more attention should be paid to the positive impact of using water as the primary beverage throughout the day. Using standardized Body Mass Index (BMI) percentile growth charts, children's weight status should be communicated to all parents. Teaching and motivating parents and children to drink water as their primary beverage throughout the day could be an effective approach to preventing and managing pediatric obesity.
INTRODUCTION
The epidemic of pediatric obesity has recently become a major public health problem in the United States, with about one-third of the pediatric population being overweight or obese. [1] Pediatric obesity can lead to numerous negative health consequences, including developmental and mental health problems, otitis media, sleep apnea, and type 2 diabetes mellitus, and frequently continues into adulthood. [2,3] It is estimated that about 40% of American adults will be obese by 2030, and if the prevalence of obesity were to be held at 15% for two decades, the potential medical cost savings could be as much as $1.9 trillion. [4]
According to the Centers for Disease Control and Prevention (CDC) growth chart, pediatric overweight is defined as a body mass index (BMI) between the 85th and less than the 95th percentiles, and obesity is defined as the 95th percentile or greater. [5] Although guidelines are available for the prevention and management of pediatric obesity in primary care settings, many providers are unaware of these guidelines or do not follow them, including the recommendation to document BMI percentiles. [6,7]
The guidelines also recommend parental involvement in management of obesity, but a meta-analysis of 69 studies showed that only half of the parents with overweight/obese children were aware of their children's weight status. [8] In addition, less than a quarter of parents were apparently told by their healthcare providers that their children were overweight. [9] Parental awareness and involvement are critical in preventing and managing pediatric obesity. A growing body of evidence indicates that family-centered interventions targeting both children and parents are effective in improving parents' self-efficacy, as well as their children's obesity. [10,11] Furthermore, parents' concerns about their children's weight status and their involvement in addressing those concerns have positive relationships with improving the children's physical activity and diet, as well as decreasing consumption of sugar-sweetened beverages (SSBs). [12,13]
Regular consumption of SSBs was associated with weight gain among children, according to a meta-analysis of 32 prospective cohort and randomized controlled studies. [14] Another study showed that Hispanic and African American children consumed more total SSBs compared with whites. [15] This negative impact of SSB consumption on pediatric obesity has led to a concerted effort to reduce SSB consumption in public schools. However, banning the sale of soda only resulted in students shifting to other SSBs, such as sports drinks, energy drinks, or fruit drinks. [16] In spite of the tremendous attention paid to the association between pediatric obesity and the consumption of various beverages, less attention has been paid to the positive impact of using water as the primary beverage throughout the day. [17] Consequently, there is a dearth of research examining the strength of the association between the use of water as the primary beverage throughout the day and weight status in children.
Study aims
The aims of this study were to assess parental awareness of their own children's weight status and healthy habits, as well as to determine whether the daily use of water as a child's primary beverage is a predictor of pediatric obesity or overweight.
Design and setting
A descriptive, cross-sectional design was used in the current study. Parents of children aged 2-18 years were recruited from two primary pediatric clinics and one urgent care clinic from September to November of 2014. The study was reviewed and approved by the university institutional review board. Parents who agreed to participate in the study were interviewed to collect the data.
Data collection tools
A standardized data collection tool was used to obtain children's age, gender, ethnicity, weight, and height from the medical records. Family histories of type 2 diabetes mellitus, hypertension, hyperlipidemia, obesity, and early death from heart disorder or stroke were also collected from the medical records. Children's BMI and BMI percentile were calculated based on weight, height, age, and gender, using the BMI calculator from the CDC. A code number was assigned to each child and parent to allow matching of the child's medical record data with the information from the parent.
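As a minimal illustration of this classification step (our own sketch, not the study's actual procedure; the CDC percentile lookup is a hypothetical stand-in for the growth-chart tables):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def weight_status(bmi_percentile: float) -> str:
    """Classify a CDC BMI-for-age percentile into the categories used in this study."""
    if bmi_percentile >= 95:
        return "obese"
    elif bmi_percentile >= 85:
        return "overweight"
    return "not overweight"

# cdc_bmi_percentile(bmi_value, age_months, sex) is assumed to look up the
# gender- and age-specific CDC growth-chart percentile; it is not shown here.
print(weight_status(92))  # -> "overweight"
```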
A 10-item questionnaire assessing children's healthy habits was chosen to collect information from parents. [18] Questionnaire items included daily screen time (e.g., TV, video, or computer games); daily servings of vegetables and fruits; daily physical activity; water as the primary beverage throughout the day, with low-fat/skim milk at mealtimes; use of food to reward positive behavior; consumption of high-fat foods; outdoor activity; family physical activity; and parents' perception of their own weight. In the current study, content validity testing was performed among a panel of nine experts, who rated the clarity and relevance of each question item on a four-point, Likert-type response scale. The content validity indices (CVI) were all above 70%, except for one item regarding high-fat foods, which was revised by adding the phrase "such as potato chips, donuts, or ice cream." One item, "Compared to other children of the same age, do you feel your child is: underweight, average weight, overweight or obese?" was added to obtain parental awareness of their children's weight status. [19] The revised tool, containing these 11 items, was named the Pediatric Healthy Habits Assessment Tool (PHHAT).
Data analysis
Descriptive statistics included frequencies and percentages of children's demographics, weight status categories, parental awareness of their children's weight status, and pediatric healthy habits. In the current study, BMI ≥ 85th percentile was used to define pediatric obesity and/or overweight. Each child's weight status was recoded into a dichotomous variable, BMI < 85th percentile vs. BMI ≥ 85th percentile. The children's demographic variables and healthy habits were also recoded as dichotomous variables, and Chi-square tests with Fisher's exact tests were performed to compare the BMI < 85th and BMI ≥ 85th percentile groups.
Bivariate correlations procedures using Kendall's tau test were performed among dichotomous variables to identify the statistically significant relationships among demographic variables, healthy habits, and BMI ≥ 85th percentile. Demographic variables and healthy habits were entered into the multivariate logistic regression model to explore the predictors of BMI ≥ 85th percentile. The level of significance was set at p value < .05, and SPSS software version 21.0 (IBM, Chicago, IL, USA) was utilized for all data analyses.
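For readers who want to see the shape of this pipeline, the steps can be sketched in Python rather than SPSS (a hedged re-expression of the described analyses, not the authors' code; the file and column names are hypothetical):

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("phhat_data.csv")  # hypothetical file of recoded child records

# Dichotomize the outcome: 1 if BMI >= 85th percentile, else 0.
df["high_bmi"] = (df["bmi_percentile"] >= 85).astype(int)

# Group comparison for one dichotomous habit (2x2 table) with Fisher's exact test.
table = pd.crosstab(df["water_primary"], df["high_bmi"])
odds_ratio, p_fisher = stats.fisher_exact(table)

# Bivariate correlation (Kendall's tau) between a habit and the outcome.
tau, p_tau = stats.kendalltau(df["water_primary"], df["high_bmi"])

# Multivariate logistic regression on demographics and healthy habits.
predictors = ["male", "water_primary", "activity_60min", "food_reward"]
X = sm.add_constant(df[predictors])
fit = sm.Logit(df["high_bmi"], X).fit()
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```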
Characteristics of children
A total of 47 parents participated in the study. The mean age of the children was 9.6 years, with the majority being male (51.1%) and of Hispanic ethnicity (63.8%). Twenty-four children (51%) had BMI ≥ 85th percentile, of whom 16 had BMI ≥ 95th percentile. More than one-third of the children had positive family histories of hypertension (48.9%), type 2 diabetes mellitus (42.6%), or hyperlipidemia (38.3%). Table 1 presents the sample characteristics.
Parents' awareness of children's weight status and healthy habits
Among the parents of 24 children with BMI ≥ 85th percentile, sixteen parents (67%) were not aware of their children's weight status. Figure 1 shows the comparison of healthy habits among children with BMI < 85th percentile versus BMI ≥ 85th percentile. There were statistically significant differences between the two groups in daily physical activity > 60 minutes (65.2% vs. 29.2% for the BMI < 85th percentile group vs. the BMI ≥ 85th percentile group; p = .020) and water as the primary beverage throughout the day, with low-fat/skim milk at mealtimes (78.3% vs. 29.2%; p = .001). However, no statistically significant differences were found between the two groups with regards to the use of food to reward positive behavior, regular family physical activity, and daily screen time less than 2 hours.
Daily use of water as a predictor of pediatric obesity or overweight
Bivariate correlations by Kendall's tau test revealed that daily physical activity > 60 minutes (r = -0.36; p = .014) and water as the primary beverage throughout the day, with low-fat/skim milk at mealtimes (r = -0.49; p = .001), had significantly negative relationships with BMI ≥ 85th percentile (see Table 2). To identify the predictors of BMI ≥ 85th percentile, the demographic variables and pediatric healthy habits were entered into a multivariate logistic regression model. Male gender emerged as a positive predictor of BMI ≥ 85th percentile (OR, 9.86; 95% CI, 1.21-80.4; p = .033), whereas water as the primary beverage throughout the day, with low-fat/skim milk at mealtimes, was a negative predictor (OR, 0.019; 95% CI, 0.001-0.24; p = .002) (see Table 3).
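As a quick check on the "nearly 50-fold" phrasing used in the Discussion, inverting the reported odds ratio for water as the primary beverage gives (our own arithmetic, not a value from the source tables):

$$\mathrm{OR}_{\text{water}} = 0.019 \quad\Rightarrow\quad \frac{1}{\mathrm{OR}_{\text{water}}} = \frac{1}{0.019} \approx 52.6,$$

i.e., roughly 50-fold lower odds of BMI ≥ 85th percentile for children who use water as their primary beverage.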
DISCUSSION
There were nearly 50-fold lower odds of being obese or overweight for children who use water as the primary beverage throughout the day, with low-fat or skim milk at mealtimes, than those who do not. This result from the current study is largely consistent with findings from a randomized controlled study that drinking water or non-caloric beverages in place of sugar-sweetened beverages (SSBs) significantly reduced weight gain by an average of 1.01 kg (or 2.2 lb) over 18 months in children. [20] Although the study was conducted in subjects with healthy body weight and did not examine the rate of obesity, the reduction in weight gain associated with water points to the importance of beverage choice in controlling the pediatric obesity epidemic.
Tremendous attention has been paid in the past to the impact of SSB consumption on pediatric obesity, as well as to reducing consumption of various SSBs, such as carbonated soda, fruit drinks, and energy/sports drinks. Even consumption of 100% fruit juice, which was promoted as being equivalent to the recommended consumption of whole fruit, now appears to be associated with pediatric obesity. [21] Rather than focusing on negative impacts of caloric beverages, perhaps more attention should be paid to the positive impact of using water as the primary beverage throughout the day, with low-fat/skim milk at mealtimes. Promoting water and providing low-cost, easily accessible, good-tasting water at schools, homes, and day care centers could foster the use of water as the primary beverage throughout the day and help reduce pediatric obesity and overweight.
The current study also suggests that boys had almost 10-fold higher odds of being obese or overweight than girls. Similarly, a previous study reported that boys were more likely to be obese or overweight than girls. [22] In addition, boys were more likely than girls to have their weight underestimated by their parents. [23] Gender differences in obesity, as well as differences in parental perceptions of girls and boys, may be due to differences in body fat distribution, cultural gender bias regarding obesity, physical activity, and diet. [23,24] An unexpected finding of the current study was that the use of food to reward positive behavior was a negative predictor (OR < 1) of obesity or overweight in the multivariate analysis. In contrast, a previous study reported a positive association between using food as a reward and weight gain. [25] Although information regarding the types of food used as rewards was not collected, it may be possible that the use of low-calorie food to reward good behavior helped to protect children from obesity. In addition, no effects of screen time, daily servings of fruits and vegetables, high-fat snacks, or physical activity were found, in contrast to previous studies. [26,27] This may have been due to the small sample size in the current study, which limited the possibility of detecting small effect sizes.
Approximately half of the children in the current study were obese or overweight, but two-thirds of the parents were apparently not aware of their children's weight status, or they had inaccurate perceptions of their children's weight. This result is consistent with previous studies, which showed that approximately one-third to one-half of parents underestimated their own children's weight status. [8,23] Obesity is a complex health problem that needs to be treated as a chronic disease and managed with a chronic care model. [7,28] For nurses involved in the care of the pediatric population, it is essential to perform universal documentation of BMI percentiles at regular intervals to identify children at risk for obesity. [7,29] Using the gender- and age-specific BMI percentile growth charts, nurses should communicate the results to parents regarding their children's weight status. For obese or overweight children, the focus of the conversation should not be on weight loss or dieting, but rather on a healthy lifestyle, with encouragement to grow into their weight. [9] Parents' awareness of their children's weight status is the first step toward parental involvement in maintaining healthy weight and influencing their children's choice of beverages, as well as fostering healthy habits. [12] Teaching and motivating parents and children to drink water as their primary beverage throughout the day, with skim/low-fat milk at mealtimes, could be an effective approach to preventing and managing pediatric obesity. This may be especially effective in families that consume large amounts of sugar-sweetened beverages or 100% fruit juice. A family-centered approach would target both parents and children concurrently; incorporate cultural, ethnic, and socioeconomic backgrounds; and utilize the multidisciplinary resources of nutritionists, counselors, or physical therapists available in the community. [7,11]
Limitations
The current study had several limitations. First, the findings that male gender, use of water as the primary beverage, and use of food as a reward predict pediatric obesity or overweight should not be taken as cause-and-effect relationships, given the cross-sectional design. Second, although the effect sizes represented by the odds ratios were large and statistically significant, the 95% confidence intervals were wide due to the small sample size. Third, the study findings may not be generalizable, since the study was conducted in central Texas with a sample containing a high proportion of Hispanics. Fourth, the convenience sampling method used in the current study may have introduced selection bias. Finally, the self-reported data may have over- or underestimated parental responses. Further studies, with larger sample sizes, are needed to confirm the impact of using water as the primary beverage on pediatric obesity and overweight.
CONCLUSIONS
This study indicates that the use of water as the primary beverage throughout the day, with low-fat or skim milk at mealtimes, is a strong negative predictor of pediatric obesity, with nearly 50-fold lower odds of being obese or overweight.
There is an intriguing possibility that simply changing the primary beverage to water for those children who consume sugar-sweetened beverages or 100% fruit juice can reduce pediatric obesity. In addition, there appears to be room for improvement in parents' awareness of their own children's weight status. Nurses involved in the care of the pediatric population can play a major role in screening, educating, and motivating parents, and in implementing the practice guidelines to prevent and manage pediatric obesity. If the use of water as the primary beverage throughout the day is confirmed to be an important negative predictor of obesity, focusing on the positive impact of water may facilitate the prevention and management of pediatric obesity. | 2019-03-16T13:04:34.376Z | 2016-05-29T00:00:00.000 | {
"year": 2016,
"sha1": "917ce5e337eb1f12df26d6c3c47b20805ee266fd",
"oa_license": null,
"oa_url": "http://www.sciedupress.com/journal/index.php/jnep/article/download/9170/5842",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a03d151ae39bb72c772fcbafc9640eaace75af86",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228984865 | pes2o/s2orc | v3-fos-license | Developing evaluation tools for assessing the educational potential of apps for preschool children in the UK
ABSTRACT Selecting high-quality apps can be challenging for caregivers and educators. Here we develop tools for evaluating the educational potential of apps for preschool children. In Study 1, we developed two complementary evaluation tools tailored to different audiences. We grounded them in developmental theory and linked them to research on children's experience with digital media. In Study 2 we applied these tools to a wide sample of apps in order to illustrate their use and to address the role of cost in the quality of educational apps. There are concerns that a social disadvantage may lead to a digital disadvantage, an "app gap". We thus applied our tools to the most popular free (N = 19) and paid (N = 24) apps targeting preschoolers. We found that the "app gap" associated with cost relates only to some aesthetic features of apps rather than any observable educational advantage proffered by paid apps. Our study adds a novel contribution to the research on children's apps by developing tools to be used across a wide range of audiences, providing the first description of the quantity of app design features during app use, and evaluating the educational potential of free and paid apps.
Introduction
Touchscreen devices are increasingly popular among children under the age of 5 (e.g., Chen & Adler, 2019). An estimated 80,000 apps claim to be "educational" (Healthy Children, 2018) within the context of an unregulated market. Yet, there is a consensus among researchers that the majority of children's apps advertised as "educational" lack educational value and any foundation in research (Ólafsson, Livingstone, & Haddon, 2013). This means that informed decisions about which apps are high quality can be challenging for parents and educators (Livingstone, Blum-Ross, Pavlick, & Ólafsson, 2018) who could potentially benefit from an app evaluation tool based on early years learning theory. An app evaluation tool could also benefit app developers who want to ensure that the products they create include high-quality features.
As can be seen in Table 1, there are a number of limitations with the existing tools, some of which were identified by the authors themselves. Specifically, almost all the tools have a long list of criteria (18-70+ items) which makes app evaluation time consuming and not practical. The majority of the tools lack examples from children's apps that could allow an in-depth understanding of the descriptors. The descriptors of the items are often not specific enough; they include ambiguous or unclear terminology. Some of the tools also lack theoretical underpinning; they do not draw clear links to developmental theory. Only two of the tools had the content validity assessed, and none of the content validity assessments involved caregivers as participants.
Importantly, only three out of eight tools were aimed at caregivers. Given that preschool-aged children use touchscreen devices frequently (e.g., according to Ofcom (2019), children aged 3-4 years living in the UK spend 48 minutes per weekday playing games on a touchscreen device), it is crucial to help parents select good quality apps for their children. The majority of the tools have not been applied to a wide range of apps in order to demonstrate their use. However, the tools that were applied to a sample of apps did not allow for quantifying the app features during app use and were applied to math and literacy apps only. Moreover, some of the tools include subjective criteria that are difficult for an adult to measure objectively. Therefore, there is a need for a new, improved tool that could address those limitations.
The aim of this paper was to create two complementary evaluation tools (adapted to the needs of different audiences) assessing the educational potential of apps for preschoolers: (1) A thorough and user-friendly tool accessible by a wide audience: app developers, researchers, caregivers and educators; (2) A tool for researchers that could be used for a more in-depth evaluation by allowing to quantify app features during app use.
Based on the previous literature on app evaluation tools, we propose a set of principles that should guide the development of such tools: (a) Be informed by the developmental theory and research on children's learning in the context of digital media; (b) Draw clear links to previously developed tools; (c) Be brief, have a simple set of clearly described criteria and clear directions on the scoring system; (d) Focus solely on the objectively measurable factors; (e) Be applied to a wide variety of apps to demonstrate their use; (f) Be validated by conducting content validity and inter-rater reliability.
In building the content of our tools, we relied in particular on the British (Department for Education, 2017) and American (Early Childhood Learning and Knowledge Centre, 2015) early years frameworks, which state that preschool children's development should be supported in the areas of cognitive, academic, social-emotional and physical skills. In the following section, we identify key areas that an evaluation tool ought to include based on previous literature on app evaluation tools, developmental research and theory, and evidence of children's learning from digital media. We also outline a further set of quantity of app features indicators.
Meaningful learning and solving problems
Not all of the previous app evaluation tools included criteria related to meaningful learning and solving problems. We believe that these features are critical for learning to be deeper, authentic and transferable to real life.
Feedback
Feedback plays a critical role in supporting educational performance (e.g., Mulliner & Tucker, 2017; Schwartz, Tsang, & Blair, 2016). Specific, meaningful, timely and structured feedback drives the child's engagement in the activity (e.g., Hirsh-Pasek et al., 2015; Walker, 2010). Moreover, feedback should reinforce the learning goal and scaffold users' understanding of how to improve (see, e.g., Callaghan & Reich, 2018). All the previous app evaluation tools pointed to the significance of feedback. However, not all of them described explicitly how feedback should be presented by providing relevant examples from the apps.
Social interactions
Social interactions support learning from the very early stages of development (see Hirsh-Pasek et al., 2015, for a summary). Social demonstrations enhanced learning in a touchscreen puzzle task in a group of 2.5-and 3-year-olds (Zimmermann, Moser, Lee, Gerhardstein, & Barr, 2017). Apps can involve "parasocial" interactions with animated characters present onscreen, which offer symbolic experiences that can be beneficial for children's social and cognitive development (e.g., Calvert, 2015).
Only some of the previous app evaluation tools recommended the presence of highquality parasocial interactions in the apps. In our tool we specify how the parasocial character should be interacting with the child in order to support learning.
Activity structure
Apps that give the opportunity for exploratory use alongside structured activities might increase children's intrinsic motivation and engagement. Child autonomy and the sense of agency when using interactive media are crucial for the learning process (e.g., Kirkorian, 2018; Papadakis & Kalogiannakis, 2017). Pre-schoolers who could select their learning experience in a tablet game outperformed those who had no control over the order of presentation of the material (Partridge, McGovern, Yung, & Kidd, 2015).
Importantly, almost none of the previous evaluation tools allowed assessing whether apps promote exploratory use.
Narrative
Media content that is embedded in an entertaining narrative integrated at the heart of the story can benefit children's learning (e.g., Dingwall & Aldridge, 2006). Content directly linked to a narrative of a television program is recalled better than content which is irrelevant to the storyline (Fisch, 2004).
Although the role of narrative for children's learning has been established by previous research, almost none of the evaluation tools included the presence of narrative in assessment criteria.
Language
Appropriately designed digital media can be a valuable source of language input for young children. The presence of good quality language is crucial for educational potential (Rowe, 2012). Studies using lab-designed apps have shown that children aged 2-4 are able to learn labels for novel objects (Kirkorian, 2018; Russo-Johnson, Troseth, Duncan, & Mesghina, 2017) or for real-world objects (Dore et al., 2019). While two of the previous evaluation tools mentioned language as part of some other criteria, none of them focussed on assessing the quality of language directly. We fill this gap in our tool.
Adjustable content
To ensure effective learning, the difficulty level of an app should be automatically adjusted to users' performance (e.g., Callaghan & Reich, 2018). Specifically, each level of an activity should build on the knowledge gained in earlier levels, and increase hints and feedback if a user makes repeated errors (e.g., Revelle, 2013).
The majority of the previous tools included adjustable content in their evaluation criteria, and following the theoretical motivation outlined above, we also include it in our tool.
App design
As highlighted in the previous evaluation tools (e.g., Lee & Kim, 2015), an app's design should be simple and consistent, the style of letters and pictures should be clear, and the arrangement of operating buttons should be appropriate. Unnecessary advertisements, additional in-app purchases and slowly loading content may impede learning. An app should also be easy to use and always responsive to touch interactions.
All the previous app evaluation tools included app design in their criteria. We also acknowledged its importance for enhancing children's learning experience.
Quantity of app features indicators
The following section presents the indicators for the quantity of app features. For certain features, it is crucial to estimate how often a given feature occurs during app use, in order to determine whether children's learning environment is age appropriate and not overly complex. None of the previous evaluation tools enabled measuring the proportion or frequency of different app features during app use. Thus, the way we measure app features in our quantitative tool is novel.
Touch gestures
The direct manipulation interaction facilitates pre-schoolers' learning from touchscreen media, yet most educational apps only support tap (99% of apps) and drag (56% of apps; Nacher, Jaen, Navarro, Catala, & González, 2015). Nacher et al. (2015) found that infants aged 2-3 perform one-finger rotation and two-finger scale up and down successfully, but find double tap, long press and two-finger rotation challenging. Russo-Johnson et al. (2017) reported that 2-4-year-old children from low SES families learned more novel object labels when dragging objects versus tapping them, perhaps because tapping is a response that does not require active attention.
Active learning
High-quality apps should provide opportunities for active cognition, e.g. making cognitively challenging decisions, and solving problems (e.g., Hirsh-Pasek et al., 2015). Cognitive activities in contrast to stimulus-reaction activities during app use encourage active cognition, while variability across learning encounters has a potential to facilitate learning (e.g., Thiessen, 2011). Thus, a variety of activity goals might contribute to the app being more cognitively active.
Complexity of the learning environment
Background visual, background sound and other app interactions available on the screen contribute to the complexity of learning environment. Cognitive Theory of Multimedia Learning (Mayer, 2005(Mayer, , 2014 envisions that the child's learning might be unsuccessful if the software includes too much extraneous material. Sound effects and animation interfered with story comprehension and event sequencing in children aged 3-6, when compared with paper books (see Reich, Yau, & Warschauer, 2016, for a review). Additional interactions present on the screen alongside the main task can decrease child's engagement in the app (Hirsh-Pasek et al., 2015).
Feedback
In addition to looking at feedback qualitatively and evaluating its meaningfulness, we can also look at it quantitatively and assess its occurrence in the app, its delivery method (audio, onscreen) and its content (ostensive feedback vs other feedback). Interactive media may enhance learning if they promote contingent responses or guide visual attention to relevant information on the screen (Kirkorian, 2018).
App design sophistication
Elements on the screen during app use can either be static, move in a static way, be fully animated or be partly static and partly animated. When learning challenging or novel information, pre-schoolers might benefit more from observing noninteractive video demonstrations than from using interactive media (e.g., Aladé et al., 2016). Furthermore, sound effects and animation in ebooks can interfere with story comprehension in children aged 3-6 years, when compared with paper books (see Reich et al., 2016, for a review).
The present studies
The present paper presents two studies. Study 1 focuses on designing and validating evaluation tools for apps aimed at pre-schoolers (children aged 2-5 years). In order to illustrate the use of our tools, in Study 2 we apply them to apps distinguished in terms of their cost.
Study 1: designing and validating the evaluation tools
Developing the questionnaire for evaluating the educational potential of apps
First stage: creating a list of items and developing a rating scale
Following the literature reviewed in the introduction, we defined 12 concepts (items) to be measured in the questionnaire. We included three indicator descriptors for each item (together with a few examples from the apps for each indicator), such that an app could score between 0 and 2 points on each item. The 12 initially constructed items were: Learning goal, Going beyond rote learning, Solving problems, Feedback, Social interactions, Open-ended, Plotline/narration, Appropriateness of language, Customising, Adjustable content, Suitability of design, and Usability.
Second stage: conducting a content validity study with experts
Once the first version of our questionnaire was designed, we conducted a content validity study. The study was approved by the ethical review board at the University of Salford. We followed the procedure outlined by McGartland Rubio, Berg-Weger, Tebb, Lee, and Rauch (2003). We recruited three professional design experts (app developers) and three user experts (early years professionals) who shared their feedback on the items' representativeness, clarity and importance in an online survey. The raters were given the following instruction: "You will be presented with each of the 12 items included in our coding scheme. Please rate each item as follows:
• Please rate the representativeness on a scale of 0-4, with 4 being the most representative. Representativeness is the extent to which each item measures the educational potential of children's apps. Space is provided for you to comment on the item or to suggest revisions.
• Please indicate the level of clarity for each item (how clearly the item is worded), also on a four-point scale. Again, please make comments in the space provided.
• On a scale of 1-10 please rate the importance of each item for measuring educational potential, with 10 being the most important.
Finally, please evaluate the comprehensiveness of the entire coding scheme by indicating items that should be deleted or added."
We calculated the Content Validity Index (CVI) for each item and for the whole scale (based on its representativeness), following the guidelines described in McGartland Rubio et al. (2003). The CVI for each item was computed by counting the number of experts who rated the item as 3 or 4 and dividing it by the total number of experts. The CVI for the whole questionnaire was obtained by calculating the average CVI across the items. A CVI of at least 0.8 is recommended for new measures. All items in our questionnaire scored either 0.8 or 1, and the CVI for the whole questionnaire was 0.88 (see Table 2).
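The CVI computation described above is simple enough to express directly. A minimal sketch, assuming a hypothetical items-by-experts matrix of 0-4 representativeness ratings:

```python
import numpy as np

# Rows = items, columns = experts; entries are 0-4 representativeness ratings.
ratings = np.array([
    [4, 3, 4, 3, 4, 4],   # hypothetical ratings for item 1
    [3, 3, 4, 2, 4, 3],   # hypothetical ratings for item 2
    # ... one row per remaining item
])

# Item CVI: proportion of experts rating the item 3 or 4.
item_cvi = (ratings >= 3).mean(axis=1)

# Scale CVI: average of the item CVIs (>= 0.8 is recommended for new measures).
scale_cvi = item_cvi.mean()
print(item_cvi, scale_cvi)
```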
The raters did not suggest removing any items. They also rated all items high with regards to the items' importance. Consequently, based on the experts' suggestions, we made modifications to the questionnaire. We merged two pairs of items, i.e. Customising and Adjustable content became Adjustable content; Suitability of design and Usability became App design (according to the raters, the descriptions of these two pairs of items overlapped in terms of content). We also added additional examples from the apps to improve the clarity of the grade descriptors and we reduced the use of technical language in the questionnaire (including rewording some of the items' names, see Table 2).
Third stage: content validity study with caregivers
After introducing the changes to the questionnaire, we determined whether the tool was comprehensible to caregivers. We recruited six caregivers of children aged 2-5 years to rate the representativeness and clarity of each item and provide further comments. The caregivers were given the same instruction as the experts in the first content validity study. The CVI for the whole tool based on caregivers' ratings was high, 0.75 (see Table 2).
Based on the caregivers' comments, we made further modifications to the questionnaire. Most importantly, the participants from both content validity studies pointed out that while social interactions are important for learning, the development of skills for independent learning is also important and social interactions are not congruent with the reasons caregivers might choose apps (see Broekman, Piotrowski, Beentjes, & Valkenburg, 2016 for a similar argument). To accommodate this, in our tool we focused on the high-quality parasocial interactions in the apps rather than interactions with adults during app use. Our evaluation questionnaire is presented in Table A1 in Supplemental materials.
Developing the coding criteria for quantifying the app features
In addition to the questionnaire that can be easily used by caregivers and educators, we also aimed to develop a tool allowing researchers a more in-depth, quantitative assessment of apps' features. For the coding criteria, following the literature review outlined in the introduction, we grouped the app features into five broader areas, each containing between 1 and 3 coding criteria: (1) Touch gestures; (2) Active learning (activity goal, activity type); (3) Complexity of the learning environment; (4) Feedback; and (5) App design sophistication (see Appendix B in Supplemental material for the full coding and scoring details).
Study 2: applying the evaluation tools to illustrate their use and to measure the app gap
In Study 2, we applied the evaluation tools to a sample of paid and free apps in order to illustrate their use and to assess the role of cost on app quality. Digital media is now embedded in family life (Livingstone et al., 2018) and as a result there are concerns that social disadvantage could extend to a digital disadvantage (Vaala, Ly, & Levine, 2015;Zhang & Livingstone, 2019), the so-called "app gap" (Common Sense Media, 2013). The app gap can be observed, for example, in the availability of devices to go online in the household, caregivers' digital skills and cost of devices (Zhang & Livingstone, 2019). Furthermore, lower socio-economic status parents might not be able to spend substantial quality time with their children (Department for Education, 2020).
It is important to understand whether there are differences between apps that might justify differences in cost. In the present study we focus on a broad distinction between apps that are free at the point of initial access versus apps for which payment at initial access is required. Parents might not be aware of the variety of factors contributing to the app cost (e.g., business decisions that influence app developers' app pricing strategies, including the size of the market, funding opportunities, app's unique selling point) and they might link the higher cost to higher quality of app.
According to the Department for Education research report (2020), children aged 0-5 years living in lower-income households in the UK use educational apps more often than their affluent peers. However, parents in higher-income households are more likely to pay for an educational app. It is therefore crucial to establish whether children from less affluent families are disadvantaged with respect to the quality of educational apps that they use.
To the best of our knowledge, to date only one study (Callaghan & Reich, 2018) investigated the differences between educational math and literacy free and paid apps. However, Callaghan and Reich (2018) did not investigate the frequency of app features during app use but limited their analysis solely to identifying whether or not a given feature is present in the app.
App selection
We coded 44 of the most popular apps in Google, Amazon and Apple app stores. To be included in this study, apps had to target children aged 2-5 years and feature in the top 10 lists for free and paid apps in each app store. Apps were identified on 7 June 2018. Of these 60 apps, 10 were removed as duplicates and 6 were excluded (5 video-based, which only allowed passive use, 1 unresponsive after installation). The remaining 44 apps were included in the study.
App use
Each app was downloaded and a screen recording was taken while the first author used the app for 5 minutes with a systematic approach to exploring all the features. The 5-minute sample was motivated by practical constraints in terms of the intensity of encoding of the detailed app features in ELAN (described in the coding section), as well as being more practical for caregivers and educators in appraising an app in an efficient amount of time, based on our evaluation questionnaire.
To maintain parity in approach to data capture across apps, the systematic approach by the first author was to follow all the activities in an order suggested by the app design and to use all the available features on each screen only once.
Questionnaire for evaluating the educational potential
Each app could score between 0 and 20 points on the educational potential index (between 0 and 2 points for each of the 10 items, see Table A1 in Supplemental material). The 5-minute app screen recordings were assessed individually by the first and last author using the scheme. Discrepancies were discussed and resolved between the coders. Inter-rater reliability was high (κ = .889, p < .001), and the tool's internal consistency was good (Cronbach's alpha = 0.81), further validating the tool.
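Both reliability statistics reported here are standard and easy to reproduce. A minimal sketch, assuming hypothetical item-level scores from two coders and a hypothetical apps-by-items score matrix:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Inter-rater reliability: Cohen's kappa over the two raters' item scores (0-2).
rater1 = np.array([2, 1, 0, 2, 1, 1, 2, 0])  # hypothetical scores
rater2 = np.array([2, 1, 0, 2, 1, 0, 2, 0])
kappa = cohen_kappa_score(rater1, rater2)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (apps x items) score matrix."""
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    k = scores.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.random.randint(0, 3, size=(43, 10))  # placeholder: 43 apps x 10 items
print(kappa, cronbach_alpha(scores))
```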
Coding criteria for quantifying app features
To enable coding for quantifying app features, screen recordings of the app use were coded in ELAN 5.2, software that enables adding annotations to audio and/or video streams. The coder (first author) coded each screen during the app use for the 11 coding categories (see Appendix B in Supplemental material for the details on the coding and scoring). Inter-rater reliability was determined by comparing the coding of the primary coder with the coding of a trained double coder who coded data from 5 apps independently. Inter-rater reliability was κ = 0.917, p< .0001. The small number of discrepancies were resolved by the first coder.
Additionally, in order to determine whether the majority of app features could be captured in 5 minutes of app use (regardless of the person using the app and their style of app use), we calculated inter-user reliability. This was determined by comparing coded app use data for 5 apps that were also used by a second independent user. Crucially, the second user did not receive any instruction on using the apps. Overall, inter-user reliability was κ = 0.872, p < .001, which shows that the same app features can be captured during 5 minutes of app use, regardless of the user.
Results
To illustrate the use of the tools in practice, we report differences between free and paid apps. This also enables us to determine whether there is an app gap in quality that is reflected in cost, which could contribute to a digital disadvantage. The final sample included 19 free and 24 paid apps (one app was excluded because it was duplicated between two app stores and was listed as free in one store but required payment in the other).
We first report the results from the analysis of the questionnaire for evaluating the educational potential, and then the analyses of coding criteria for quantifying the app features.
Evaluating the educational potential
To test whether there is a difference in educational potential between free and paid apps, a Mann-Whitney U test was performed. The results show that free apps (M = 7.16, SD = 3.70) did not differ from paid apps (M = 6.75, SD = 4.60) on the educational potential index (U = 211, Z = −0.405, p = 0.685, r = −0.06). Figure 1 presents cumulative scores for each of the items in the evaluation questionnaire for the whole app sample (0-2 points for each item, 43 apps in the sample; the maximum score was 86). Suitability of design and quality of language received the highest scores (58 and 54, respectively), while adjustable content and social interactions were among those with the lowest scores (8 and 13, respectively).
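For readers who want to reproduce this comparison, the test is available in SciPy. A sketch with placeholder score vectors (the effect size r is recovered from the normal approximation of U, ignoring ties):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
free_scores = rng.integers(0, 21, size=19)   # placeholder index scores (0-20)
paid_scores = rng.integers(0, 21, size=24)

u, p = stats.mannwhitneyu(free_scores, paid_scores, alternative="two-sided")

# Effect size r = Z / sqrt(N), with Z from the normal approximation of U
# (no tie correction; SciPy applies one internally for the p-value).
n1, n2 = len(free_scores), len(paid_scores)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu_u) / sigma_u
r = z / np.sqrt(n1 + n2)
print(u, p, r)
```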
Quantifying app features: analyses comparing free and paid apps
First, we present the descriptive statistics for app features coded in the study (see Table 3).
The analyses comparing free and paid apps are presented in Table 4. Overall, the free and paid apps differed significantly only on two features: (1) the mean number of screen elements, with paid apps having on average more screen elements than free apps; and (2) object property, with free apps having a higher frequency of animation than paid apps, but no differences between free and paid apps in other object properties.
Discussion
The primary aim of this paper was to report the design and development of two novel, transparent and comprehensive tools for evaluating the educational potential of apps aimed at 2-5-year-old children. Specifically, a questionnaire aimed at a wide audience, and coding criteria for measuring the quantity of app features aimed at researchers.
The tools were developed specifically for evaluating apps targeting pre-schoolers; they were guided by the early years foundation frameworks and informed by the developmental theory and research on children's learning from digital media. The development of the tools was preceded by a careful analysis of the previously designed evaluation tools. We identified several limitations in the previous tools, such as a long list of criteria which are not specific enough, no direction to quantify app features, and inclusion of technical language. We designed our tools with the aim to address those limitations. We also demonstrated the use of our tools on a wide range of most popular children's apps. We added a novel contribution to the research on children's apps by evaluating both the educational potential of apps and by providing the first description of the quantity of app design features during app use.
Our tool is the first to have had content validity assessed by caregivers as well as experts. We made further amendments following comments from caregivers to ensure that our tool did not include technical language. The use of examples from existing apps in our tools means that users do not require any existing knowledge of early years education frameworks, which was a common limitation of previous tools (Department for Education, 2019; Hirsh-Pasek et al., 2015; Lee & Sloan Cherner, 2015). The next step in validating the tools will be to determine how preschool children interact with the apps and evaluate, rather than predict, the educational potential of the children's interactions. A further point for future investigation is also how the various features in apps interact with one another. This is ongoing work in our lab.
Our tool development resulted in a measurement of apps in terms of an educational potential index, which was shown to be high in content validity, internal consistency and inter-rater reliability. The comparison between free and paid apps on this index did not reveal any difference between apps. It is worth noting that the mean scores on the educational potential index for both groups were rather low (on average less than 10 out of 20). This suggests that the free and paid apps appeared to be equally low in terms of their educational potential, which is consistent with other studies underlining the disparity between the number of self-proclaimed educational apps in the markets and their poor educational value (Chau, 2014; Goodwin & Highfield, 2012; Hirsh-Pasek et al., 2015; Papadakis et al., 2018; Vaala et al., 2015).
The whole app sample showed strength as far as suitability of design and language were concerned (see Figure 1). High scores on suitability of design suggest that the apps were well prepared from the technical perspective. However, the apps showed weakness in terms of the more educational evaluation criteria, such as meaningful learning, offering users problems to solve or having a learning goal, which suggests that they do not offer a meaningful and cognitively active learning experience (in line with Papadakis et al., 2018). The apps in our sample also scored low on social interactions; they rarely encouraged high-quality interactions with characters onscreen (in line with Papadakis et al., 2018;Vaala et al., 2015).
Additionally, the apps in our sample scored particularly low on adjustable content. This means that they lacked flexibility in changing the settings and did not tailor content to users' performance. Apps should adjust their content to the user's needs if they are to increase the user's motivation and allow for gradual progress in learning (e.g., Callaghan & Reich, 2018; Papadakis & Kalogiannakis, 2017). This finding is again in line with previous studies, which found that less than 20% (Callaghan & Reich, 2018; Vaala et al., 2015) or none of the reviewed apps (Papadakis et al., 2018) included adjustable content. Overall, our findings highlight the need for developmental psychologists to work with app developers to advance the educational potential of touchscreen apps.
As a secondary aim, we compared the free and paid apps on the coding criteria for quantifying app features in order to assess the "app gap" associated with app cost. The free and paid apps differed on only two features: (1) the number of screen elements, with paid apps having on average more elements on the screen than free apps; and (2) the frequency of animation, with free apps having more animations than paid apps. Considering that only two differences were observed, it can be concluded that free and paid apps did not differ substantially either in their educational potential or in their features and design. This is partially in line with the content analysis of Callaghan and Reich (2018), who also did not find many differences between free and paid apps with respect to their educational features. Our results suggest that paid apps might not necessarily guarantee better app quality than free apps, at least based on our app sample.
This study also gives insight into the educational quality and design features of apps targeting pre-schoolers. Crucially, none of the previous app evaluation reviews quantified the app features during app use within the evaluated app sample. Thus, our descriptive statistics (see Table 3) are the first to present the frequency of various app features during app use, based on a wide sample of apps. In our sample, all apps had a higher frequency of cognitive activities than stimulus-reaction activities. Complex sound (two or more sounds playing simultaneously) was more frequent across all the apps than simple sound, which might add to young children's cognitive processing load while using apps (e.g., Mayer, 2014). The apps had on average 5 elements on each screen, and 18 different activity goals during the 5-minute use. On each screen, apart from the target interaction, there were on average 2 additional interactions available. The apps in our sample offered feedback for a high proportion of users' responses (78%), as compared to no feedback during app use (see Callaghan & Reich, 2018, for similar results), and a high proportion of that feedback was ostensive (74%), i.e., contained referential cues indicating what is to be learnt. These characteristics can serve as a reference point for other studies on app features.
Conclusion
In conclusion, we have presented comprehensive evaluation tools based on theories of learning and cognitive development and have shown how they can be implemented in the analyses of apps available to children. We found that the app gap associated with cost was not an issue in terms of the educational potential of the most popular apps currently available. The app gap is instead related to aesthetic features of apps rather than any observable cognitive advantage proffered by paid apps.

Notes

1. We use the term "evaluation tool" to refer to rubrics, frameworks and schemes for consistency throughout the paper.

2. The cumulative scores were not presented separately for the two groups due to the differences in sample size between the groups. We also did not present mean scores for each item for the two groups because each item was measured only on a scale of 0-2. | 2020-12-14T21:04:09.317Z | 2020-10-28T00:00:00.000 | {
"year": 2021,
"sha1": "83ba5f7c7271d9a9492462200124f4794e43ab1b",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17482798.2020.1844776?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "a4ca648e584ce334eec119f2daaf47aa245f9b3c",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
237262444 | pes2o/s2orc | v3-fos-license | Transcutaneous electrical acupoint stimulation for postoperative cognitive dysfunction in geriatric patients with gastrointestinal tumor: a randomized controlled trial
Background This study aimed to evaluate the effect of perioperative transcutaneous electrical acupoint stimulation (TEAS) on postoperative cognitive dysfunction (POCD) in older patients who were diagnosed with gastrointestinal tumor and received radical resection of gastrointestinal tumors under general anesthesia. Methods A total of 68 patients who received radical resection of gastrointestinal tumors under general anesthesia were randomly divided into two groups. Patients in the TEAS group received TEAS treatment from 30 min before the induction of anesthesia until the end of the surgery, as well as on the day before the operation and from the first to the third day after the operation. Except on the day of surgery, patients were treated for 30 min once a day. In the sham TEAS group, no electronic stimulation was applied; the treatment was otherwise the same as in the TEAS group. The primary outcome was perioperative cognition evaluated by the Mini-Mental State Examination (MMSE), and secondary outcomes were the perioperative levels of interleukin-6 (IL-6), S100 calcium-binding protein β (S100β), and C-reactive protein (CRP). Results The postoperative scores of MMSE, orientation, memory, and short-term recall in the sham TEAS group were significantly lower than the preoperative scores and those of the TEAS group (P < 0.05). The incidence of POCD in the TEAS group (21.88%) was lower than that in the sham TEAS group (40.63%). S100β, IL-6, and CRP in the TEAS group were significantly lower than those in the sham TEAS group on the third day after the operation (P < 0.05). Postoperative S100β, IL-6, and CRP in both groups were significantly higher than preoperative levels, except for S100β on the third day after the operation in the TEAS group (P < 0.05). Conclusions Perioperative TEAS treatment reduced the postoperative inflammatory response, increased the postoperative cognitive function score, and decreased the incidence of POCD in geriatric patients with gastrointestinal tumor. Trial registration ClinicalTrials.gov NCT04606888. Registered on 27 October 2020. https://register.clinicaltrials.gov.
Keywords: POCD, TEAS, Gastrointestinal tumor
Background Postoperative cognitive dysfunction (POCD) is one of the common complications of the central nervous system in cancer patients, with an incidence of 8.9-46.1% [1,2]. It mainly manifests as the decline or impairment of attention, memory, perception, abstract thinking, executive function, language response, body movement, and other functions [3,4]. It is not easy to identify, but it can last for months or years, or even become permanent [5], which can severely affect patients' postoperative recovery, prolong hospitalization, increase medical costs, impair patients' social function, reduce quality of life, and increase mortality [6].
Surgical stress and inflammation are contributing factors to the development of POCD [7]. Surgical trauma can induce a systemic inflammatory response and release systemic inflammatory mediators, such as CRP and IL-6, which can enter the central nervous system (CNS) via the relatively permeable blood-brain barrier (BBB) and activate microglial cells to secrete additional cytokines, leading to CNS inflammation [1]. To prevent the development of POCD among older patients, the discovery of effective interventions that reduce the inflammatory response is important. Perioperative treatment with parecoxib sodium [8], dexmedetomidine [9], and ulinastatin [10] can reduce the inflammatory reaction and improve postoperative cognitive function. However, these drugs have side effects, such as neutropenia, diarrhea, skin redness, and other adverse reactions. Nerve block also causes temporary loss of sensory and motor function in the innervated area and blocks the transmission of noxious stimulation, which can inhibit adverse stress reactions in the perioperative period and reduce the incidence of POCD [11]. However, it is invasive, demands advanced anesthesia techniques, and may increase patients' expenditure.
Transcutaneous electrical acupoint stimulation (TEAS) is a non-invasive acupoint stimulation therapy [12] which combines the advantages of both acupuncture and transcutaneous electrical nerve stimulation (TENS). It delivers a low-frequency pulse current to human acupoints through electrodes attached to the surface of the skin [13]. TEAS treatment can reduce intraoperative anesthetic consumption, decrease the incidence of postoperative nausea and vomiting (PONV) [14], and improve the postoperative recovery of patients [12]. Recently, TEAS treatment was found to improve the cognitive function of geriatric patients with silent lacunar infarction [13]. However, previous studies of TEAS treatment on cognition mainly focused on the intraoperative period, and the effect of perioperative TEAS treatment on POCD is not clear.
Purpose
We aimed to evaluate the effect of perioperative TEAS on POCD in older patients who were diagnosed with gastrointestinal tumor and received radical resection of gastrointestinal tumors under general anesthesia.
Study design
We conducted this prospective, randomized, single-blind, intervention-controlled clinical trial in Subei People's Hospital of Jiangsu province. The protocol was approved by the Ethics Committee of Subei People's Hospital of Jiangsu province (2020ky-046). This article was written following the CONSORT 2010 checklist (see Additional file 1).
Participants
Patients who volunteered for the study were selected from August 2020 to October 2020. All patients signed informed consent before this study.
The inclusion criteria were: ① patients aged 60 years or older; ② patients diagnosed with gastrointestinal tumor who received radical resection of gastrointestinal tumors under general anesthesia in Subei People's Hospital of Jiangsu province; ③ patients who understood the research content and signed the informed consent form; ④ American Society of Anesthesiologists (ASA) score I-III; ⑤ no frailty before the operation; and ⑥ normal D-dimer before the operation. The exclusion criteria were: ① patients with cognitive dysfunction before the operation or with a previous history of cognitive dysfunction, dementia, or delirium; ② patients with a history of severe depression, schizophrenia, or other mental and nervous system diseases, or who had taken antipsychotic or antidepressant drugs in the past; ③ patients with severe hearing or visual impairment due to eye or ear diseases, without assistive tools; ④ patients who were unable to communicate or had difficulty communicating; ⑤ patients with excessive alcohol intake according to the definition of the "China chronic disease and its risk factors monitoring report (2010)" (male average daily pure alcohol intake ≥ 61 g, female average daily pure alcohol intake ≥ 41 g; alcohol volume (g) = alcohol consumption (ml) × alcohol content % × 0.8); ⑥ patients who had been hospitalized for 3 months or more before surgery or had received surgical treatment within 3 months; ⑦ patients who could not take care of themselves or were physically disabled and unable to undergo nerve function testing; ⑧ patients with severe heart, liver, or renal failure; ⑨ patients with hypoxemia (blood oxygen saturation < 94%) for more than 10 min during the operation; ⑩ patients admitted to the ICU after the operation; ⑪ patients who withdrew or died due to non-cooperation or a sudden situation; ⑫ patients already participating in other clinical studies that might influence this study; ⑬ patients who underwent emergency surgery; and ⑭ patients with a history of recent or regular acupuncture treatment.
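The alcohol-intake definition in exclusion criterion ⑤ translates directly into a small routine. The following is a minimal Python sketch with hypothetical function names, using only the formula and thresholds cited above:

```python
def daily_pure_alcohol_g(consumption_ml: float, alcohol_content_pct: float) -> float:
    """Alcohol volume (g) = alcohol consumption (ml) x alcohol content % x 0.8."""
    return consumption_ml * (alcohol_content_pct / 100.0) * 0.8

def meets_exclusion_5(daily_grams: float, male: bool) -> bool:
    """>= 61 g/day for men, >= 41 g/day for women, per the 2010 report."""
    return daily_grams >= (61.0 if male else 41.0)

print(daily_pure_alcohol_g(500, 12))       # half a litre of 12% wine: 48 g
print(meets_exclusion_5(48.0, male=True))  # False
```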
Sample size calculation
The sample size of this research was calculated based on the following equation: n = (Z_α + Z_β)² × 2σ² / δ². α is 0.05 (two-sided) and the power (test efficiency) is 0.9. σ represents the standard deviation of the MMSE scores on the third day after the operation and δ the expected difference in mean scores between the two groups; according to the pre-experiment, σ = 1.85 and δ = 1.6. After calculation, each group needs 28 patients; expanding the sample size by 20%, the final sample size is 34 cases per group, so a total of 68 subjects were needed.
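For concreteness, the calculation can be reproduced with a short script; the following is a minimal Python sketch, assuming a two-sided α = 0.05 and power 0.9 (so Z_α ≈ 1.96 and Z_β ≈ 1.28), with σ and δ taken from the pre-experiment:

```python
import math
from statistics import NormalDist

# Two-sample formula: n = (Z_alpha + Z_beta)^2 * 2 * sigma^2 / delta^2
alpha, power = 0.05, 0.9
sigma, delta = 1.85, 1.6  # SD and expected mean difference (pre-experiment)

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided: ~1.96
z_beta = NormalDist().inv_cdf(power)           # ~1.28

n = (z_alpha + z_beta) ** 2 * 2 * sigma ** 2 / delta ** 2
n_per_group = round(n)                        # ~28 patients per group
n_with_dropout = math.ceil(n_per_group * 1.2) # +20% for dropout -> 34 per group

print(n_per_group, n_with_dropout, 2 * n_with_dropout)  # 28 34 68
```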
Randomization and blindness
Sixty-eight random numbers generated by SPSS 24.0 software (seed = 20160648) were put into sealed opaque envelopes for patients to choose. Patients were assigned to either the TEAS group or the sham TEAS group on the basis of the random numbers at a ratio of 1:1. Patients knew the number but not the allocation. The randomization process was completed by a study administrator not involved in the study. Due to the nature of TEAS, we could blind only the patients, the data collectors, and the technicians who tested the blood indices, but not the interveners. The flow chart is presented in Fig. 1.
Anesthesia management
All patients were given tracheal intubation general anesthesia without preoperative medication. After entering the operating room, intravenous access was established, and a multi-functional monitor was connected to monitor hemodynamics, electrocardiography (ECG), noninvasive blood pressure, blood oxygen saturation, and heart rate. Anesthesia induction included midazolam 0.04-0.06 mg/kg, sufentanil 0.5-1 μg/kg, and cisatracurium besilate 0.1-0.4 mg/kg. After tracheal intubation, mechanical ventilation was performed with a tidal volume of 8-10 ml/kg, a respiratory rate of 12-16 breaths/min, an inhalation-to-exhalation ratio of 1:2, and an oxygen flow rate of 2 L/min to maintain the end-tidal carbon dioxide partial pressure (PETCO2) at 30-45 mmHg. During the operation, continuous oxygen was given at 2 L/min, and sevoflurane 1.0-1.5%, sufentanil 0.2 μg/(kg·h), dexmedetomidine 0.2-0.4 μg/(kg·h), cisatracurium besilate 0.06-0.12 mg/(kg·h), and propofol 6-9 mg/(kg·h) were used to maintain the bispectral index (BIS) at 40-60. All patients were given sufentanil combined with dezocine for patient-controlled intravenous analgesia (PCIA). The PCIA configuration was sufentanil 5-15 μg/kg + dezocine 0.1-0.4 mg/kg.

Intervention

According to the theory of "Xingnao Kaiqiao Acupuncture" in traditional Chinese medicine [15], three acupuncture points were selected as the target points: bilateral Neiguan [16] (PC6, on the palmar side of the forearm, 3 cm proximal to the transverse carpal crease, between the palmaris longus tendon and the flexor carpi radialis tendon), Yintang [17] (GV29, on the forehead, between the two eyebrows), and bilateral Zusanli [18] (ST36, 3 inches below the outer knee). Patients in the TEAS group received perioperative TEAS from two experienced researchers; the treatment ran from 30 min before the induction of anesthesia until the end of the surgery, as well as on the day before the operation and from the first to the third day after the operation. Except on the day of surgery, we treated the patients for 30 min once a day. We used transcutaneous electrical stimulators (SDZ-III, Suzhou Medical Technology, Suzhou, China) to provide an alternating frequency of 2/100 Hz with disperse-dense waves and an adjusted intensity of less than 10 mA. In the sham TEAS group, the electrodes were placed at the same acupoints as in the TEAS group, but no electronic stimulation was applied, and patients were told that they might not feel electrical stimulation (Fig. 2). To deliver the intervention, the researchers had to learn the function and positioning of the acupoints, the inspection and use of the instrument, and clinical practice in the traditional Chinese medicine department of the hospital, over 7 sessions of 1 h each. Only researchers who passed the final assessment in traditional Chinese medicine could begin this intervention.
Primary outcome
The primary outcome of this study was the change in cognition between the morning of the day before the operation and the third day after the operation. The Mini-Mental State Examination (MMSE) was used to evaluate cognition by experienced researchers trained in neuropsychological assessment. The MMSE is a 30-point questionnaire used to measure orientation (time and place), memory (immediate and short term), attention, calculation, and language (naming, repetition, listening, reading, and writing) [19,20]. A higher score means better cognitive function. According to the educational level of the subjects, scores of < 17 for illiterate subjects, < 20 for primary school, and < 24 for middle school and above were defined as cognitive impairment. A decrease of 2 points in the score after the operation was considered to indicate POCD.

[Fig. 2: Location of acupoints]
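These MMSE-based definitions translate directly into a classification routine. The following is a minimal Python sketch; the function names and education labels are ours, and we read the 2-point rule as a decrease of at least 2 points:

```python
MMSE_CUTOFFS = {"illiterate": 17, "primary_school": 20, "middle_school_or_above": 24}

def cognitive_impairment(score: int, education: str) -> bool:
    """Education-adjusted cutoff: a score below the cutoff indicates impairment."""
    return score < MMSE_CUTOFFS[education]

def pocd(pre_op_score: int, post_op_score: int) -> bool:
    """A postoperative decrease of 2 (or more) points is taken as POCD."""
    return pre_op_score - post_op_score >= 2

print(cognitive_impairment(22, "middle_school_or_above"))  # True
print(pocd(28, 25))                                        # True
```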
Secondary outcomes
Interleukin-6 (IL-6), S100 calcium-binding protein β (S100β), and C-reactive protein (CRP) were recorded before the operation and on the 1st and 3rd days after the operation. A 4-ml blood sample was taken each time; the blood samples were immediately centrifuged at 3000 rpm for 10 min to collect serum and stored in a freezer at −80 °C until assayed. The concentrations of IL-6 and S100β in serum were quantified with a commercial ELISA kit (Beyotime, China), and each blood sample was assayed 3 times. The C-reactive protein (CRP) values were extracted from the case records.
Statistical analyses
All research data were analyzed with IBM SPSS software 24.0. Normally distributed data were described as the mean ± SD, and comparisons between the two groups were performed with the independent-samples t test. Non-normally distributed data were described as the median (interquartile range), and comparisons between the two groups were performed with non-parametric tests. Categorical variables were described as frequencies (f) and percentages (%), and comparisons between the two groups were performed with the chi-squared test and Fisher's exact test.
Results
A total of 68 patients were enrolled in the study; 4 (5.9%) dropped out because they declined to undergo the procedure, and the records of the remaining 64 patients were analyzed.
General information of patients
The smoking index was calculated as the number of cigarettes per day multiplied by the years of smoking. No significant differences were observed between the two groups in general information (P > 0.05, Table 1).
Perioperative cognition
On the third day after the operation, the total MMSE score and the scores of orientation, memory, and short-term recall in the sham TEAS group were significantly lower than the preoperative scores and those of the TEAS group (P < 0.05, Table 2, Fig. 3). The incidence of POCD in the TEAS group was 21.88%, which was lower than that in the sham TEAS group (40.63%).
S100β, IL-6, and CRP Levels
There were 4 missing values of CRP before the operation, 2 in the TEAS group and 2 in the sham TEAS group. We replaced the missing values with the series mean of 3.34. S100β, IL-6, and CRP in the TEAS group were significantly lower than those in the sham TEAS group on the third day after the operation. S100β, IL-6, and CRP in both groups were significantly higher than preoperative levels (P < 0.05), except for S100β on the third day after the operation in the TEAS group (Fig. 4, Table 3).
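For reference, series-mean imputation of the kind used for the preoperative CRP is a one-liner; the sketch below uses purely illustrative values (the actual series mean in the study was 3.34):

```python
import pandas as pd

crp_pre = pd.Series([2.1, None, 4.5, None, 3.4])  # illustrative values only
crp_pre = crp_pre.fillna(crp_pre.mean())          # replace missing values with the series mean
print(crp_pre.tolist())
```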
Safety analysis
All treatment-related adverse events (AEs) were monitored and documented throughout the trial within 24 h after each treatment, through direct observation by the researchers and self-report by the patients. Potential AEs of TEAS in the trial included continuous post-electrostimulation sensation and skin numbness; allergic reaction; fainting; vomiting; and redness, swelling, pain, or other injury to the skin during the treatment. No AEs were reported in either group during the clinical trial.
Discussion
In this study, we preliminarily explored the effect of perioperative TEAS treatment on the levels of S100β, IL-6, and CRP and on the incidence of POCD. Many studies have shown beneficial effects of TEAS [21,22]; however, the selection of acupoints differs between studies. According to the theory of "Xingnao Kaiqiao Acupuncture" [15] in traditional Chinese medicine, we chose bilateral Neiguan (PC6), Yintang (GV29), and bilateral Zusanli (ST36) as the target points. PC6 belongs to the acupoints of the pericardium meridian of hand Jueyin. The pericardial meridian is related to people's mental activities and thinking consciousness and is closely related to the function of the brain. It is the first choice for brain injury, with the functions of calming panic, relieving palpitation, nourishing the heart and mind, broadening the chest, and regulating Qi [23]. Acupuncture at PC6 can promote the balance of oxygen supply and demand of brain cells, improve the blood circulation of brain tissue, and ultimately improve the cognition of dementia patients [24]. GV29 is an important acupoint of the governor vessel, with the function of refreshing the brain and activating collaterals [24]; electro-acupuncture at GV29 can improve postoperative cognition [25]. ST36 is one of the main acupoints of the stomach meridian of Foot Yangming. It has the functions of regulating the spleen and stomach, tonifying the middle, replenishing Qi, dredging the meridians, activating collaterals, dispersing wind and dampness, strengthening the body, and eliminating pathogenic factors [26]. Acupuncture at ST36 can activate most parts of the brain, especially the temporal lobe, thereby regulating body movement, sensation, language, learning and memory, mental emotion, and internal organ activities [26]. The MMSE is one of the most influential, popular, and commonly used cognition screening scales in the world [27,28]. Our study showed that perioperative TEAS can improve the total MMSE score and decrease the incidence of POCD on the third day after the operation. Perioperative TEAS treatment targeting cognition is relatively rare, although some studies have shown that TEAS treatment from 30 min before anesthesia to the end of surgery can reduce the incidence of POCD in patients undergoing radical thoracoscopic lung cancer operations and gynecological laparoscopic surgery [28]. In addition, our study showed for the first time that TEAS treatment can increase the scores of three dimensions of cognition, namely orientation, memory, and short-term recall; this finding needs more studies to confirm. S100β is a biomarker involved in the mechanism of cognitive impairment, which exists in several cell types in the peripheral and central nervous system (CNS) [29,30]. IL-6 is a pro-inflammatory cytokine which can mediate inflammatory and immune responses in the CNS [31]. CRP is a non-specific biomarker of systemic bodily inflammation [32,33] which can accelerate the development of neurodegenerative disorders by activating microglia, increasing levels of pro-inflammatory cytokines, and activating the complement cascade [34,35]. In this study, the values of S100β and IL-6 were detected twice; therefore, in order to avoid the influence of experimental error, the method of averaging was adopted to evaluate the levels of S100β and IL-6. Our study showed that perioperative TEAS treatment decreased postoperative S100β, IL-6, and CRP, which is consistent with other studies [13,36,37].
However, these three inflammatory cytokines showed no difference between the two groups on the first day after the operation [38,39]. The reason for this result may be the small sample size, so more studies are needed to confirm this result in the future.
Since previous studies of TEAS treatment on cognition mainly focused on the intraoperative period and the effect of perioperative TEAS on POCD was unclear, we investigated the effect of perioperative TEAS treatment on POCD. However, our study had limitations. We implemented perioperative TEAS treatment only in geriatric patients with gastrointestinal tumor; whether this treatment is effective in other diseases needs to be identified in further studies. Furthermore, cognition was measured only up to the third day after the operation, and longer-term follow-up, such as at postoperative 7 days, 1 month, and 1 year, may yield different results.
Conclusion
In summary, perioperative TEAS treatment can reduce the postoperative inflammatory response, increase the postoperative cognitive function score, and decrease the incidence of POCD in geriatric patients with gastrointestinal tumor.

[Fig. 4: Perioperative S100β, IL-6, and CRP levels. Note: Preoperative (Pre); Postoperative (Post-). *P < 0.05: compared with the sham TEAS group]

[Table 3: Perioperative S100β, IL-6, and CRP levels of the two groups of patients]

Abbreviations: TEAS: Transcutaneous electrical acupoint stimulation; POCD: Postoperative cognitive dysfunction; MMSE: Mini-Mental State Examination; CRP: C-reactive protein; IL-6: Interleukin-6; S100β: S100 calcium-binding protein β; AEs: Adverse events | 2021-08-23T13:19:56.466Z | 2021-08-23T00:00:00.000 | {
"year": 2021,
"sha1": "5887f4d96968d63af4cf654dda8fd3f96365ec13",
"oa_license": "CCBY",
"oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/s13063-021-05534-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5887f4d96968d63af4cf654dda8fd3f96365ec13",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1994218 | pes2o/s2orc | v3-fos-license | Incremental HMM Alignment for MT System Combination
Inspired by the incremental TER alignment, we re-designed the Indirect HMM (IHMM) alignment, which is one of the best hypothesis alignment methods for conventional MT system combination, in an incremental manner. One crucial problem of incremental alignment is to align a hypothesis to a confusion network (CN). Our incremental IHMM alignment is implemented in three different ways: 1) treat CN spans as HMM states and define state transition as distortion over covered n-grams between two spans; 2) treat CN spans as HMM states and define state transition as distortion over words in component translations in the CN; and 3) use a consensus decoding algorithm over one hypothesis and multiple IHMMs, each of which corresponds to a component translation in the CN. All three approaches to incremental alignment based on IHMM are shown to be superior to both incremental TER alignment and conventional IHMM alignment in the setting of the Chinese-to-English track of the 2008 NIST Open MT evaluation.
Introduction
Word-level combination using confusion networks (Matusov et al. (2006) and Rosti et al. (2007)) is a widely adopted approach for combining Machine Translation (MT) systems' output. Word alignment between a backbone (or skeleton) translation and a hypothesis translation is a key problem in this approach. Translation Edit Rate (TER, Snover et al. (2006)) based alignment proposed in Sim et al. (2007) is often taken as the baseline, and a couple of other approaches, such as the Indirect Hidden Markov Model (IHMM, He et al. (2008)) and the ITG-based alignment (Karakos et al. (2008)), were recently proposed with better reported results. With an alignment method, each hypothesis is aligned against the backbone, and all the alignments are then used to build a confusion network (CN) for generating a better translation.
However, as pointed out by Rosti et al. (2008), such a pair-wise alignment strategy will produce a low-quality CN if there are errors in the alignment of any of the hypotheses, no matter how good the alignments of the other hypotheses are. For example, suppose we have the backbone "he buys a computer" and two hypotheses "he bought a laptop computer" and "he buys a laptop". It will be natural for most alignment methods to produce the alignments in Figure 1a. The alignment of hypothesis 2 against the backbone cannot be considered an error if we consider only these two translations; nevertheless, when added with the alignment of another hypothesis, it produces the low-quality CN in Figure 1b, which may generate poor translations like "he bought a laptop laptop". While it could be argued that such poor translations are unlikely to be selected due to the language model, this CN does disperse the votes for the word "laptop" over two distinct arcs. Rosti et al. (2008) showed that this problem can be rectified by incremental alignment. If hypothesis 1 is first aligned against the backbone, the CN thus produced (depicted in Figure 2a) is then aligned to hypothesis 2, giving rise to the good CN depicted in Figure 2b.

[Figure 1: An example bad confusion network due to pair-wise alignment strategy]

On the other hand, the correct result depends on the order of hypotheses. If hypothesis 2 is aligned before hypothesis 1, the final CN will not be good. Therefore, the observation in Rosti et al. (2008) that different orders of hypotheses do not affect translation quality is counter-intuitive. This paper attempts to answer two questions: 1) as incremental TER alignment gives better performance than pair-wise TER alignment, would the incremental strategy still be better than the pair-wise strategy if the TER method is replaced by another alignment method? 2) how does translation quality vary for different orders of hypotheses being incrementally added into a CN? For question 1, we will focus on the IHMM alignment method and propose three different ways of implementing incremental IHMM alignment. Our experiments will also try several orders of hypotheses in response to question 2. This paper is structured as follows. After setting the notation on CNs in section 2, we will first introduce, in section 3, two variations of the basic incremental IHMM model (IncIHMM1 and IncIHMM2). In section 4, a consensus decoding algorithm (CD-IHMM) is proposed as an alternative way to search for the optimal alignment. The issues of alignment normalization and the order of hypotheses being added into a CN are discussed in sections 5 and 6 respectively. Experiment results and analysis are presented in section 7.
Preliminaries: Notation on Confusion Network
Before the elaboration of the models, let us first clarify the notation on CNs. A CN is usually described as a finite state graph with many spans. Each span corresponds to a word position and contains several arcs, each of which represents an alternative word (which could be the empty symbol ε) at that position. Each arc is also associated with M weights in an M-way system combination task. Following Rosti et al. (2007), the i-th weight of an arc is 1/(1+r), where r is the rank of the hypothesis in the i-th system that votes for the word represented by the arc. This conception of a CN is called the conventional or compact form of CN. The networks in Figures 1b and 2b are examples.
On the other hand, as a CN is an integration of the skeleton and all hypotheses, it can be conceived as a list of the component translations. For example, the CN in Figure 2b can be converted to the form in Figure 3. In such an expanded or tabular form, each row represents a component translation. Each column, which is equivalent to a span in the compact form, comprises the alternative words at a word position. Thus each cell represents an alternative word at a certain word position voted for by a certain translation. Each row is assigned the weight 1/(1+r), where r is the rank of the translation in some MT system's n-best list. It is assumed that all MT systems are weighted equally and thus the rank-based weights from different systems can be compared to each other without adjustment. The weight of a cell is the same as the weight of the corresponding row.

[Figure 3: An example of confusion network in tabular form]

In this paper the elaboration of the incremental IHMM models is based on such a tabular form of CN.
Let E_1^I = (E_1 . . . E_I) denote the backbone CN, and e_1^J = (e_1 . . . e_J) denote a hypothesis being aligned to the backbone. Each e_j is simply a word in the target language. However, each E_i is a span, or a column, of the CN. We will also use E(k) to denote the k-th row of the tabular form CN, and E_i(k) to denote the cell at the k-th row and the i-th column. W(k) is the weight for E(k). Note that E(k) contains the same bag-of-words as the k-th original translation, but may have a different word order. Note also that E(k) represents a word sequence with inserted empty symbols; the sequence with all inserted symbols removed is known as the compact form of E(k).
The Basic IncIHMM Model
A naïve application of the incremental strategy to IHMM is to treat a span in the CN as an HMM state. Like He et al. (2008), the conditional probability of the hypothesis given the backbone CN can be decomposed into a similarity model and a distortion model:

p(e_1^J | E_1^I) = Σ_{a_1^J} Π_{j=1}^{J} [ p(a_j | a_{j−1}, I) · p(e_j | E_{a_j}) ]    (1)

The similarity between a hypothesis word e_j and a span E_i is simply a weighted sum of the similarities between e_j and each word contained in E_i:

p(e_j | E_i) = Σ_k W(k) · p(e_j | E_i(k))    (2)

The similarity between two words is estimated in exactly the same way as in conventional IHMM alignment.
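For illustration, equation 2 can be computed directly from the tabular CN; the following is a minimal Python sketch, where word_similarity stands in for the (unspecified here) IHMM word-pair similarity model:

```python
def span_similarity(e_j, span_cells, row_weights, word_similarity):
    """p(e_j | E_i) = sum_k W(k) * p(e_j | E_i(k))   (equation 2)."""
    return sum(w * word_similarity(e_j, cell)
               for cell, w in zip(span_cells, row_weights))

# Toy usage: exact match scores 1.0, anything else 0.1 (placeholder model).
sim = lambda a, b: 1.0 if a == b else 0.1
print(span_similarity("laptop", ["laptop", "computer"], [0.5, 0.5], sim))  # 0.55
```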
As to the distortion model, the incremental IHMM model also groups distortion parameters into a few 'buckets' c(d), one for each jump distance d. The problem in incremental IHMM is when to apply a bucket. In conventional IHMM, the transition from state i to state j has probability

p(j | i, I) = c(j − i) / Σ_{j′=1}^{I} c(j′ − i)    (3)

It is tempting to apply the same formula to the transitions in incremental IHMM. However, the backbone in incremental IHMM has the special property that it is gradually expanding due to the insertion operator. For example, suppose that initially the backbone CN contains the option e_i in the i-th span and the option e_{i+1} in the (i+1)-th span. After the first round of alignment, perhaps e_i is aligned to the hypothesis word e_j, e_{i+1} to e_{j+2}, and the hypothesis word e_{j+1} is left unaligned. The consequent CN then has an extra span containing the option e_{j+1} inserted between the i-th and (i+1)-th spans of the initial CN. If the distortion buckets are applied as in equation 3, then in the first round of alignment the transition from the span containing e_i to that containing e_{i+1} is based on the bucket c(1), but in the second round of alignment the same transition will be based on the bucket c(2). It is therefore not reasonable to apply equation 3 to such a gradually extending backbone, as the monotonic alignment assumption behind the equation no longer holds. There are two possible ways to tackle this problem. The first solution estimates the transition probability as a weighted average of different distortion probabilities, whereas the second solution converts the distortion over spans into the distortion over the words in each hypothesis E(k) in the CN.
Distortion Model 1: simple weighting of covered n-grams
Distortion Model 1 shifts the monotonic alignment assumption from spans of CN to n-grams covered by state transitions. Let us illustrate this point with the following examples.
In conventional IHMM, the distortion probability p(i+1 | i, I) is applied to the transition from state i to i+1 given I states, because such a transition jumps across only one word, viz. the i-th word of the backbone. In incremental IHMM, suppose the i-th span covers two arcs e_a and ε, with probabilities p_1 and p_2 = 1 − p_1 respectively; then the transition from state i to i+1 jumps across one word (e_a) with probability p_1 and jumps across nothing with probability p_2. Thus the transition probability should be p_1 · p(i+1 | i, I) + p_2 · p(i | i, I).
Suppose further that the (i+1)-th span covers two arcs e_b and ε, with probabilities p_3 and p_4 respectively. Then the transition from state i to i+2 covers 4 possible cases: 1. nothing (ε) with probability p_2 · p_4; 2. the unigram e_a with probability p_1 · p_4; 3. the unigram e_b with probability p_2 · p_3; 4. the bigram e_a e_b with probability p_1 · p_3. Accordingly the transition probability should be

p_2 p_4 · p(i | i, I) + (p_1 p_4 + p_2 p_3) · p(i+1 | i, I) + p_1 p_3 · p(i+2 | i, I).

The estimation of the transition probability can be generalized to any transition from i to i′ by expanding all possible n-grams covered by the transition and calculating the corresponding probabilities. We enumerate all possible cell sequences S(i, i′) covered by the transition from span i to i′; each sequence is assigned the product of the probabilities of its cells, where the cell at each span lies on some row E(k). Since a cell may represent an empty word, a cell sequence may represent an n-gram with n anywhere between 0 and i′ − i. We denote by |S(i, i′)| the length of the n-gram represented by a particular cell sequence S(i, i′). All the cell sequences S(i, i′) can thus be classified, with respect to the length of the corresponding n-grams, into a set of parameters, where the element with a particular value of n has probability

P_n(i, i′) = Σ_{S(i,i′) : |S(i,i′)| = n} p(S(i, i′)).

The probability of the transition from i to i′ is then

p(i′ | i, I) = Σ_n P_n(i, i′) · p(i + n | i, I).    (4)

That is, the transition probability of incremental IHMM is a weighted sum of probabilities of 'n-gram jumping', defined as conventional IHMM distortion probabilities. However, in practice it is not feasible to expand all possible n-grams covered by any transition, since the number of n-grams grows exponentially. Therefore a length limit L is imposed such that for all state transitions where |i′ − i| ≤ L, the transition probability is calculated as in equation 4; otherwise it is calculated by chaining shorter transitions through some intermediate span q between i and i′. In other words, the probability of a longer state transition is estimated in terms of the probabilities of transitions shorter than or equal to the length limit. All the state transitions can be calculated efficiently by dynamic programming.
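To make this computation concrete, the following Python sketch (our illustration, not code from the paper) derives, by dynamic programming, the probability P_n that a transition covers exactly n real words, and combines these with a conventional distortion function; the uniform distortion used in the example is only a placeholder for the bucketed model of equation 3:

```python
def covered_ngram_distribution(spans):
    """spans: one dict per span jumped across, mapping a word (or None for
    the empty word) to its probability. Returns dist[n] = probability that
    exactly n real words are covered by the transition."""
    dist = {0: 1.0}
    for span in spans:
        new_dist = {}
        for n, p in dist.items():
            for word, q in span.items():
                m = n + (word is not None)
                new_dist[m] = new_dist.get(m, 0.0) + p * q
        dist = new_dist
    return dist

def transition_prob(i, spans, conventional_distortion, I):
    """Weighted sum over n-gram jumps: sum_n P_n * p(i + n | i, I) (equation 4)."""
    return sum(p_n * conventional_distortion(i + n, i, I)
               for n, p_n in covered_ngram_distribution(spans).items())

# Example from the text: span i holds {e_a: p1, eps: p2}, span i+1 holds {e_b: p3, eps: p4}.
p1, p2, p3, p4 = 0.7, 0.3, 0.6, 0.4
spans = [{"e_a": p1, None: p2}, {"e_b": p3, None: p4}]
uniform = lambda j, i, I: 1.0 / I  # placeholder distortion, assumption only
print(transition_prob(2, spans, uniform, I=5))
```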
A fixed value P_0 is assigned to transitions to the null state, and this value can be optimized on held-out data. The overall distortion model thus combines the span transition probabilities of equation 4 with the null-state transition probability P_0.
Distortion Model 2: weighting of distortions of component translations
The cause of the problem of distortion over CN spans is the gradual extension of the CN due to the inserted empty words. Therefore, the problem will disappear if the inserted empty words are removed. The rationale of Distortion Model 2 is that the distortion model is defined over the actual word sequence in each component translation E(k). Distortion Model 2 implements a CN in such a way that the real position of the i-th word of the k-th component translation can always be retrieved. The real position of E_i(k), δ(i, k), refers to the position of the word represented by E_i(k) in the compact form of E(k) (i.e., the form without any inserted empty words), or, if E_i(k) represents an empty word, the position of the nearest preceding non-empty word. For convenience, we also denote by δ_s(i, k) the null state associated with the state of the real word δ(i, k). Similarly, the real length of E(k), L(k), refers to the number of non-empty words of E(k).
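The real-position function δ can be computed in one left-to-right pass over a row; below is a minimal Python sketch, assuming the empty word is represented by None (the representation is ours, for illustration):

```python
EPS = None  # placeholder for the empty word

def real_positions(row):
    """delta(i, k) for one row E(k): for a real word, its (1-based) position in
    the compact form of E(k); for an empty cell, the position of the nearest
    preceding non-empty word (0 if there is none; this edge handling is ours)."""
    delta, pos = [], 0
    for cell in row:
        if cell is not EPS:
            pos += 1
        delta.append(pos)
    return delta

def real_length(row):
    """L(k): the number of non-empty words of E(k)."""
    return sum(cell is not EPS for cell in row)

row = ["he", EPS, "buys", "a", "computer"]
print(real_positions(row))  # [1, 1, 2, 3, 4]
print(real_length(row))     # 4
```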
The transition from span i to i′ is then defined as the weighted sum of the row-wise distortions,

p(i′ | i) = Σ_k W(k) · p_k(i′ | i),

where k is the row index of the tabular form CN.
Depending on E_i(k) and E_{i′}(k), p_k(i′ | i) is computed as follows: 1. if both E_i(k) and E_{i′}(k) represent real words, then p_k(i′ | i) = p(δ(i′, k) | δ(i, k), L(k)), where p refers to the conventional IHMM distortion probability as defined by equation 3.
2. if E_i(k) represents a real word but E_{i′}(k) the empty word, then the transition ends in a null state. Like conventional HMM-based word alignment, the probability of the transition from a null state to a real word state is the same as that of the transition from the real word state associated with that null state to the other real word state. Therefore: 3. if E_i(k) represents the empty word but E_{i′}(k) a real word, then the transition probability is that from the associated real word state δ(i, k) to δ(i′, k). The second option is due to the constraint that a null state is accessible only from itself or from the real word state associated with it. Therefore, the transition from i to i′ is in fact composed of a first transition from i to δ(i′, k) and a second transition from δ(i′, k) to the null state at i′.
if both E i (k)
and E i (k) represent the empty word, then, with similar logic as cases 2 and 3,
Incremental Alignment using Consensus Decoding over Multiple IHMMs
The previous section describes an incremental IHMM model in which the state space is based on the CN taken as a whole. An alternative approach is to treat the rows (component translations) of the CN as individuals, and to transform the alignment of a hypothesis against the entire network into alignments against the individual translations. Each individual translation constitutes an IHMM, and the optimal alignment is obtained by consensus decoding over these multiple IHMMs. Alignment over multiple sequential patterns has been investigated in different contexts. For example, Nair and Sreenivas (2007) proposed multi-pattern dynamic time warping (MPDTW) to align multiple speech utterances to each other. However, these methods usually assume that the alignment is monotonic. In this section, a consensus decoding algorithm that searches for the optimal (non-monotonic) alignment between a hypothesis and a set of translations in a CN (which are already aligned to each other) is developed as follows.
A prerequisite of the algorithm is a function for converting a span index into the corresponding HMM state index of a component translation. The two functions δ and δ_s defined in section 3.2 are used to define a new function δ̃(i, k), which maps span i to the real word state δ(i, k) if E_i(k) is a real word and to the null state δ_s(i, k) otherwise. Accordingly, given the alignment a_1^J = a_1 . . . a_J of a hypothesis (with J words) against a CN (where each a_j is an index referring to a span of the CN), we can obtain the alignment ã_k = δ̃(a_1, k) . . . δ̃(a_J, k) between the hypothesis and the k-th row of the tabular CN. The real length function L(k) is also used to obtain the number of non-empty words of E(k).
Given the k-th row of a CN, E(k), an IHMM λ(k) is formed, and the cost of the pair-wise alignment ã_k between a hypothesis h and λ(k) is defined as the negative log-probability of that alignment:

C(ã_k; λ(k)) = −log p(h, ã_k | λ(k)).

The cost of the alignment of h against the CN is then defined as the weighted sum of the costs of the K alignments ã_k:

C(a_1^J; Λ) = Σ_k W(k) · C(ã_k; λ(k)),

where Λ = {λ(k)} is the set of pair-wise IHMMs and W(k) is the weight of the k-th row. The optimal alignment â is the one that minimizes this cost:

â = argmin_{a_1^J} C(a_1^J; Λ).

A Viterbi-like dynamic programming algorithm can be developed to search for â by treating CN spans as HMM states, with a pseudo emission probability

P(e_j | E_{a_j}) = Π_k p(e_j | E_{a_j}(k))^{W(k)}

and a pseudo transition probability

P(j | i) = Π_k p_k(j | i)^{W(k)}.

Note that P(e_j | E_{a_j}) and P(j | i) are not true probabilities and do not have the sum-to-one property.
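Under this weighted-sum-of-costs formulation, combining the row-wise probabilities amounts to a weighted geometric mean; the following minimal Python sketch (our illustration of the relationship, not code from the paper) checks the equivalence numerically:

```python
import math

def weighted_cost(log_probs, weights):
    """Weighted sum of per-row costs, where each cost is -log p_k."""
    return sum(w * -lp for lp, w in zip(log_probs, weights))

def pseudo_prob(probs, weights):
    """prod_k p_k ** W(k): minimising the weighted cost above is the same as
    maximising this product, since -log of the product equals that cost."""
    return math.exp(sum(w * math.log(p) for p, w in zip(probs, weights)))

probs, weights = [0.5, 0.25, 0.8], [0.5, 0.33, 0.25]
assert abs(-math.log(pseudo_prob(probs, weights))
           - weighted_cost([math.log(p) for p in probs], weights)) < 1e-12
```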
Alignment Normalization
After alignment, the backbone CN and the hypothesis can be combined to form an even larger CN. The same principles and heuristics for the construction of CNs in conventional system combination approaches can be applied. Our incremental alignment approaches adopt the same heuristics for alignment normalization as stated in He et al. (2008), with one exception: 1-N mappings are not converted into 1-1 mappings, since this conversion leads to N−1 insertions in the CN and therefore extends the network to an unreasonable length. The Viterbi alignment is abandoned if it contains a 1-N mapping. The best alignment containing no 1-N mapping is then searched for among the N-best alignments, in a way inspired by Nilsson and Goldberger (2001). For example, if both hypothesis words e_1 and e_2 are aligned to the same backbone span E_1, then all alignments a_{j∈{1,2}} = i (where i ≠ 1) will be examined. The alignment leading to the smallest reduction of the Viterbi probability when it replaces the alignment a_{j∈{1,2}} = 1 will be selected.
Order of Hypotheses
The default order of hypotheses in Rosti et al. (2008) is to rank the hypotheses in descending order of their TER scores against the backbone. This paper attempts several other orders. The first one is system-based order, i.e., we assume an arbitrary order of the MT systems and feed all the translations (in their original order) from one system before the translations from the next system. The rationale behind the system-based order is that the translations from the same system are much more similar to each other than to the translations from other systems, and it might be better to build the CN by incorporating similar translations first. The second one is N-best rank-based order, which means that, rather than keeping the translations from the same system as a block, we feed the top-1 translations from all systems in some order of systems, then the second-best translations from all systems, and so on. The presumption behind the rank-based order is that top-ranked hypotheses are more reliable, and it seems beneficial to incorporate more reliable hypotheses as early as possible. These two kinds of order involve a certain degree of randomness, as the order of systems is arbitrary. Such randomness can be removed by imposing a Bayes Risk order on MT systems, i.e., arranging the MT systems in ascending order of the Bayes Risk of their top-1 translations. These four orders of hypotheses are summarized in Table 1. We also tried some intuitively bad orders of hypotheses, including the reversals of these four orders and a random order.
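The two basic orders can be spelled out programmatically; a minimal Python sketch (our own illustration), where hyps[s] is the n-best list of system s:

```python
def system_based_order(hyps):
    """All translations of one system (in their original order), then the next system's."""
    return [h for system in hyps for h in system]

def rank_based_order(hyps):
    """Top-1 translations of all systems first, then all second-best ones, and so on."""
    n_best = max(len(system) for system in hyps)
    return [system[r] for r in range(n_best) for system in hyps if r < len(system)]

hyps = [["a1", "a2"], ["b1", "b2"]]
print(system_based_order(hyps))  # ['a1', 'a2', 'b1', 'b2']
print(rank_based_order(hyps))    # ['a1', 'b1', 'a2', 'b2']
```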
Evaluation
The experiments are conducted in the setting of the Chinese-to-English track of the 2008 NIST Open MT evaluation (NIST (2008)). In the following sections, the incremental IHMM approaches using distortion models 1 and 2 are named IncIHMM1 and IncIHMM2 respectively, and the consensus decoding of multiple IHMMs is named CD-IHMM. The baselines include the TER-based method in Rosti et al. (2007), the incremental TER method in Rosti et al. (2008), and the IHMM approach in He et al. (2008). The development (dev) set comprises the newswire and newsgroup sections of MT06, whereas the test set is the entire MT08. The 10-best translations for every source sentence in the dev and test sets are collected from eight MT systems. Case-insensitive BLEU-4, presented in percentage, is used as the evaluation metric.
The various parameters in the IHMM model are set to the optimal values found in He et al. (2008). The lexical translation probabilities used in the semantic similarity model are estimated from a small portion (FBIS + GALE) of the constrained track training data, using the standard HMM alignment model (Och and Ney (2003)). The backbone of the CN is selected by MBR. The loss function used for the TER-based approaches is TER and that for the IHMM-based approaches is BLEU. As to the incremental systems, the default order of hypotheses is the ascending order of TER score against the backbone, which is the order proposed in Rosti et al. (2008). The default order of hypotheses for our three incremental IHMM approaches is the N-best rank order with the Bayes Risk order of systems, which is empirically found to give the highest BLEU score. Once the CN is built, the final system combination output can be obtained by decoding it with a set of features and decoding parameters. The features we used include word confidences, language model score, word penalty and empty word penalty. The decoding parameters are trained by maximum BLEU training on the dev set. The training and decoding processes are the same as described by Rosti et al. (2007). Table 2 lists the BLEU scores achieved by the three baseline combination methods and IncIHMM2. The comparison between the pair-wise and incremental TER methods justifies the superiority of the incremental strategy. However, the benefit of incremental TER over pair-wise TER is smaller than that reported in Rosti et al. (2008), which may be due to the different test set and other experimental conditions. The comparison between the two pair-wise alignment methods shows that IHMM gives a 0.7 BLEU point gain over TER, which is a bit smaller than the difference reported in He et al. (2008). The possible causes of this discrepancy include the different dev set and the smaller training set for estimating the semantic similarity parameters. Despite that, the pair-wise IHMM method is still a strong baseline. Table 2 also shows the performance of IncIHMM2, our best incremental IHMM approach. It is almost one BLEU point higher than the pair-wise IHMM baseline and much higher than the two TER baselines.

Comparison among the Incremental IHMM Models

[Table 3: Comparison between the three incremental IHMM approaches]

The rationale of both distortion models is to shift the distortion over spans to the distortion over word sequences. In distortion model 2 the word sequences are those available in one of the component translations in the CN. Distortion model 1 is more encompassing, as it also considers word sequences combined from subsequences of various component translations. However, as mentioned in section 3.1, the number of sequences grows exponentially and there is therefore a limit L on the length of sequences. In general, a limit L ≥ 8 would render the tuning/decoding process intolerably slow. We tried the values 5 to 8 for L, and the variation in performance is less than 0.1 BLEU point. That is, distortion model 1 cannot be improved by tuning L. The similar BLEU scores shown in Table 3 imply that the incorporation of more word sequences in distortion model 1 does not lead to extra improvement.

Although consensus decoding is conceptually different from both variations of IncIHMM, it can indeed be transformed into a form similar to IncIHMM2. IncIHMM2 calculates the parameters of the IHMM as a weighted sum of various probabilities of the component translations. In contrast, the equations in section 4 show that CD-IHMM calculates the weighted sum of the logarithms of those probabilities of the component translations. In other words, IncIHMM2 makes use of the sum of probabilities whereas CD-IHMM makes use of the product of probabilities. The experiment results indicate that the interaction between the weights and the probabilities is more fragile in the product case than in the summation case.

Impact of Order of Hypotheses

[Table 4: Comparison between various orders of hypotheses. 'System' means system-based order; 'Rank' means N-best rank-based order; 'BR' means Bayes Risk order of systems. The numbers are the BLEU scores on the test set.]

Table 4 lists the BLEU scores on the test set achieved by IncIHMM1 using different orders of hypotheses. The column 'reversal' shows the impact of a deliberately bad order, viz. more than one BLEU point lower than the best order. The random order is a baseline for not caring about the order of hypotheses at all, and is about 0.7 BLEU points lower than the best order. Among the orders with good performance, it is observed that the N-best rank order leads to about 0.2 to 0.3 BLEU points of improvement, and that the Bayes Risk order of systems does not improve performance very much. In sum, the performance of incremental alignment is sensitive to the order of hypotheses, and the optimal order is defined in terms of the rank of each hypothesis on some system's n-best list.
Conclusions
This paper investigates the application of the incremental strategy to IHMM, one of the state-of-the-art alignment methods for MT output combination. Such a task is subject to the problem of how to define state transitions on a gradually expanding CN. We proposed three different solutions, which share the principle that transitions over CN spans must be converted into transitions over the word sequences provided by the component translations. While the consensus decoding approach does not improve performance much, the two distortion models for incremental IHMM (IncIHMM1 and IncIHMM2) give superb performance in comparison with pair-wise TER, pair-wise IHMM, and incremental TER. We also showed that the order of hypotheses is important, as a deliberately bad order would reduce translation quality by one BLEU point. | 2014-07-01T00:00:00.000Z | 2009-08-02T00:00:00.000 | {
"year": 2009,
"sha1": "0c6a1993fbb02a21389e62305aa35fed3ef3fdcc",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1690279&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "0d06d9b59848ea48ef92d3a6e2b957cf5cfbae17",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
266044911 | pes2o/s2orc | v3-fos-license | Smirnov words and the Delta Conjectures
We provide a combinatorial interpretation of the symmetric function Θ_{e_k} Θ_{e_l} ∇ e_{n−k−l} |_{t=0} in terms of segmented Smirnov words. The motivation for this work is the study of a diagonal coinvariant ring with one set of commuting and two sets of anti-commuting variables, whose Frobenius characteristic is conjectured to be the symmetric function in question. Furthermore, this function is related to the Delta conjectures. Our work is a step towards a unified formulation of the two versions, as we prove a unified Delta theorem at t = 0.
Introduction
In the 1990s, Garsia and Haiman introduced the ring of diagonal coinvariants DR_n. The study of the structure of this S_n-module and its generalizations has been an important research topic in algebra and combinatorics ever since. There is in fact a rich interplay at work here. Various conjectural enumerative results on the ring (graded dimensions, multiplicities, etc.) indicate some beautiful underlying combinatorial structures. Understanding the structures leads to information on the ring, while generalizing them raises new questions on the algebraic side, leading in particular to the introduction of various new coinvariant rings. Then, some new combinatorics may emerge from those, and so on. This work will be motivated by a particular coinvariant ring in this realm, for which we unearth and explore the conjectural combinatorics in detail. This in turn gives in particular new insight on the algebraic side. Before we state our results, we need to briefly recall the story of DR_n and its extensions.
The ring DR_n is defined as follows: consider the space C[x_n, y_n] := C[x_1, . . ., x_n, y_1, . . ., y_n] and define an S_n-action by

σ · f(x_1, . . ., x_n, y_1, . . ., y_n) := f(x_{σ(1)}, . . ., x_{σ(n)}, y_{σ(1)}, . . ., y_{σ(n)})

for all f ∈ C[x_n, y_n] and σ ∈ S_n. Let I(x_n, y_n) be the ideal generated by the S_n-invariants with vanishing constant term. Then the ring of diagonal coinvariants is defined as

DR_n := C[x_n, y_n] / I(x_n, y_n).

This space has a natural bi-grading: let DR_n^{(i,j)} be the component of DR_n with homogeneous x-degree i and homogeneous y-degree j. This grading is preserved by the S_n-action. Garsia and Haiman conjectured, and Haiman later proved [13], a formula for the graded Frobenius characteristic of the diagonal harmonics:

grFrob(DR_n; q, t) := Σ_{i,j∈N} q^i t^j Frob(DR_n^{(i,j)}) = ∇ e_n,

where e_n is the n-th elementary symmetric function and ∇ is the operator depending on q, t introduced in [1].
In [11], the authors gave a combinatorial formula for this graded Frobenius characteristic ∇e_n in terms of labelled Dyck paths, called the shuffle conjecture, which is now a theorem by Carlsson and Mellit [3]. The Delta conjecture is a pair of combinatorial formulas for the symmetric function ∆′_{e_{n−k−1}} e_n in terms of decorated labelled Dyck paths, stated in [12], that reduces to the shuffle theorem when k = 0 (see (4) and (5)). Here ∆′_{e_j} is another operator on symmetric functions. This extension of the combinatorial setting led Zabrocki and, later on, D'Adderio, Iraci and Vanden Wyngaerd to introduce extensions of DR_n [5,26]. Consider the ring C[x_n, y_n, θ_n, ξ_n] := C[x_1, . . ., x_n, y_1, . . ., y_n, θ_1, . . ., θ_n, ξ_1, . . ., ξ_n], where the x_n, y_n are the usual commuting variables, and the θ_n, ξ_n are anti-commuting: θ_i θ_j = −θ_j θ_i and ξ_i ξ_j = −ξ_j ξ_i for all 1 ≤ i, j ≤ n.
Taking, as before, the quotient of this ring by the ideal generated by the S_n-invariants with vanishing constant term yields the module TDR_n, whose component of homogeneous degrees i, j, k, l in the four sets of variables we denote by TDR_n^{(i,j,k,l)}. We sometimes call this the (2, 2) case, referring to the 2 sets of commuting and 2 sets of anti-commuting variables. All these modules are actually special cases of a general family of bosonic-fermionic diagonal coinvariant modules, defined by the same construction with a sets of bosonic variables and b sets of fermionic variables. These modules, which are actually S_n × GL_a × GL_b modules, with the general linear groups acting on the matrices of variables, have been studied extensively and are tied to some very nice combinatorial and algebraic results. See for example [2] for a survey on the topic and the statement of the so-called combinatorial supersymmetry.
In [26] Zabrocki conjectured that, in the (2, 1)-case of two sets of bosonic and one set of fermionic variables,

grFrob(⊕_{i,j∈N} TDR_n^{(i,j,k,0)}; q, t) := Σ_{i,j∈N} q^i t^j Frob(TDR_n^{(i,j,k,0)}) ?= ∆′_{e_{n−k−1}} e_n,    (1)

where we recognize on the right-hand side the symmetric function of the Delta conjecture. In [5], D'Adderio, Iraci and Vanden Wyngaerd introduced the symmetric function operator Θ_f, for any symmetric function f, and showed that ∆′_{e_{n−k−1}} e_n = Θ_{e_k} ∇ e_{n−k}. This new operator permitted them to extend Zabrocki's conjecture to the (2, 2)-case:

grFrob(TDR_n; q, t) := Σ_{i,j∈N} q^i t^j Frob(TDR_n^{(i,j,k,l)}) ?= Θ_{e_l} Θ_{e_k} ∇ e_{n−k−l}.    (2)

This general conjecture is still open, but many special cases have been studied over the years. Other than the (1, 0)- and (0, 1)-cases, which are classical, there are several recent results. The (2, 0)-case, or the classical diagonal coinvariant case, was proven by Haiman to be a consequence of his famous n!-theorem [13], as recalled above. The (0, 2)-case, or fermionic Theta case, was proved by Iraci, Rhoades, and Romero in [14]. The (1, 1)-case, or the superspace coinvariant ring, is still open; Rhoades and Wilson showed in [21] that its Hilbert series agrees with the expected formula.
In this paper, we will turn our interest to the combinatorics of the (1, 2)-case. In other words, we are led through the conjecture (2) to study the symmetric function
$$\mathrm{SF}(n,k,l) := \left.\Theta_{e_k}\Theta_{e_l}\nabla e_{n-k-l}\right|_{t=0}. \qquad (3)$$
Our main result, Theorem 3.1, gives a combinatorial interpretation of this formula that is completely new, in terms of segmented Smirnov words and a pleasant q-statistic. We will also give a variant of this result which deserves to be dubbed the "unified Delta theorem at t = 0", based on a q-statistic that is more involved. We also rewrite the combinatorial expansion using fundamental quasisymmetric functions, and apply this to extract the coefficient of s_{1^n} in (3), which is conjecturally the Hilbert series of the sign-isotypic component of the (1, 2)-case.
Let us give more details about these results. A Smirnov word is a word in the alphabet of nonnegative integers such that no two adjacent letters are the same; see [23] for instance. A segmented Smirnov word is a concatenation of Smirnov words with prescribed lengths (see Definition 1.1). Then Theorem 3.1 is a monomial expansion of the symmetric function (3) as a generating function of segmented Smirnov words, where the exponent of q is given by a new sminversion statistic on these words (see Definition 1.7). In our formula, the values of k and l in (3) correspond to the number of ascents and descents in the Smirnov word. Our proof relies on an algebraic recursion (Theorem 2.3), which we show to be a consequence of a symmetric function identity in [8, Theorem 8.2]. We then show in Section 3 that the combinatorial side satisfies the same recursion, from which the theorem follows.
Now the full symmetric function on the right-hand side of (2) is of particular combinatorial interest, as it may provide a unified version of the two different Delta conjectures:
$$\Delta'_{e_{n-k-1}} e_n \stackrel{?}{=} \sum_{D\in \mathrm{LD}(n)^{*k}} q^{\mathrm{dinv}(D)} t^{\mathrm{area}(D)} x^D \qquad (4)$$
$$\Delta'_{e_{n-k-1}} e_n \stackrel{?}{=} \sum_{D\in \mathrm{LD}(n)^{\bullet k}} q^{\mathrm{dinv}(D)} t^{\mathrm{area}(D)} x^D \qquad (5)$$
The sets LD(n)^{∗k} and LD(n)^{•k} denote labelled Dyck paths of size n with k decorations on rises or valleys, respectively; and the statistics dinv and area depend on the decorations (see Section 4 for precise definitions). So (4) is referred to as the rise version and (5) as the valley version of the Delta conjecture. The rise version was recently proved in [7].
In [5], the first and third named authors of this paper, together with Michele D'Adderio, conjectured a partial formula for a possible unified Delta conjecture, for which they have significant computational evidence:
$$\left.\Theta_{e_l}\Theta_{e_k}\nabla e_{n-k-l}\right|_{q=1} \stackrel{?}{=} \sum_{D\in \mathrm{LD}(n)^{*k,\bullet l}} t^{\mathrm{area}(D)} x^D, \qquad (6)$$
where LD(n)^{∗k,•l} is the set of labelled Dyck paths of size n with k decorations on rises and l decorations on valleys. In order to get a fully unified Delta conjecture, one has to find a statistic qstat: LD(n)^{∗k,•l} → ℕ such that
$$\Theta_{e_l}\Theta_{e_k}\nabla e_{n-k-l} \stackrel{?}{=} \sum_{D\in \mathrm{LD}(n)^{*k,\bullet l}} q^{\mathrm{qstat}(D)} t^{\mathrm{area}(D)} x^D, \qquad (7)$$
and such that when k = 0 or l = 0, the formula would reduce to (4) or (5), respectively.
We give a partial answer to this question in Section 4, namely we provide such a q-statistic at t = 0: this is Theorem 4.1, our unified Delta theorem at t = 0. We describe an explicit bijection ϕ between segmented Smirnov words and elements of LD(n)^{∗k,•l} (doubly decorated labelled Dyck paths) with area equal to 0; we then introduce a variant of the q-statistic sminv on segmented Smirnov words, which we call sdinv, so that through the bijection ϕ we recover the known dinv statistic on decorated Dyck paths when k = 0 or l = 0, solving (7) when t = 0.
In the last sections, we return to the expansion in Theorem 3.1. In Section 5, we first make explicit the fundamental quasisymmetric function expansion of (3) in terms of our combinatorics (Proposition 5.3). This is then used to extract the coefficient of s_{1^n} in (3), which turns out to have a nice product formula, cf. Proposition 5.7. We finally discuss in Section 6 the special case n = k + l + 1, corresponding to an expansion in Smirnov words, that links our work with other areas of combinatorics: the chromatic quasisymmetric function of the path graph (Section 6.1), parallelogram polyominoes (Section 6.2), and noncrossing partitions (Section 6.3).
Combinatorial definitions
In this work Z_+ is the set of positive integers, and we will fix n ∈ Z_+. We write µ ⊨_0 n if µ is a weak composition of n, that is, µ = (µ_1, µ_2, . . .) where the µ_i are nonnegative integers that sum to n. A composition α ⊨ n is a finite sequence α = (α_1, . . ., α_s) of positive integers that sum to n.
Definition 1.1. A Smirnov word of length n is an element w ∈ Z_+^n such that w_i ≠ w_{i+1} for all 1 ≤ i < n. A segmented Smirnov word of shape α = (α_1, . . ., α_s) ⊨ n is an element w ∈ Z_+^n such that, if we write w = w^1 w^2 · · · w^s as a concatenation of subwords w^i of length α_i, then each w^i is a Smirnov word. We call w^1, . . ., w^s the blocks of w.
We usually simply denote a segmented Smirnov word by w, and omit mention of the shape α. In examples, we separate blocks by vertical bars. For instance 23|1242|2|31 is a segmented Smirnov word of length 9 with shape (2, 4, 1, 2). Note that, if w is a segmented Smirnov word, we allow w_i = w_{i+1} if they belong to different blocks.
Let SW(n) be the set of segmented Smirnov words of length n. Given µ ⊨_0 n, we denote by SW(µ) the set of segmented Smirnov words with content µ, meaning that their multiset of letters is {i^{µ_i} | i > 0} (i.e. they contain µ_1 occurrences of 1, µ_2 occurrences of 2, and so on).
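For concreteness, here is a minimal Python check of Definition 1.1; the helper name and the representation of a word as a tuple plus a shape are our own, not from the paper.

```python
def is_segmented_smirnov(word, shape):
    """Check that `word` (a tuple of positive integers) is a segmented
    Smirnov word of the given shape (a composition of len(word)):
    within each block, no two adjacent letters may be equal."""
    assert sum(shape) == len(word)
    pos = 0
    for length in shape:
        block = word[pos:pos + length]
        if any(a == b for a, b in zip(block, block[1:])):
            return False
        pos += length
    return True

# The example from the text: 23|1242|2|31 has shape (2, 4, 1, 2).
print(is_segmented_smirnov((2, 3, 1, 2, 4, 2, 2, 3, 1), (2, 4, 1, 2)))  # True
```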
It is convenient to introduce the following notation.
Definition 1.4. Consider the alphabet Z_+ ∪ {∞}, where ∞ is larger than all "finite" letters. Given a nonempty segmented Smirnov word w = (w^1, . . ., w^s), define the word a(w) in the alphabet Z_+ ∪ {∞} as the concatenation a(w) := ∞ w^1 ∞ w^2 ∞ · · · ∞ w^s ∞. If w is a segmented Smirnov word, we say that i is a peak, valley, double rise, or double fall of w if it is one in the word a(w).
We call a segmented permutation a segmented Smirnov word in SW(1^n), that is, a segmented Smirnov word whose letters are exactly the numbers from 1 to n. Thus segmented permutations are simply pairs (σ, α) with σ ∈ S_n and α ⊨ n, since the Smirnov condition is automatically satisfied.
Definition 1.5. Given a Smirnov word w, we say that i is an ascent of w if w_{i+1} > w_i, and a descent otherwise. If w is a segmented Smirnov word, we say that i is an ascent (resp. descent) of w if it is an ascent (resp. descent) of one of its blocks.
Let us denote by SW(n, k, l) the set of segmented Smirnov words with exactly k ascents and l descents. Note that such a segmented Smirnov word has exactly n − k − l blocks, as any index 1 ≤ i < n must be either an ascent, a descent, or the last letter of a block. For µ ⊨_0 n, we also define SW(µ, k, l) as the intersection SW(µ) ∩ SW(n, k, l).
We now introduce a statistic on segmented Smirnov words.
Definition 1.7. For a segmented Smirnov word w, we say that (i, j) with 1 ≤ i < j ≤ n is a Smirnov inversion (or sminversion) if w_i > w_j and one of the following holds:
1. w_j is the first letter of its block;
2. w_{j−1} > w_i;
3. i ≠ j − 1, w_{j−1} = w_i, and w_{j−1} is the first letter of its block;
4. i ≠ j − 1, w_{j−1} = w_i, and w_{j−2} > w_{j−1} (that is, j − 1 is a descent).
We denote by sminv(w) the number of sminversions of w, and set SW_q(µ, k, l) := $\sum_{w\in \mathrm{SW}(\mu,k,l)} q^{\mathrm{sminv}(w)}$.
Remark 1.8. Although we gave four cases in this definition in order to be unambiguous, there is a way to think about the sminversions as certain occurrences of the 2−31 pattern. Recall that a usual 2−31 occurrence is defined, for (not necessarily Smirnov) words a = a_1 · · · a_n, as a pair (i, j) with i < j − 1 that satisfies a_j < a_i < a_{j−1}.
It is easy to see that a usual 2−31 occurrence in a(w) corresponds to a sminversion in w of type (1) or (2). If we extend the notion of 2−31 occurrence to allow a_i = a_{j−1}, but only when a_{j−2} > a_{j−1}, then these extended occurrences in a(w) are precisely our sminversions in w.
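The four cases translate directly into code. Below is a sketch of sminv in Python with 0-indexed positions; case 4 is implemented via the descent condition of Remark 1.8, and the helper is our own, not from the paper.

```python
def sminv(word, shape):
    """Count the Smirnov inversions of Definition 1.7 (0-indexed sketch).
    `word` is a tuple of positive integers, `shape` a composition of its
    length; `initial` marks the first position of each block."""
    initial, pos = set(), 0
    for length in shape:
        initial.add(pos)
        pos += length
    total = 0
    for j in range(len(word)):
        for i in range(j):
            if word[i] <= word[j]:
                continue
            if j in initial:                                  # case 1
                total += 1
            elif word[j - 1] > word[i]:                       # case 2
                total += 1
            elif i != j - 1 and word[j - 1] == word[i] and (
                    j - 1 in initial                          # case 3
                    or word[j - 2] > word[j - 1]):            # case 4
                total += 1
    return total

# The word 1|21|12|121 used at the start of Example 3.8 has sminv 3.
print(sminv((1, 2, 1, 1, 2, 1, 2, 1), (1, 2, 2, 3)))  # 3
```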
Smirnov inversions actually extend inversions on ordered multiset partitions. For µ ⊨_0 n and r ∈ ℕ, an ordered multiset partition with content µ and r blocks is a partition of the multiset {i^{µ_i} | i > 0} into r ordered subsets, called blocks (see for example [25]). We denote by OP(µ, r) the set of such ordered multiset partitions, and we recall here the definition of a statistic on this set of objects.

Definition 1.10. For an ordered set partition π = (π_1, . . ., π_r), an inversion is a pair of elements a > b such that a ∈ π_i, b = min π_j, and i < j. We denote by inv(π) the number of inversions of π.
When the blocks of a segmented Smirnov word w are strictly monotone (which is the case when k = 0 or l = 0), let Π(w) denote the ordered multiset partition obtained by forgetting the order of the letters within each block.

Proposition 1.11. The map Π restricts to bijections SW(µ, k, 0) → OP(µ, n − k) and SW(µ, 0, l) → OP(µ, n − l), sending sminv to inv in both cases.
Proof. When k = 0 or l = 0, the blocks of a segmented Smirnov word are strictly increasing or decreasing, and thus become sets via Π. Thus, the restrictions of Π are well-defined, and it is clear that they are bijections in both cases.
Let w ∈ SW(µ, k, 0). Since there are no descents, the blocks of w are strictly increasing. Let (i, j) be a sminversion in w: then, since the blocks of w are strictly increasing, j is necessarily initial. It is therefore the minimum of its block, and so w_i > w_j gives an inversion in the corresponding ordered set partition. Conversely, if π ∈ OP(µ, n − k), writing the blocks of π in increasing order yields a segmented Smirnov word with no descents, and the inversions of π correspond to sminversions for the same reason.

Let now w ∈ SW(µ, 0, l). Since there are no ascents, the blocks of w are strictly decreasing. Let (i, j) be a sminversion in w: then, since the blocks of w are strictly decreasing, w_j is necessarily maximal among the block letters smaller than w_i. It follows that j is the unique index in its block forming a sminversion with i. Furthermore, in each block to the right of i, there is one such index if and only if the final letter of the block, which is also its minimum, is smaller than w_i. It follows that each sminversion in w corresponds to exactly one inversion in the corresponding ordered set partition. Conversely, if π ∈ OP(µ, n − l), writing the blocks of π in decreasing order yields a segmented Smirnov word with no ascents, and the inversions of π correspond to sminversions for the same reason.
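A quick computational illustration of Π and inv (helper names are ours; sminv is the sketch from above):

```python
def Pi(word, shape):
    """Forget the order within each block (well defined as an ordered set
    partition when the blocks are strictly monotone)."""
    blocks, pos = [], 0
    for length in shape:
        blocks.append(frozenset(word[pos:pos + length]))
        pos += length
    return tuple(blocks)

def inv_osp(pi):
    """inv of Definition 1.10: pairs a > b with a in an earlier block
    and b the minimum of a later block."""
    return sum(a > min(pi[j])
               for i in range(len(pi))
               for j in range(i + 1, len(pi))
               for a in pi[i])

w, shape = (1, 3, 2, 4), (2, 2)      # blocks 13|24: two ascents, no descent
print(sminv(w, shape), inv_osp(Pi(w, shape)))  # 1 1, as Proposition 1.11 predicts
```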
Algebraic recursion
For standard notations and conventions for symmetric functions, we refer the reader to [10, 18, 24]. The Theta operators Θ_f are introduced in [5]. For example, H_λ denotes the modified Macdonald polynomial of shape λ. For any symmetric function f, the operator f^⊥ is such that ⟨f^⊥ g, h⟩ = ⟨g, f h⟩ for all symmetric functions g, h, where ⟨ , ⟩ is the Hall scalar product, which can be defined by declaring the basis of Schur functions to be orthonormal. The symmetric functions E_{n,k} are defined via the expansion
$$e_n\!\left[X\frac{1-z}{1-q}\right] = \sum_{k=0}^{n} \frac{(z;q)_k}{(q;q)_k}\, E_{n,k},$$
where we made use of plethystic notation and the q-Pochhammer symbol. Setting z = q, we get that $\sum_{k=0}^{n} E_{n,k} = e_n$. The symmetric function ∇E_{n,k} provides a nice combinatorial refinement of the famous shuffle theorem, in which k specifies the number of times the Dyck path returns to the diagonal.
Our main algebraic recursion is the specialization t = 0 of [8,Theorem 8.2].We recall here the statement.
Let us fix the following notation: We need the following lemma.
In our case, we have that , so the thesis follows.
In the rest of this section, we will show that we can rewrite the recursion as follows.
The initial conditions are trivial.In order to remain sane when dealing with all these recursions, we will prove this theorem through a series of intermediate equalities.First, we recall two useful identities.
From now on, we use the notation B := n − k − l (the number of blocks) to make our formulae more compact.
Proof.Using Lemma 2.2 and Theorem 2.1, we have as desired.We used a → B − a in the second equality and r → j − r, a → j − a in the last one.
We are now ready to prove Theorem 2.3.
Proof of Theorem 2.3.First, we apply Proposition 2.4 to Lemma 2.6, getting the following.
and we use Proposition 2.5 with x = B, y = j − r, and z = a − i (and the corresponding off-by-one term), and get Using again Proposition 2.5 with x = B −(j −r −a)−1, y = r, and z = r −i (and the corresponding off-by-one term), we get and finally as desired.
It is convenient to rewrite the statement as follows.
Let us define the power series $\mathrm{SW}_{x;q}(n,k,l) := \sum_{w\in \mathrm{SW}(n,k,l)} q^{\mathrm{sminv}(w)} x^w$, where $x^w = \prod_{i=1}^{n} x_{w_i}$. Our goal is to show the following theorem, which is our first main result:

Theorem 3.1. For any n, k, l with k + l < n, we have the identity SF(n, k, l) = SW_{x;q}(n, k, l).

The proof is given in Section 3.2, and is a bit technical. We first give a special case of the result which serves to illustrate some of the combinatorics involved, and is interesting in itself.
The standard case
We take the inner product of the quantity in Theorem 3.1 with h_{1^n}; that is, we want to show ⟨SF(n, k, l), h_{1^n}⟩ = SW_q(1^n, k, l). Recall that the latter is the q-enumerator for segmented permutations with k ascents and l descents. By specializing Corollary 2.7 to µ = 1^n, so that j = 1 in that statement, it is readily shown that we have to prove the following recursive formula for SW_q(1^n, k, l).

Proposition 3.2. For any n, k, l with k + l < n, the polynomials SW_q(1^n, k, l) satisfy the recursion
$$\mathrm{SW}_q(1^n,k,l) = [n-k-l]_q \big( \mathrm{SW}_q(1^{n-1},k,l) + \mathrm{SW}_q(1^{n-1},k-1,l) + \mathrm{SW}_q(1^{n-1},k,l-1) + \mathrm{SW}_q(1^{n-1},k-1,l-1) \big), \qquad (8)$$
with initial conditions SW_q(∅, k, l) = δ_{k,0} δ_{l,0}.
Before getting started with the proof, it is worth noticing that in this case the definition of sminv in Definition 1.7 simplifies significantly: since there are no equal values, we only have cases 1 and 2 to consider, which are essentially 2 − 31 patterns, cf.Remark 1.8.
Proof. Initial conditions are trivial, as the only segmented permutation on 1 element has no ascent or descent. To get a recursive formula, we will insert the value n in a segmented permutation on n − 1 elements. This can be done in four distinct ways:
• as a new singleton block (keeping the number of ascents and descents the same, and increasing the number of blocks by one), or
• at the beginning of a block (creating no ascent and one descent, and keeping the number of blocks the same), or
• at the end of a block (creating one ascent and no descent, and keeping the number of blocks the same), or
• replacing a block separator (creating both an ascent and a descent, and decreasing the number of blocks by one).
At the end, we want a permutation σ in SW(1^n, k, l), to prove (8). So in these four cases the starting permutation must belong to SW(1^{n−1}, k, l), SW(1^{n−1}, k, l−1), SW(1^{n−1}, k−1, l), and SW(1^{n−1}, k−1, l−1), respectively, and this corresponds to the four terms on the right-hand side of Equation (8).
Note that σ has B := n − k − l blocks.We claim that each of these four types of insertion can be done in exactly B different ways.
Indeed, in the first case, the starting segmented permutation must have B − 1 blocks, so we can insert the new singleton block n in B different positions.In the second (resp.third) case, the starting segmented permutation has B blocks, and we can insert n at the beginning (resp.end) of each of these blocks.In the last case, the starting segmented permutation has B + 1 blocks, so it has B block separators, and we can replace any of them with n.
In all four cases, we thus have B possibilities of insertion, and n forms a sminversion with all and only the initial positions to its right, thus contributing the factor [B]_q = 1 + q + · · · + q^{B−1}. Moreover, since n is the biggest entry so far, all the existing sminversions remain sminversions (suitably shifted by the position of n), as is easily checked. This proves the recursive formula (8).
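Assuming the recursion (8) as reconstructed above, it can be checked by brute force for small n, reusing the sminv sketch from Section 1 (all other helper names here are ours):

```python
from itertools import permutations

def compositions(n):
    """All compositions of n as tuples of positive parts."""
    if n == 0:
        yield ()
    for head in range(1, n + 1):
        for tail in compositions(n - head):
            yield (head,) + tail

def sw_q(n, k, l):
    """SW_q(1^n, k, l) by brute force, as a dict {exponent: coefficient}."""
    poly = {}
    for shape in compositions(n):
        for sigma in permutations(range(1, n + 1)):
            asc = desc = 0
            pos = 0
            for length in shape:
                blk = sigma[pos:pos + length]
                asc += sum(x < y for x, y in zip(blk, blk[1:]))
                desc += sum(x > y for x, y in zip(blk, blk[1:]))
                pos += length
            if (asc, desc) == (k, l):
                e = sminv(sigma, shape)
                poly[e] = poly.get(e, 0) + 1
    return poly

def p_add(*ps):
    out = {}
    for p in ps:
        for e, c in p.items():
            out[e] = out.get(e, 0) + c
    return out

def p_mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return out

n, k, l = 5, 2, 1
lhs = sw_q(n, k, l)
rhs = p_mul({e: 1 for e in range(n - k - l)},   # [n-k-l]_q
            p_add(sw_q(n - 1, k, l), sw_q(n - 1, k - 1, l),
                  sw_q(n - 1, k, l - 1), sw_q(n - 1, k - 1, l - 1)))
print(lhs == rhs)  # True if the recursion (8) holds
```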
Proposition 3.2 is of interest in itself because it provides a combinatorial formula for the polynomial corresponding to the conjectured Hilbert series of the (1, 2)-coinvariant module. We now tackle the general case, which gives the full conjectured Frobenius characteristic of the same module.
The general case
In order to better understand the following proof, the reader is invited to check Example 3.8 and Example 3.9 while reading it.As in the standard case above, we have four kinds of insertions: the difference is that now the maximal letter can be inserted multiple times, and we have to be a little careful about the order in which we perform the insertions.We will detail the approach here because it is going to be convenient when we come back to it in Section 4.
There are four kinds of insertions, depending on whether m is a peak (replacing a block separator), a double fall (at the beginning of a block), a double rise (at the end of a block), or a valley (in a new, singleton block) in the new word w̃; for each kind of insertion, we have to show that sminversions of w remain sminversions in w̃, and we have to keep track of the number of new sminversions the occurrences of m form with the remaining letters of w. The way we do so is by ordering the possible insertion positions from right to left, corresponding to increasing contributions, and then computing the relevant q-enumerators.
Let us start with a preliminary lemma.
Lemma 3.3. Let w ∈ SW(n), and let m ≥ max w. Let w̃ ∈ SW(n + 1) be a word obtained from w by inserting an occurrence of m as a double fall, double rise, or singleton. Then, if (i, j) forms a sminversion in w, the corresponding indices in w̃ also form a sminversion. If instead m is inserted as a peak, the same holds as long as m > max w.
Proof. Suppose that (i, j) is a sminversion in w; it is clear that, if m is inserted anywhere except in position j, then the corresponding pair remains a sminversion. Suppose now that m is inserted in position j: we want to show that (i, j + 1) is a sminversion of w̃. If w_i < m, we now have w̃_i < w̃_j and w̃_i > w̃_{j+1}, and so (i, j + 1) forms a sminversion. If w_i = m, then the newly inserted m cannot be a peak (because (i, j) formed a sminversion in w and m = max w), and so either m is a valley or a double rise, in which case j + 1 is initial (as in Definition 1.7, case 1), or m is inserted as a double fall, in which case it must be initial, and so (i, j + 1) is a sminversion of the kind of Definition 1.7, case 3.
It is apparent from Lemma 3.3 that we should insert peaks first, as doing so when there are already occurrences of m in w can affect the number of sminversions.
Lemma 3.4. Let w ∈ SW(n, k, l), and let m > max w. The q-enumerator with respect to sminv of the words obtained from w by inserting s occurrences of m as a peak (i.e. replacing a block separator) is $\binom{n-k-l-1}{s}_q\, q^{\mathrm{sminv}(w)}$.

Proof. Let w̃ be a word obtained as in the statement. Since m > max w, conditions 2, 3 and 4 in Definition 1.7 never happen, and so each occurrence of m forms a sminversion with each and only the initial letters of the blocks of w̃ to its right. Since w has n − k − l − 1 block separators, or equivalently w̃ has n − k − l − s blocks, these sminversions are q-counted by $\binom{n-k-l-1}{s}_q$, which concludes the proof.

Lemma 3.5. Let w ∈ SW(n, k, l), and let m ≥ max w, with no initial m in w. The q-enumerator with respect to sminv of the words obtained from w by inserting s occurrences of m as a double fall (i.e. as an initial element) is $q^{\binom{s}{2}} \binom{n-k-l}{s}_q\, q^{\mathrm{sminv}(w)}$.

Proof. As in Lemma 3.4, each occurrence of m forms a sminversion with each and only the initial letters of the blocks of w (or w̃, as they have the same blocks) to its right. Unlike Lemma 3.4, however, we can insert at most one occurrence of m in each block. Since w has n − k − l blocks, the new sminversions are q-counted by $q^{\binom{s}{2}} \binom{n-k-l}{s}_q$, which concludes the proof.
Lemma 3.6. Let w ∈ SW(n, k, l), and let m ≥ max w with no final m in w. The q-enumerator with respect to sminv of the words obtained from w by inserting s occurrences of m as a double rise (i.e. as the final element of a block) is $q^{\binom{s}{2}} \binom{n-k-l}{s}_q\, q^{\mathrm{sminv}(w)}$.

Proof. As in Lemma 3.5, each occurrence of m forms a sminversion with each and only the initial letters of the blocks of w to its right, and we can insert at most one occurrence of m in each block. Since w has n − k − l blocks, the new sminversions are q-counted by $q^{\binom{s}{2}} \binom{n-k-l}{s}_q$, which concludes the proof.

Lemma 3.7. Let w ∈ SW(n, k, l), and let m ≥ max w with no singleton m in w. The q-enumerator with respect to sminv of the words obtained from w by inserting s occurrences of m as a valley (i.e. as a singleton block) is $\binom{n-k-l+s}{s}_q\, q^{\mathrm{sminv}(w)}$.

Proof. Each occurrence of m creates exactly one sminversion with each block of w to its right. Indeed, since m is maximal, conditions 2 and 4 in Definition 1.7 never occur; if a block to the right of our occurrence of m begins with an entry that is strictly less than m, then condition 1 occurs and we have a sminversion; if it begins with an m, then the following letter (which exists because the block is not a singleton) is strictly smaller than m, so condition 3 occurs. Since w has n − k − l blocks, we have n − k − l + 1 positions where we can insert an m, each creating a number of new sminversions equal to the number of blocks of w to its right, for a total contribution of $\binom{n-k-l+s}{s}_q$, as desired.
We are now ready to put the pieces together.
Our goal is to show that the right-hand side satisfies the same recurrence as the left-hand side, that is, the one given in Corollary 2.7. In other words, we have to show that, for any nonzero µ ⊨_0 n, with m maximal such that j := µ_m > 0, and µ⁻ := (µ_1, . . ., µ_{m−1}, 0, . . .), we have
$$\mathrm{SW}_q(\mu,k,l) = \sum_{r,a,i} \binom{n-j-(k-r)-(l-a)-1}{i}_q q^{\binom{a-i}{2}}\binom{n-k-l-(j-r-a+i)}{a-i}_q q^{\binom{r-i}{2}}\binom{n-k-l-(j-r-a+i)}{r-i}_q \binom{n-k-l}{j-r-a+i}_q \mathrm{SW}_q(\mu^-,k-r,l-a),$$
with initial conditions SW_q(∅, k, l) = δ_{k,0} δ_{l,0}.
It is clear that the initial conditions are satisfied, as the only word of length 0 has no ascents or descents. If w ∈ SW(µ, k, l), then m is the greatest letter appearing in w, and it appears exactly j times. We interpret the variables appearing in the recursion as follows:
• i is the number of occurrences of m that are neither at the beginning nor at the end of a block;
• a − i is the number of occurrences of m that are at the beginning of a block of size at least 2;
• r − i is the number of occurrences of m that are at the end of a block of size at least 2;
• j − r − a + i is the number of occurrences of m that are singletons.
We proceed backwards, starting from a word w ∈ SW(µ⁻, k − r, l − a) for all the possible values of r and a, and inserting j occurrences of m.
First, for each possible value of i, we insert i peaks. By Lemma 3.4, since w has n − j − (k − r) − (l − a) blocks, this yields a contribution of $\binom{n-j-(k-r)-(l-a)-1}{i}_q$, and we are left with a word with n − k − l − (j − r − a + i) blocks, k − (r − i) ascents, and l − (a − i) descents.
Next, we insert a − i double falls and r − i double rises. By Lemma 3.5 and Lemma 3.6, the former yields a contribution of $q^{\binom{a-i}{2}} \binom{n-k-l-(j-r-a+i)}{a-i}_q$, and the latter a contribution of $q^{\binom{r-i}{2}} \binom{n-k-l-(j-r-a+i)}{r-i}_q$. We are left with a word with the same number of blocks, k ascents, and l descents.
Finally, we insert j − r − a + i singletons. By Lemma 3.7, this yields a contribution of $\binom{n-k-l}{j-r-a+i}_q$, and we are left with a word in SW(n, k, l), as expected. By construction, each of these words appears exactly once in this process, so the thesis follows.
We now show two examples for this process: in Example 3.8 we start from a small word and go through all the possible insertions; in Example 3.9 we start from a bigger word and choose one possible insertion.
Example 3.8. We start by picking an element w_0 ∈ SW(µ⁻, k − r, l − a) = SW((5, 3), 2, 2), namely w_0 = 1|21|12|121; its sminv equals 3. From this word, we will build some elements of SW(µ, k, l) = SW((5, 3, 6), 5, 6) that reduce to w_0 by deleting the maximal letters that are the first or last letters of their blocks, deleting the singleton blocks containing a maximal letter, and replacing all other maximal letters with a block separator. We will keep track of the number of sminversions during the process.
• Since we took i = 2, we must insert 2 maximal letters that are neither the first nor the last letter of their block. The word has 3 block separators; we choose i = 2 among them and replace them with a maximal letter, and the extra sminversions are accounted for by the factor $\binom{3}{2}_q$. We continue our example with the word 1321312|121, of sminv equal to 3 + 2 = 5.
• Now we insert a − i = 2 maximal letters at the beginning of blocks. Since an element of SW(µ⁻, k − r, l − a) has n − j − (k − r) − (l − a) = 4 blocks, and we removed i = 2 block separators at the previous step, our words now have n − k − l − (j − r − a + i) = 2 blocks, of which we must choose a − i = 2. Doing this in the only way possible, we get 31321312|3121, of sminv equal to 5 + 1 = 6, the expected number of sminversions.
• Now we insert r − i = 1 maximal letter at the end of blocks.
• Finally, we insert j − r − a + i = 1 maximal letter in a singleton block.
Example 3.9. Consider the word shown in the original figures (omitted here): disks indicate the possible peak insertion positions, the filled ones being those chosen; then the insertion positions of potential and chosen initial elements are illustrated, then those of potential and actual final elements, and finally the possible insertion positions of singleton blocks, with an index giving the number of such blocks inserted there. In all, we successively inserted 2, 2, 3 and 5 letters, corresponding to the values i = 2, a = 5, r = 4 and j = 12 in the proof. This led from a word w_0 in SW(16, 6, 3) to a word w_4 in SW(28, 10, 8).
4 Unified Delta theorem at t = 0

Theorem 3.1 gives us a combinatorial interpretation of SF(n, k, l) in terms of segmented Smirnov words. Recall that SF(n, k, l) = (Θ_{e_k} Θ_{e_l} ∇e_{n−k−l})|_{t=0} = (Θ_{e_k} Θ_{e_l} H_{(n−k−l)})|_{t=0}, the latter equality being [14, Lemma 3.6] (it also follows from Lemma 2.2 by taking a sum over r). We recognize this as the symmetric function of the unified Delta conjecture (7), evaluated at t = 0. In this section, we interpret a variant of the combinatorics of Theorem 3.1 in the setting of labelled Dyck paths, which was the original motivation. As we will see, our result reinforces the idea that there should be a q-statistic on labelled Dyck paths with both decorated rises and decorated contractible valleys that extends both the rise and the valley version of the Delta conjecture.
We will define a dinv statistic on decorated Dyck paths of area zero and show the following.
The Delta conjecture
It is now time to give some more precise definitions of the combinatorics of the Delta conjecture.
Definition 4.2. A Dyck path of size n is a lattice path starting at (0, 0), ending at (n, n), using only unit North and East steps, and staying weakly above the line x = y. A labelled Dyck path is a Dyck path together with a positive integer label on each of its vertical steps, such that labels on consecutive vertical steps are strictly increasing (from bottom to top). We will draw the label of a vertical step in the square to its right.
A rise of a labelled Dyck path is a vertical step that is preceded by another vertical step.
A valley of a labelled Dyck path is a vertical step v preceded by a horizontal step.A valley v is contractible if it is either preceded by two horizontal steps, or by a horizontal step that is itself preceded by a vertical step whose label is strictly smaller than v's label.
A decorated labelled Dyck path is a labelled Dyck path, together with a choice of rises and contractible valleys, which are decorated. We denote by DRise(D) and DValley(D) the sets of decorated rises and decorated valleys of a path D. We decorate rises with a ∗ and valleys with a •, and these decorations will be displayed in the square to the left of the vertical step. The set of decorated labelled Dyck paths of size n with k decorated rises and l decorated valleys is denoted by LD(n)^{∗k,•l}.

Definition 4.3 (Area). Given a decorated labelled Dyck path D of size n, its area word is the word of non-negative integers whose i-th letter equals the number of whole squares between the i-th vertical step of the path and the line x = y. If a is the area word of D, the area of D is area(D) := $\sum_{i\in[n]\setminus \mathrm{DRise}(D)} a_i$.
See Figure 1 for an example of an element in LD(8)^{∗2,•2}.

Definition 4.4 (Diagonal inversions). Take a decorated labelled Dyck path D of size n with area word a. Let l be the word such that l_i is the label of the i-th vertical step of D. For 1 ≤ i < j ≤ n, we say that
• (i, j) is a primary diagonal inversion if a_i = a_j, l_i < l_j, and i ∉ DValley(D);
• (i, j) is a secondary diagonal inversion if a_i = a_j + 1, l_i > l_j, and i ∉ DValley(D).
For µ ⊨_0 n, let LD(µ)^{∗k,•l} be the subset of Dyck paths in LD(n)^{∗k,•l} with content µ, that is, their multiset of labels is {i^{µ_i} | i > 0}. For D ∈ LD(µ)^{∗k,•l}, let x^D := x^µ. Then the Delta conjecture states that
$$\Delta'_{e_{n-k-1}} e_n = \sum_{D\in \mathrm{LD}(n)^{*k}} q^{\mathrm{dinv}(D)} t^{\mathrm{area}(D)} x^D = \sum_{D\in \mathrm{LD}(n)^{\bullet k}} q^{\mathrm{dinv}(D)} t^{\mathrm{area}(D)} x^D,$$
where the first equality (the rise version) is now a theorem [7].
From segmented Smirnov words to decorated Dyck paths
We have shown in Proposition 1.11 that, when k = 0 or l = 0, segmented Smirnov words reduce to ordered set partitions.Combining results from various papers in the literature [12,19,20,25], it has been shown that ordered set partitions have several equidistributed statistics, namely inv, dinv, maj and minimaj, which in turn have been shown to bijectively match all the specializations at q = 0 or t = 0 of the various versions of the Delta conjecture.
For reasons that will be clear in a moment, we are mainly interested in the inv and dinv statistics.We recalled the definition of the former in Definition 1.10; the latter is as follows.
Definition 4.6. For an ordered set partition π = (π_1, . . ., π_r) ∈ OP(µ, r), a diagonal inversion is a triple (h, i, j) such that either i < j and the h-th smallest element in π_i is strictly greater than the h-th smallest element in π_j, or i > j and the (h + 1)-th smallest element in π_i is strictly greater than the h-th smallest element in π_j. We denote by dinv(π) the number of diagonal inversions of π.
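Definition 4.6 translates directly into code; in the sketch below (our helper, with the index h shifted to start at 0):

```python
def dinv_op(pi):
    """dinv on ordered set partitions (Definition 4.6), with the index h
    of the definition shifted to be 0-indexed."""
    blocks = [sorted(b) for b in pi]
    count = 0
    for i in range(len(blocks)):
        for j in range(len(blocks)):
            if i < j:
                # h-th smallest in block i vs h-th smallest in block j
                for h in range(min(len(blocks[i]), len(blocks[j]))):
                    count += blocks[i][h] > blocks[j][h]
            elif i > j:
                # (h+1)-th smallest in block i vs h-th smallest in block j
                for h in range(min(len(blocks[i]) - 1, len(blocks[j]))):
                    count += blocks[i][h + 1] > blocks[j][h]
    return count

print(dinv_op(({2}, {1, 3})))  # 2: one pair of each kind
```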
Let LD_0(n)^{∗k,•l} be the subset of area 0 Dyck paths in LD(n)^{∗k,•l}, that is, the subset of paths where every rise is decorated and every valley lies on the main diagonal; see for example the path on the right in Figure 1. Also let LD_0(µ)^{∗k,•l} be defined analogously. We care in particular about the bijections (11) and (12), respectively mapping the dinv and inv statistics on ordered set partitions to the dinv on Dyck paths. See Figure 2 for an example or [12, Proposition 4.1] for more details.

Remark 4.7. While not relevant to our goal, we would like to point out that (11) also sends minimaj to pmaj, thus proving the q = 0 specialization of the pmaj version of the Delta conjecture [6, Conjecture 2.5]; while the proof of this fact is trivial, the authors are not aware of it being mentioned anywhere in the literature. The case t = 0 of the same conjecture remains open.
Using again an insertion method, we will show a bijection between segmented Smirnov words of size n with k ascents and l descents, and labelled Dyck paths of size n with k decorated rises and l decorated contractible valleys, that matches (11) and (12) respectively when l = 0 or k = 0.
Recall that paths in LD_0(n)^{∗k,•l} can be easily characterized as being concatenations of paths of the form N^i E^i in which all rises are decorated (see Figure 1, right). This precisely ensures that the area is zero, see Definition 4.3.
For a path in LD_0(µ)^{∗k,•l}, the maximal label m occurs j times by definition. Since m is maximal, it can only occur as the top label of consecutive vertical steps. There are four distinct possibilities, illustrated in Figure 3, top:
1. m labels a rise followed by a decorated valley.
2. m labels a rise followed by a non-decorated valley (or is the last north step of the path).
3. m labels a north step on the diagonal, which is a decorated valley.
4. m labels a north step on the diagonal, which is not decorated.
To delete m and get a path in LD_0(µ⁻), delete the north step carrying m and the following east step. In the first case, also delete the rise decoration on that step and the valley decoration on the following north step; in the second and third case, also delete the (rise or valley) decoration on that step. See Figure 3 for an illustration. We want to understand how to reverse this procedure, and insert the labels m in a path in LD_0(µ⁻).
Let us call blocks of a decorated path with area equal to 0 the subpaths separated by the non decorated north steps on the main diagonal: then all north steps of a block are decorated except the first one, so that a path in LD 0 (n, k, l) will have n − k − l blocks.
By the inductive hypothesis, ϕ : SW(µ⁻) → LD_0(µ⁻) is a bijection; we strengthen the inductive hypothesis by also asking that the blocks of a Dyck path correspond to the blocks of the associated segmented Smirnov word. We want to insert j north steps labelled m to get a path in LD_0(µ). We do these insertions successively, according to the four types of such labels listed above.
First, for each block of D ′ = D (0) that is not the last one, consider the last north step: one can insert a decorated rise labeled m, followed by an east step, and decorate the following valley: it was not decorated since it is between two blocks, and is contractible since it is preceded by at least two east steps.In total let i be the number of insertions of this form; call the resulting path D (1) .Each of these insertions joins two consecutive blocks of D ′ , and this corresponds to a peak insertion in w ′ = w (0) : call w (1) the word obtained by joining the corresponding blocks of w ′ , by replacing their block separators with an occurrence of m.
Then, for each block of D (1) , consider the last north step: one can insert a decorated rise labeled m, followed by an east step.Let r − i be the number of such insertions, and D (2) be the resulting path.This corresponds to a double rise insertion in the corresponding segmented Smirnov word, that is, inserting an occurrence of m at the end of the corresponding blocks of w (1) ; call w (2) the word obtained this way.Notice that, if l = 0, this corresponds to the bijection shown in Figure 2.
Next, for each block of D (2) , one can extend it by adding a north step labelled m on the diagonal, followed by an east step, and decorate this new north step which must be a contractible valley.Indeed, if the new valley is preceded by a rise, then it is always contractible; if not, since m is maximal and there are no occurrences of m on the main diagonal, then the new valley is necessarily contractible.Let a − i be the number of insertions of this form, and call the resulting path D (3) .This corresponds to a double fall insertion in the corresponding segmented Smirnov word, that is, inserting an occurrence of m at the beginning of the corresponding blocks of w (2) ; call w (3) the word obtained this way.Once again, if k = 0, this corresponds to the bijection shown in Figure 2.
Finally, one can insert singleton blocks formed by a north step labeled m on the diagonal followed by an east step among all blocks of D (3) .Several such singleton blocks can be inserted between two consecutive blocks of D (3) , as well as at the beginning and end of it.There must be j − r − a + i such insertions, so that j labels m occur in the final path D (4) = D.This corresponds to a valley insertion in the corresponding segmented Smirnov word, that is, inserting an occurrence of m as a singleton in between the corresponding blocks of w (3) ; call w (4) = w the word obtained this way.When k = 0 or l = 0, this also corresponds to the bijection shown in Figure 2.
We define ϕ(w) := D. Since each step of this construction is reversible, and we already showed in Theorem 3.1 that segmented Smirnov words can be constructed this way (recall that double rise and double fall insertions commute), the thesis follows. Moreover, we showed that, when k = 0 or l = 0, this bijection restricts to the ones shown in Figure 2, as desired.
Another statistic on segmented Smirnov words
We now need to introduce a slightly tweaked version of our statistic on segmented Smirnov words, which better matches the combinatorics of the Delta conjecture.In order to do so, we need some preliminary definitions.
Definition 4.9. Let w ∈ SW(n) and let m ∈ ℕ. For 1 ≤ i ≤ n, let ht_m(i) be the number of letters of w strictly smaller than m that are in between w_i and the first thing to the left of w_i that is either a letter strictly greater than m or a block separator.
Definition 4.10.For a segmented Smirnov word w, we say that (i, j) with 1 ≤ i, j ≤ n is a diagonal inversion if w i > w j and one of the following holds: 1. i is not a peak, i < j, ht wi (i) = ht wi (j), and if j = i + 1 then j must be initial; 2. i is not a peak, i > j + 1, and ht wi (i) = ht wi (j) + 1; 3. i is a peak and (i, j) is a sminversion.
We denote by sdinv(w) the number of diagonal inversions of w, and set $\widetilde{\mathrm{SW}}_q(\mu, k, l) := \sum_{w\in \mathrm{SW}(\mu,k,l)} q^{\mathrm{sdinv}(w)}$.
Remark 4.11.It is worth noticing that, if i is not a double rise, then (i, j) is a diagonal inversion if and only if it is a sminversion.Indeed, if i is a valley or a double fall, then i is either initial or a descent, and ht wi (i) = 0; it follows that, in Definition 4.10 condition 2 is never satisfied, and condition 1, namely ht wi (j) = 0, reduces to one of the four conditions in Definition 1.7.If i is a peak instead, then diagonal inversions are sminversions by definition.
So the pairs (i, j) that form diagonal inversions are ( (for which w i = 2) and (4, 8) (for which w i = 3).Thus the sdinv of this word is 10.
Example 4.13. Let us compute sminv and sdinv for all the elements of SW((2, 1)): the resulting table (omitted in this extraction) shows that the two statistics are equidistributed on this set.

Indeed, by construction the sdinv statistic extends both inv and dinv on ordered multiset partitions; recall the map Π defined before Proposition 1.11.

Proposition 4.14. The map Π restricts to bijections sending sdinv to dinv when l = 0, and sdinv to inv when k = 0.
Proof. Proposition 1.11 shows that these are bijections, and since the statistics sminv and sdinv coincide when k = 0, we only have to deal with the case l = 0.
Let w ∈ SW(µ, k, 0).Since there are no descents, the blocks of w are strictly increasing.It follows that, for every i, if m ≥ w i , then ht m (i) = h is the same as saying that w i is the h th smallest element in its block, and so the conditions in Definition 4.10 are the same as the ones in the definition of dinv on ordered multiset partitions.The thesis follows.
Equidistribution of sminv and sdinv
The statistics sminv and sdinv on segmented Smirnov words are equidistributed. Indeed, if we let $\widetilde{\mathrm{SW}}_{x;q}(n, k, l) := \sum_{w\in \mathrm{SW}(n,k,l)} q^{\mathrm{sdinv}(w)} x^w$, we have the following.
Theorem 4.15. For any n, k, l with k + l < n, we have the identity $\widetilde{\mathrm{SW}}_{x;q}(n, k, l) = \mathrm{SW}_{x;q}(n, k, l)$. As before, we split the proof in several lemmas.
Lemma 4.16. Let w ∈ SW(n), and let m ≥ max w. Let w̃ ∈ SW(n + 1) be a word obtained from w by inserting an occurrence of m as a double fall, double rise, or singleton. Then, if (i, j) forms a diagonal inversion in w, the corresponding indices in w̃ also form a diagonal inversion. If instead m is inserted as a peak, the same holds as long as m > max w.

Proof. Suppose that (i, j) is a diagonal inversion in w. If m is not inserted as a peak, then ht_a does not change on any letter of w for all a ≤ m, so if two letters of w form a diagonal inversion, they will also form a diagonal inversion in w̃. If m is inserted as a peak instead (i.e. replacing a block separator), then ht_a does not change on any letter of w for all a < m, but since every letter of w is strictly smaller than m, if two letters of w form a diagonal inversion, they will also form a diagonal inversion in w̃.
By Remark 4.11, for peaks, double falls, and valleys, sminversions and diagonal inversions coincide. It follows that Lemma 3.4, Lemma 3.5, and Lemma 3.7 also hold for sdinv, the proof being exactly the same. For the double rise insertion, we will need a different strategy. The reader is invited to follow along using Example 4.18.
Lemma 4.17. Let w ∈ SW(n, k, l), and let m ≥ max w with no final m in w. The q-enumerator with respect to sdinv of the words obtained from w by inserting s occurrences of m as a double rise (i.e. as the final element of a block) is $q^{\binom{s}{2}} \binom{n-k-l}{s}_q\, q^{\mathrm{sdinv}(w)}$.

Proof. We proceed in the same fashion as in [25, Subsection 4.3]. Let w = (w^1, . . ., w^h), and let us preliminarily define |w^i|_m := #{a | w^i_a < m}, that is, the number of letters strictly smaller than m in w^i. We sort the blocks lexicographically by (−|w^i|_m, i): we say that w^j ≼ w^i if either |w^j|_m > |w^i|_m, or |w^j|_m = |w^i|_m and j < i; in other words, if we disregard all the occurrences of m, the leftmost biggest block is the first in this order, and the rightmost smallest block is the last.
We claim that adding an occurrence of m at the end of w^i creates #{j | w^j ≺ w^i} new diagonal inversions. Notice that, by definition, for each i the values of ht_m on the indices (in w) corresponding to elements in {a | w^i_a < m} are exactly the interval [0, |w^i|_m − 1], as w^i has no entry strictly greater than m. Also notice that, if we insert an occurrence of m at the end of w^i, its ht_m is going to be exactly |w^i|_m. Now, insert an occurrence of m at the end of w^i. Take j such that w^j ≺ w^i. If |w^i|_m < |w^j|_m, then in w^j there are two letters, both strictly smaller than m, whose indices have ht_m equal to |w^i|_m and |w^i|_m − 1 respectively, so whether i < j or i > j, said occurrence of m forms a diagonal inversion with some entry in w^j. If |w^i|_m = |w^j|_m, then by definition i > j and the index of the last letter of w^j has ht_m equal to |w^i|_m − 1, so it forms a diagonal inversion with the occurrence of m at the end of w^i. It is easy to see that no elements in w^j with w^i ≼ w^j create diagonal inversions with the newly inserted letter m: they all have ht_m strictly smaller than |w^i|_m − 1, except for the letter immediately preceding the newly inserted m, with which it does not create a diagonal inversion by definition.
Summarizing, said ordering of the blocks of w is such that inserting an occurrence of m at the end of the a-th block creates a − 1 new diagonal inversions; since we have to insert s such occurrences, and we have to do that in distinct blocks, the new diagonal inversions are q-counted by $q^{\binom{s}{2}} \binom{n-k-l}{s}_q$, which concludes the proof.
Theorem 4.15 now follows easily.
By the same argument used in the proof of Theorem 3.1, using Lemma 3.4, Lemma 4.17, Lemma 3.5, and Lemma 3.7, it follows that $\widetilde{\mathrm{SW}}_q(\mu, k, l)$ satisfies the same recursion as SW_q(µ, k, l), with the same initial conditions, so the two must coincide.
Remark 4.19. Proposition 4.14 holds regardless of the ordering on the blocks we use for the peak insertion: indeed, when k = 0 or l = 0, peak insertions never happen, so this adds a layer of freedom to the definition of sdinv while maintaining the same essential properties. We decided to keep the ordering linear for simplicity; but since every possible ordering defines a statistic with the same distribution, it is possible that a different ordering yields a variant of this statistic that has a simpler or more uniform description.
Unified Delta theorem at t = 0
Since our sdinv statistic reduces to the dinv and inv statistics on ordered multiset partitions when either l = 0 or k = 0, the bijection in Theorem 4.8 extends (11) and (12) also with respect to this statistic. It follows that, when k = 0 or l = 0, our bijection maps the sdinv of a segmented Smirnov word to the dinv of the corresponding decorated Dyck path; therefore, this bijection produces a q-statistic on labelled Dyck paths with both decorated rises and decorated contractible valleys that matches the expected symmetric function expression when t = 0, establishing a very solid first step towards the statement of a unified Delta conjecture.
In particular, we know that $\left.\Theta_{e_k}\nabla e_{n-k}\right|_{t=0} = \sum_{\pi\in \mathrm{OP}(n,n-k)} q^{\mathrm{dinv}(\pi)} x^\pi$, and that $\left.\Theta_{e_l}\nabla e_{n-l}\right|_{t=0} = \sum_{\pi\in \mathrm{OP}(n,n-l)} q^{\mathrm{inv}(\pi)} x^\pi$; so, now that we have a unified combinatorial model for Θ_{e_k} Θ_{e_l} ∇e_{n−k−l} |_{t=0}, it is natural to try to interpret it in the framework of the Delta conjecture. We achieve that with the following definition. While not entirely explicit, this statistic solves the long-standing problem of giving a unique model for the two versions of the Delta conjecture, at least when t = 0; indeed, Theorem 4.1 now follows as a trivial corollary of Theorem 4.8 and Theorem 4.15. It would be very interesting to have an explicit description of the statistic on Dyck paths, maybe using a different variant of the peak insertion (as per Remark 4.19), in order to possibly derive an extension to objects with positive area.
'Catalan' case via fundamental quasisymmetric expansion
In this section, we give an expansion for SW_{x;q}(n, k, l) in terms of fundamental quasisymmetric functions. This is a compact way to rewrite the expansion into segmented Smirnov words. We then use it to compute ⟨SF(n, k, l), e_n⟩, which is the coefficient of s_{1^n} in the Schur expansion of SF(n, k, l). It is thus the (conjectured) graded dimension of the sign-isotypic component of the (1, 2)-coinvariant module. This is often referred to as the 'Catalan' case since, for the classical shuffle theorem, ⟨∇e_n, e_n⟩ yields a (q, t)-analogue of the Catalan numbers.
Fundamental quasisymmetric expansion
We need some definitions before stating our expansion.
Definition 5.1. Let w be a segmented Smirnov word. For 1 ≤ i ≤ n, we say that i is thick if i is initial, or i is not initial and w_{i−1} > w_i (i.e. i − 1 is a descent); we say it is thin otherwise, that is, i is not initial and w_{i−1} < w_i (i.e. i − 1 is an ascent).
Definition 5.2.Let σ be a segmented permutation of size n, i < n, and let j be such that σ j = σ i + 1 (so j = σ −1 (σ i + 1)).We say that σ i is splitting for σ if either of the following holds: • i is thick and j is thin; • i and j are both thin and i < j; • i and j are both thick and j < i.
The main result of this section is the following.
Define the reading order of w by scanning the entries from the smallest value to the biggest, where for a given value we first read the thin entries from right to left, and then the thick ones from left to right. Explicitly, it is the linear order ≤_w on [n] defined by i <_w j if either of the following holds: 1. w_i < w_j; 2. w_i = w_j, i is thin and j is thick; 3. w_i = w_j, i and j are both thin and i > j; 4. w_i = w_j, i and j are both thick and i < j.
The standardization st(w) of w is then the segmented permutation σ of the same shape such that σ_i is the position of i in the reading order ≤_w (so σ_i < σ_j if and only if i <_w j). Example 5.4. Let w = 121|31|2132, where thin entries are underlined in the original. Then st(w) = 152|93|6487.
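The reading order and standardization are easy to implement; the following sketch (our own helper) reproduces Example 5.4:

```python
def standardize(word, shape):
    """Standardization st(w) via the reading order of Section 5:
    thick = initial or preceded by a larger letter; thin otherwise.
    Returns the segmented permutation as a list (same shape as `word`)."""
    n = len(word)
    initial, pos = set(), 0
    for length in shape:
        initial.add(pos)
        pos += length

    def thick(i):
        return i in initial or word[i - 1] > word[i]

    def key(i):
        # by value; thin before thick; thin right-to-left, thick left-to-right
        return (word[i], 1, i) if thick(i) else (word[i], 0, -i)

    st = [0] * n
    for rank, i in enumerate(sorted(range(n), key=key), start=1):
        st[i] = rank
    return st

# Example 5.4: w = 121|31|2132 with shape (3, 2, 4) -> st(w) = 152|93|6487
print(standardize((1, 2, 1, 3, 1, 2, 1, 3, 2), (3, 2, 4)))
# [1, 5, 2, 9, 3, 6, 4, 8, 7]
```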
Lemma 5.5.For any segmented Smirnov word w, we have that w and st(w) have the same ascents, descents, and sminversions.
Proof. Let us write σ = st(w). The fact that w and σ have the same ascents and descents follows immediately from the fact that w_i < w_j ⟹ i <_w j.
We now want to show that (i, j) is a sminversion in w if and only if it is a sminversion in σ.Notice first that i is thick in w if and only if it is thick in σ, and that for i < j and w i = w j , we have σ i < σ j if and only if j is thick.
Suppose that (i, j) is a sminversion in w: then w i > w j , so by construction σ i > σ j .Now if j is initial in w, then it is in σ as well.If w j−1 > w i , then that is the case for σ as well.This deals with the cases (1) and ( 2) in Definition 1.7.Now cases ( 3) and ( 4) can be rewritten as i < j − 1, w j−1 = w i , and j − 1 is thick.This implies that j − 1 will come after i in the reading order, and thus σ j−1 > σ i .In each case (i, j) is a sminversion in σ, as desired.
Suppose now that (i, j) is a sminversion in σ.This means that σ i > σ j and either j is initial or σ j−1 > σ i > σ j .In either case j is thick, and in particular we must have w i > w j .If j is initial in σ, then it is in w as well.And if σ j−1 > σ i , then either w j−1 > w i , or w j−1 = w i and j − 1 is thick.
In either case (i, j) is a sminversion in w, as desired.
Proof of Proposition 5.3.Thanks to the previous lemma, it is enough to show that, for any σ ∈ SW(1 n , k, l), we have x w .
We fix σ.For w a segmented Smirnov word, write x w = x v1 x v2 • • • x vn where v j is the increasing rearrangement of w i 's according to the linear order ≤ w .Comparing the formula above with (13), we are led to show the following claim: w is a segmented Smirnov word such that st(w) = σ if and only if w is a segmented word with the same shape as σ, and for each i, j such that σ j = σ i + 1, we have w i ≤ w j , and w i < w j if i ∈ Split(σ).
Suppose first that w ∈ SW(n) satisfies st(w) = σ.By definition of standardization w and σ have the same shape.If σ j = σ i + 1, in particular σ i < σ j , so necessarily w i ≤ w j .If moreover w i = w j , then by definition i is not splitting in σ, and thus i ∈ Split(σ) implies w i < w j .
Conversely, suppose that w is a segmented word with the same shape as σ, such that if σ j = σ i + 1, then w i ≤ w j , and w i < w j if i ∈ Split(σ).This implies the Smirnov condition for w: indeed, assume we have w i = w i+1 in a block for some i.In the order ≤ σ consider the chain between i and i + 1.No element but the last one can be split by the hypothesis, so the chain consists of thin elements with decreasing values followed by thick elements with increasing values.But no two elements of such a chain can be adjacent in a block, as is checked by direct inspection.This contradicts the hypothesis, and thus w ∈ SW(n).
We must show st(w) = σ, that is, that the reading order ≤_w coincides with ≤_σ. Assume then i <_σ j. This first entails w_i ≤ w_j by hypothesis. If w_i < w_j, then i <_w j by definition of ≤_w. Assume now w_i = w_j; we can suppose i and j consecutive in ≤_σ, that is, σ_j = σ_i + 1. By the hypothesis we have i ∉ Split(σ): since thin and thick indices coincide in w and σ by the hypothesis, the definition of the reading order implies that i <_w j, as desired.
The 'Catalan' case
In this section, we look at the combinatorics of ⟨Θ_{e_k} Θ_{e_l} H_{(n−k−l)}, e_n⟩|_{t=0}. As often happens, this case is significantly simpler, and we are able to provide an explicit formula in terms of q-binomials.
The product of q-binomial coefficients above clearly enumerates sequences v of length n − 1 on the alphabet {0, 1, 2} that have n − k − l − 1 occurrences of 0, k occurrences of 1 and l occurrences of 2, according to inversions, i.e. pairs i < j such that v_i > v_j. The inversion statistic is called mahonian on {0, 1, 2}-words. Our starting point is the following.
Proof. Recall that, if f is symmetric, then its fundamental expansion determines its Schur expansion as follows (see [9, 24]). Here F_α = Q_{S(α),n}, where S(α) is the subset of [n − 1] given by the partial sums of α, and s_α is given by the Jacobi–Trudi formula, and is thus equal to 0 or a Schur function (up to sign). Now the relevant σ are characterized as follows:
• σ_1 = B + l;
• the values 1, 2, . . ., B + l occur in σ from right to left, and are in thick positions, with B of them initial;
• the values B + l + 1, . . ., n occur in σ from left to right in thin positions.
Kasraoui's result above is in fact proved bijectively. It is based on Foata's second fundamental transformation; see for instance [17, §10.6]. Let us describe this construction in our case: we will define a function γ from words in X to itself, by induction on the length. γ sends the empty word to itself, and γ(a) = a for a ∈ X. Assume now that v has length at least 2. The claim is that γ is a bijection such that, for any v, the number of inversions of γ(v) is inv_{(0,1),(0,2)}(v) + maj_{(1,2)}(v). We leave the proof to the reader, the general case being done in [15].
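The mahonian claim above is easy to test computationally; in the sketch below (all helper names are ours), the Gaussian binomials are built via the q-Pascal rule and compared with the brute-force inversion enumerator of {0, 1, 2}-sequences:

```python
from itertools import permutations

def q_binom(m, r):
    """Gaussian binomial [m choose r]_q as {exponent: coefficient},
    via the q-Pascal rule [m,r] = [m-1,r-1] + q^r [m-1,r]."""
    if r < 0 or r > m:
        return {}
    if r == 0 or r == m:
        return {0: 1}
    out = dict(q_binom(m - 1, r - 1))
    for e, c in q_binom(m - 1, r).items():
        out[e + r] = out.get(e + r, 0) + c
    return out

def p_mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return out

def inv_enumerator(n, k, l):
    """q-count sequences of length n-1 over {0,1,2} with n-k-l-1 zeros,
    k ones and l twos, by number of inversions (the mahonian statistic)."""
    base = (0,) * (n - k - l - 1) + (1,) * k + (2,) * l
    poly = {}
    for v in set(permutations(base)):
        e = sum(v[i] > v[j] for i in range(len(v)) for j in range(i + 1, len(v)))
        poly[e] = poly.get(e, 0) + 1
    return poly

n, k, l = 6, 2, 2
print(inv_enumerator(n, k, l) ==
      p_mul(q_binom(n - 1, k), q_binom(n - 1 - k, l)))  # True
```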
6 The maximal case k + l = n − 1

We focus in this last section on various aspects of the case k + l = n − 1 of Theorem 3.1. The combinatorial side now involves only Smirnov words. It also conjecturally gives the graded Frobenius characteristic of the subspace of the (1, 2)-coinvariant space of maximum total degree in the fermionic variables θ_n, ξ_n (cf. (2)), extending (at least conjecturally) the results in [16].
It turns out that this special case is linked to familiar instances of geometric and combinatorial constructions.
Chromatic quasisymmetric functions
Smirnov words are in bijection with proper colorings of the path graph, which means that our symmetric function, suitably specialized, coincides with the chromatic quasisymmetric function of the path of length n, which in turn coincides with the Frobenius characteristic of the representation of S_n on the cohomology of the toric variety V_n associated with the Coxeter complex of type A_{n−1} (see [23, Section 10]).
Given a graph G = (V, E), a proper coloring is a function c : V → Z + such that {i, j} ∈ E =⇒ c(i) ̸ = c(j).If V = [n], a descent of a coloring is an edge {i, j} ∈ E such that i < j and c(i) > c(j).
The chromatic quasisymmetric function of G is defined as $X_G(x; u) := \sum_{c} u^{\mathrm{des}(c)} \prod_{v\in V} x_{c(v)}$, where the sum runs over proper colorings c of G and des(c) is the number of descents of c. This suggests the existence of an extra q-grading on the cohomology of the permutahedral toric variety V_n: indeed the graded Frobenius characteristic of that cohomology is known to be given by ωX_{G_n}(x; u), see [22].
Parallelogram polyominoes
A parallelogram polyomino of size m × n is a pair of north-east lattice paths on an m × n grid, such that the first is always strictly above the second, except at the endpoints (0, 0) and (m, n). A labelling of a parallelogram polyomino is an assignment of positive integer labels to each cell that has a north step of the first path as its left border, or an east step of the second path as its bottom border, such that columns are strictly increasing bottom to top and rows are strictly decreasing left to right. See Figure 4 for two examples of labelled parallelogram polyominoes. In [4] it is conjectured that Θ_{e_{m−1}} Θ_{e_{n−1}} e_1 enumerates labelled parallelogram polyominoes of size m × n with respect to two statistics, one of which is (a labelled version of) the area, and the other is unknown.
It is immediate to see that parallelogram polyominoes of size (n − k) × (k + 1) and area 0 are again in bijection with Smirnov words of length n with k ascents.Indeed, reading the labels of such a polyomino bottom to top, left to right, yields a Smirnov word of size n with k ascents, and the correspondence is bijective.For example, the Smirnov word corresponding to the area 0 polyomino on the right in Figure 4 is 213532142.
In particular, sminversions on Smirnov words define a statistic on this subfamily of parallelogram polyominoes, proving the conjectural identity and partially answering Problem 7.13 from [4] in the case when the area is 0.
6.3 The case q = 0

Note that in this case, it is known [14] that the symmetric function in Theorem 3.1 is the Frobenius characteristic of the (0, 2)-case. It was also shown that the high-degree part of this module has a basis indexed by noncrossing partitions [16]. In particular, this means that there is a bijection between segmented permutations with one block (that is, permutations) with zero sminv, and noncrossing partitions. It would be interesting to see if, using this combinatorial model, one could get a combinatorial basis for the (1, 2)-coinvariant ring.
This rests on the following observation: a permutation has zero sminv if and only if it is 231-avoiding.

Proof. Let σ be a permutation, and suppose that it has a 231 pattern, that is, that there exist indices i < j < k such that σ_k < σ_i < σ_j. Let m := min{a | j < a ≤ k, σ_a < σ_i}; by definition, i < j ≤ m − 1, and σ_{m−1} > σ_i, so (i, m) is a sminversion of σ. It follows that permutations with zero sminv are 231-avoiding. Conversely, a sminversion in a permutation (which has a single block) corresponds to a 231 pattern, which concludes the proof.
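This equivalence is easy to test by brute force, reusing the sminv sketch from Section 1 (a one-block segmented permutation is just a permutation):

```python
from itertools import permutations

def has_231(sigma):
    """True if sigma (one-line notation) contains a 231 pattern."""
    n = len(sigma)
    return any(sigma[k] < sigma[i] < sigma[j]
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

n = 5
perms = list(permutations(range(1, n + 1)))
zero_sminv = {s for s in perms if sminv(s, (n,)) == 0}   # one block
avoiders = {s for s in perms if not has_231(s)}
print(zero_sminv == avoiders, len(avoiders))  # True 42, the Catalan number C_5
```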
Let π be a noncrossing partition, and let ϕ(π) be the permutation that, in one line notation, is written by listing the blocks of π sorted by their smallest element, with the elements of each block sorted in decreasing order.Let us call decreasing run of a permutation σ a maximal subsequence of consecutive decreasing entries of σ (in one line notation): then the blocks of π correspond to the decreasing runs of ϕ(π).For instance, if π = {{1, 2, 5}, {3, 4}, {6, 8, 9}, {7}}, then ϕ(π) = 521439867.
The map ϕ defines a folklore bijection between noncrossing partitions of size n with l + 1 blocks and 231-avoiding permutations with l descents.This recovers known numerology about the (0, 2)-case.
Remark 6.2. More generally, standard segmented permutations with zero sminv can be characterized as 231-avoiding permutations where the letters of a block are all smaller than the letters of the blocks to its right. Of these, there are $\binom{2n-1}{n}$, as is easily shown, and thus we recover the total dimension of the (0, 2)-coinvariant ring.
Figure 3: Four cases for the occurrence of the maximal label m, and how to delete it.
Example 4.18. Let us consider a segmented Smirnov word with maximal letter 3, such that 3 is never a final element. We record ht_3 for every letter, |w^i|_3 for every block, and indicate the ordering on the blocks induced by ≺: the word is 21 | 121 | 321 | 2 | 2132 | 12, with ht_3 values 01 | 012 | 001 | 0 | 0122 | 01. Let us insert a 3 as a final element into the third block of the word (the letter shown in red in the original figure): we get the word 21 | 121 | 3213 | 2 | 2132 | 12, with ht_3 values 01 | 012 | 0012 | 0 | 0122 | 01. This newly inserted letter creates diagonal inversions only with 3 letters: exactly one in each of the preceding (with respect to ≺) blocks (indicated in blue in the original figure).
Definition 1.6. An index i ∈ {1, . . ., n} is called initial (resp. final) if it corresponds to the first (resp. last) position of a block, i.e. if it is of the form α_1 + · · · + α_r + 1 (resp. α_1 + · · · + α_r) for some r. | 2023-12-08T06:42:50.355Z | 2023-12-06T00:00:00.000 | {
"year": 2023,
"sha1": "d19d8e66bc7d6ffeeadfd4801604df1bcc9aa2f1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.aim.2024.109793",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "b3d5e5cb3ed48f1683e5f8b48c0c7c6db55baf8a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
221836938 | pes2o/s2orc | v3-fos-license | SUMO4 small interfering RNA attenuates invasion and migration via the JAK2/STAT3 pathway in non-small cell lung cancer cells
Small ubiquitin-like modifier 4 (SUMO4) is the latest member of the sumoylation family; it enhances protein stability, regulates the distribution and localization of proteins, and affects their transcriptional activity. However, the role of SUMO4 in non-small cell lung cancer (NSCLC) has not yet been reported. The present study first demonstrated that SUMO4 was upregulated in a number of tissues from patients with NSCLC. Immunohistochemistry was performed to demonstrate the expression level of SUMO4 in lung cancer tumor tissues. Following transfection, the EMT status and signaling pathway activation regulated by SUMO4-siRNA were assessed by western blotting. Transwell and wound healing assays were performed to investigate the regulatory effect of SUMO4-siRNA on cell migration and invasion. A Cell Counting Kit-8 assay was performed to investigate whether SUMO4-siRNA affected the chemosensitivity of the NSCLC cells to cisplatin. Statistical analysis of the immunohistochemical results showed that the overexpression of SUMO4 was significantly associated with sex, tumor type, history of smoking, T stage and poor prognosis. It was also identified that SUMO4 small interfering RNA attenuated invasion and migration in NSCLC cell lines, and modulated chemosensitivity to cisplatin, via inhibition of the JAK2/STAT3 pathway. In conclusion, SUMO4 may play an important role in the poor prognosis of patients with NSCLC. The present study indicates that SUMO4 may be a potential therapeutic target for NSCLC.
Introduction
Lung cancer has a high degree of malignancy and ranks first as the cause of cancer-associated mortality in the United States (1). Non-small cell lung cancer (NSCLC) accounts for 85% of all lung cancer cases in the United States (2). Although the diagnosis and treatment methods of NSCLC are continuously improving, metastasis via the lymph nodes and blood system occurs at an early stage, and due to insidious onset and rapid progression, the prognosis of NSCLC remains unsatisfactory. Therefore, an in-depth study on the mechanism of NSCLC metastasis is of great significance for its prevention, and to develop novel molecular targeted therapeutic drugs.
The JAK/STAT signaling pathway was found to have sustained abnormal activation in various malignant tumor cells, such as prostate cancer, sarcomas and lymphomas, regulating tumor proliferation, apoptosis and metastasis (3). As a member of STAT family, STAT3 is closely associated with the prognosis of NSCLC (4). JAK2 activates STAT3 by phosphorylation, allowing it to enter the nucleus and perform its biological function (3). It has been reported that the activation of the JAK2/STAT3 signaling pathway induced metastatic NSCLC (5). Due to the importance of JAK2/STAT3 signaling in tumor development, targeted therapy for this pathway has also become a popular research topic in the study of NSCLC. At present, although several types of JAK2/STAT3-targeted drugs are undergoing clinical trial (and have made progress), their therapeutic effects remain unsatisfactory. Therefore, identifying new targets for JAK2/STAT3 sensitization may enhance the clinical effect of JAK2/STAT3 target-based therapy and improve the prognosis of patients with NSCLC.
Sumoylation is a ubiquitin-like post-translational modification that enhances the stability of protein and regulates their distribution and localization (6). The transcriptional activity of transcription factors can be modulated through sumoylation (7). To date, four members of the small ubiquitin-like modifier (SUMO) family (SUMO1, 2, 3 and 4) have been cloned and identified (7). SUMO4 is a newly discovered member of the sumoylation family, which is mainly expressed in the immune-associated organs and kidneys. It has been confirmed to be closely associated with type I diabetes (8), coronary heart disease (9), psoriasis (10), Behcet's disease (11) and other diseases. However, research into the potential regulatory effects of SUMO4 on tumorigenesis is somewhat lacking.
A study has shown that SUMO4 negatively regulates NF-κB transcriptional activity in a diabetic model (12). Mo et al (13) found that SUMO4 decreased oxidative stress by increasing antioxidant enzymatic activity and DNA damage signaling-associated protein activity, thus activating cellular self-protection mechanisms. SUMO4 also directly decreased the DNA-binding activity of the STAT protein, leading to the inhibition of JAK/STAT signaling (14). These findings suggest that SUMO4 may be associated with tumor development and progression. It has been reported that SUMO4 expression was increased in thyroid cancer (15). However, the expression and function of SUMO4 during the tumorigenesis of NSCLC remain unknown. The present study investigated the expression of SUMO4 in NSCLC and identified the mechanisms of SUMO4 in augmenting the proliferation, invasion and migration of NSCLC cells.
Materials and methods
NSCLC patient samples. A total of 100 NSCLC tissues and 10 adjacent non-cancerous tissues (defined as tissue at least 3 cm from the cancerous region) were collected from 100 patients (71 men and 29 women) during surgery at Zhejiang Cancer Hospital (Zhejiang, China) between January 2009 and March 2011. All patients gave written informed consent to participate in the study and to allow their samples to be biologically analyzed. The age of the patients with NSCLC enrolled in the present study ranged from 39 to 76 years (mean, 61.3 years). The exclusion criteria included patients with other types of cancer and those who had received preoperative chemoradiotherapy. Control samples were obtained from the same patient at a site at least 3 cm from the tumor and were approved as control samples by a pathologist. Tissue sections were fixed with 10% formalin for at least 24 h at room temperature and subsequently embedded in paraffin for immunohistochemistry. The tumors were staged according to the pathological tumor/node/metastasis (pTNM) classification (7th edition) of the International Union against Cancer (16). All procedures for sample collection and processing were ratified by the Institutional Review Board of Zhejiang Cancer Hospital (Hangzhou, China; approval no. IRB-2016-134).

Cell lines. Human NSCLC cell lines A549, NCI-H1650 and SK-MES-1 were purchased from The Cell Bank of Type Culture Collection of the Chinese Academy of Sciences. All cell lines were cultured in DMEM (Cytiva) supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin (Transgen Biotech Co., Ltd.) in humidified air at 37˚C (5% CO₂).
Immunohistochemical (IHC) staining. IHC staining of tumor tissues was performed on 5-µm sections. The general procedure for IHC staining was performed as previously described (17). The sections were blocked with blocking serum from the ABC Vectastain kit (cat. no. PK-6100; Vector Laboratories, Inc.) at room temperature for 30 min, followed by incubation with a primary antibody against SUMO4 (1:50; cat. no. ab126606; Abcam) overnight at 4˚C. The sections were then incubated with a horseradish peroxidase-conjugated mouse anti-rabbit Ig antibody at room temperature for 1 h, followed by staining with the chromogen diaminobenzidine (Zhongshan, Beijing, People's Republic of China) until a brown color developed. The slides were counterstained with Mayer's hematoxylin at room temperature for 10 min. Random fields from each slide were viewed under a light microscope (Olympus DP73; Olympus Corporation) at x20 magnification.
Western blotting. Proteins extracted from cells were denatured in RIPA buffer (150 mM NaCl, 0.1% SDS, 25 mM Tris-HCl pH 7.6, 1% sodium deoxycholate and 1% NP-40) with protease inhibitors. Protein concentrations were determined using a BCA protein assay kit (cat. no. P0010S; Beyotime Institute of Biotechnology). Equal amounts of total protein (50 µg) were analyzed by SDS-PAGE. Proteins were separated on a 12% polyacrylamide gel and transferred to a nitrocellulose membrane. The membrane was blocked for 1 h at room temperature using blocking buffer (0.5% fat-free milk). After blocking, the blot was probed with primary antibodies (dilution, 1:1,000) at 4˚C overnight. Subsequently, the blot was incubated with HRP-labeled secondary antibodies diluted in PBS buffer (1:2,000; cat. no. SA00001-2; ProteinTech Group, Inc.). ECL reagent was used for visualization. The images were obtained using the Bio-Rad Imaging System. Signal quantification was obtained using Quantity One software (version 4.6.6; Bio-Rad Laboratories, Inc.) and normalized to GAPDH.
Wound-healing assay. A total of 5x10⁵ H1650, A549 and SK-MES-1 cells were seeded independently into each well of 6-well plates overnight. A scratch was made on the monolayers with a 200-µl pipette tip, and the wells were washed with PBS to remove the detached cells. Fresh DMEM/F12 medium without serum was added into each well of the 6-well plate. The wounded areas were observed and imaged using a light microscope (magnification, x100) at 0 and 24 h. The migration results were quantified using ImageJ software (version 1.52; National Institutes of Health).
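As an aside, the wound-healing quantification step can be expressed as a short calculation; the sketch below (our illustration, with hypothetical ImageJ area measurements in pixels²) uses a common percent-closure definition, which the paper does not spell out:

```python
def wound_closure_percent(area_0h: float, area_24h: float) -> float:
    """Percent of the initial scratch area closed after 24 h."""
    return (area_0h - area_24h) / area_0h * 100.0

# hypothetical scratch areas measured in ImageJ
print(wound_closure_percent(120_000, 45_000))  # -> 62.5
```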
Transwell invasion assay. The trypsinized A549, H1650 and SK-MES-1 cells were washed with PBS and resuspended in serum-free DMEM. Then, 200 µl cell suspension (1x10⁵/well) was added to the upper chamber with a 50-µl solidified Matrigel-coated membrane. The lower chamber was filled with 800 µl DMEM supplemented with 10% FBS. After 24 h of incubation, the chambers were fixed with 100% methanol for 20 min at room temperature, followed by staining with 0.1% crystal violet for 20 min at room temperature. Images were captured with an Olympus fluorescence microscope (magnification, x100).
CCK-8 assay. Transfected H1650, A549 and SK-MES-1 cells were collected (100 µl) at 24 h post-transfection and seeded into 96-well plates at a density of 3x10³/well independently. Following incubation for 0, 12, 24, 36 and 48 h at 37˚C, 10 µl CCK-8 reagent was added into each well, and the cells were incubated at 37˚C for 2 h. The absorbance was measured at a wavelength of 450 nm using a microplate reader (Bio-Rad Laboratories, Inc.).
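The inhibition-rate calculation behind the later chemosensitivity results is not given explicitly in the paper; a common blank-corrected definition is sketched below (the OD450 values and the formula itself are assumptions for illustration):

```python
def inhibition_rate(od_treated: float, od_control: float, od_blank: float) -> float:
    """Percent inhibition of proliferation from blank-corrected OD450 readings."""
    return (1 - (od_treated - od_blank) / (od_control - od_blank)) * 100.0

# hypothetical OD450 readings: cisplatin-treated, untreated control, blank well
print(round(inhibition_rate(0.62, 1.10, 0.10), 1))  # -> 48.0
```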
Statistical analysis. Statistical analysis was performed using SPSS 23.0 software (IBM Corp.). All experiments were performed in triplicate and data are presented as the mean ± standard error of the mean. The association between the expression of SUMO4 and clinicopathological parameters was examined using the χ² test. The overall survival (OS) and disease-free survival (DFS) curves were produced using the Kaplan-Meier method, and the difference in survival was determined using the log-rank test. Unpaired Student's t-test was performed for western blotting and wound-healing data analysis. One-way ANOVA with Tukey's test was performed for the invasion assay and phosphorylation analysis. Numerical data are presented as the mean ± standard deviation. P<0.05 was considered to indicate a statistically significant difference.
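For readers who want to reproduce this style of analysis outside SPSS, a minimal Python sketch follows (hypothetical counts and survival times, not the study data; the lifelines package supplies the Kaplan-Meier estimator and log-rank test):

```python
import numpy as np
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# chi-square test of association: SUMO4 status (rows) x smoking history (cols)
table = np.array([[40, 29],    # SUMO4-positive: smokers, non-smokers
                  [10, 21]])   # SUMO4-negative
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Kaplan-Meier estimate and log-rank test for OS by SUMO4 status
rng = np.random.default_rng(0)
t_pos = rng.exponential(24, 69)   # months to event, SUMO4-positive (hypothetical)
t_neg = rng.exponential(40, 31)   # months to event, SUMO4-negative (hypothetical)
kmf = KaplanMeierFitter()
kmf.fit(t_pos, event_observed=np.ones_like(t_pos), label="SUMO4+")
print("median OS (SUMO4+):", kmf.median_survival_time_)
res = logrank_test(t_pos, t_neg)  # all events observed by default
print(f"log-rank p = {res.p_value:.4f}")
```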
Results
Overexpression of SUMO4 is associated with a poor prognosis in human NSCLC. In the present study, 10 adjacent normal lung tissues and 100 NSCLC tissues were collected for IHC staining, in order to explore the association between SUMO4 expression and NSCLC prognosis. The results revealed that SUMO4 was differentially expressed in the cytoplasm of these tissues. The tissues were divided into 'positive' and 'negative' groups according to SUMO4 expression (Fig. 1). The expression of SUMO4 in NSCLC tissues was significantly higher compared with the adjacent normal lung tissue (Fig. 1). Quantitative analysis of the IHC staining results indicated that 69% of NSCLC samples were positive for SUMO4, while all adjacent normal lung tissues were negative by IHC analysis (Table I; P=1.6862×10⁻⁵).
In order to demonstrate the association between SUMO4 expression and NSCLC prognosis, the expression of SUMO4 was systematically analyzed relative to the clinicopathological characteristics of NSCLC from 100 patients. Based on the IHC staining results of all samples, the following associations were identified: Sex, tumor type, history of smoking and T stage were significantly associated with the expression of SUMO4 (P<0.05), however, age, body mass index (BMI), hypertension, diabetes, history of drinking, tumor location, tumor size, tumor differentiation, N stage, tumor stage and recurrence were not associated with the expression of SUMO4 (P>0.05) (Table II).
To clarify the associations between SUMO4 and NSCLC prognosis, the OS and DFS curves of these patients were generated using the Kaplan-Meier method. It was identified that positive expression of SUMO4 was significantly associated with short DFS and OS times (Fig. 2), indicating poor prognosis.
SUMO4 siRNA decreases cell migration, invasiveness and epithelial-mesenchymal transition (EMT) in NSCLC cells.
To explore whether SUMO4 regulates the migratory and invasive abilities of NSCLC cells, SK-MES-1, NCI-H1650 and A549 cells, which expressed SUMO4, were transfected with SUMO4 siRNA or negative control siRNA (Fig. 3A). EMT marker proteins including N-cadherin, E-cadherin and vimentin were examined by western blotting, as shown in Fig. 3B. A notable decrease in vimentin and N-cadherin, as well as an increase in E-cadherin, was observed. Cell migration and invasion were evaluated by wound-healing and Transwell assays (Fig. 3C-F). It was found that SUMO4 siRNA significantly inhibited the wound closure and invasion capacities of A549, H1650 and SK-MES-1 cells.
SUMO4 siRNA promotes the sensitivity of NSCLC cells to chemotherapy. To identify whether SUMO4 expression affects NSCLC chemosensitivity, control and SUMO4-siRNA-transfected A549, H1650 and SK-MES-1 cells were treated with cisplatin at various concentrations, and the CCK-8 assay was used to assess the inhibition of the cell proliferation rate. It was found that the inhibitory effect on cell proliferation increased with increasing concentrations of cisplatin. SUMO4 silencing increased the inhibition rate compared with the NC group in a dose-dependent manner (Fig. 4).
SUMO4 siRNA decreases cell invasiveness via the JAK2/STAT3 pathway in NSCLC cells.
To explore the underlying molecular mechanism of the decreased invasion following SUMO4 depletion, the JAK2/STAT3 pathway was specifically analyzed in the present study. Using SUMO4 siRNA combined with the JAK2 kinase inhibitor AG490, it was found that knocking down SUMO4 slightly decreased JAK2 and STAT3 activity, as reflected by the phosphorylated forms of JAK2 and STAT3 (Fig. 5). In addition, inactivation of JAK2 by a specific inhibitor (AG490) resulted in decreased invasive ability of the cells, as demonstrated by the Transwell invasion assay (Fig. 3D).
Discussion
Sumoylation has been shown to play important roles in various biological processes such as signal transmission, nuclear transportation, gene expression regulation and cell cycle regulation (18). It also participates in the regulation of mitochondrial division, as well as the maintenance of genome integrity (18). Sumoylation is closely associated with several human diseases, such as cancer, diabetes, Parkinson's disease and Alzheimer's disease (19). For example, sumoylation of β-catenin regulated the proliferation of myeloma (20), and CDK6 sumoylation regulated the cell cycle arrest of glioma cells (21). In 2004, Guo et al (23) and Bohren et al (24) simultaneously discovered the SUMO4 gene on chromosome 6q25, which contains 702 nucleotide residues and encodes 95 amino acids. Guo et al (23) also found a close association between SUMO4 and type I diabetes, suggesting it may be a novel susceptibility gene for type I diabetes (20). In addition, it was also confirmed that SUMO4 decreased the activity of NF-κB by binding with IκB (23). Mo et al (13) showed that SUMO4 initiated cell self-protection mechanisms by increasing the activity of antioxidant enzymes and DNA damage signaling proteins, thus decreasing oxidative stress. These results indicated that SUMO4 may play an important role in the development of cancer. A previous study found that SUMO4 expression was significantly increased in thyroid cancer (15). These findings support an association between SUMO4 and tumor progression. However, the study of SUMO4 in NSCLC has not been reported, and the mechanism of SUMO4 in tumors is still unclear.
The present study first demonstrated the expression of SUMO4 in NSCLC. The expression of SUMO4 in NSCLC tissues was significantly higher than in the adjacent normal lung tissues. These results showed that SUMO4 may play an important role in the occurrence and development of NSCLC. In the clinicopathological analysis, SUMO4 positivity in men, squamous cell carcinoma, patients with a smoking history, and T3 and T4 stage tumors was found to be significantly higher compared with women, those with adenocarcinoma, those without a smoking history, and T1 and T2 stage tumors (P<0.05). By contrast, older age, lower BMI, hypertension, diabetes, a moderate or heavy drinking history, larger tumors, poor differentiation, lymph node metastasis, stage III or IV disease, and recurrence were associated with SUMO4 positivity, although not significantly. These results showed that SUMO4 was closely associated with T stage but showed no significant association with N stage or overall tumor stage, which may be due to the limited number of specimens. Interestingly, it was found that men with squamous cell carcinoma and a history of smoking had high positivity for SUMO4. Since the smoking rate in men is significantly higher than in women, and the association of smoking with squamous cell carcinoma is stronger than with adenocarcinoma, it is speculated that smoking may increase positivity for SUMO4 and contribute to the occurrence and development of NSCLC. In the survival analysis, patients positive for SUMO4 expression were found to have a poor prognosis. These results showed that SUMO4 may be a novel prognostic factor for NSCLC.
The mechanism underlying the regulatory effect of SUMO4 in the A549, H1650 and SK-MES-1 cell lines was further explored. The expression of SUMO4 enhanced the invasive and migratory abilities, and increased EMT, in all three cell lines. These results imply that SUMO4 expression could promote the metastatic capability of NSCLC. It was also found that SUMO4 expression decreased cisplatin chemosensitivity. Thus, SUMO4 expression may be involved in chemotherapy resistance in NSCLC.
Wang et al (14) found that SUMO4 decreased the DNA-binding activity of STAT, thus inhibiting the JAK/STAT signaling pathway. In the present study, the inhibition of SUMO4 was found to inhibit invasion and migration by downregulating the activation of the JAK2/STAT3 pathway in NSCLC cells. SUMO4 may regulate the stability of molecules that activate the JAK2/STAT3 signaling pathway, leading to tumor progression. However, the molecular mechanism still requires further exploration.
There are limitations to the present study. Firstly, due to the limited number of specimens, some subgroups had insufficient samples; the sample size needs to be increased to obtain more accurate results. Secondly, in vivo studies should be performed to explore the effect of SUMO4 on NSCLC metastasis. Furthermore, the mechanisms underlying the effect of SUMO4 on JAK2/STAT3 signaling pathway activation require further exploration in future studies, for example by constructing SUMO4 expression plasmids.
In conclusion, SUMO4 was found to be expressed in NSCLC and to be significantly associated with sex, tumor type, history of smoking, T stage and poor prognosis in NSCLC. SUMO4 plays a significant role in cell invasion and migration via JAK2/STAT3 pathway activation in NSCLC cell lines, which implies that SUMO4 may be a potential therapeutic target for NSCLC with positive expression of SUMO4.
Ethics approval and consent to participate
The present study was approved by the Ethics Committee of Zhejiang Cancer Hospital and written informed consent was provided by all patients. All procedures involving human participants were performed in accordance with the ethical standards of the institutional and national research committee (Hangzhou, China; approval no. IRB-2016-134).
Patient consent for publication
Not applicable.
"year": 2020,
"sha1": "28b58828ca71c9266022ab0366051e4182f442c2",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ol.2020.12088/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28b58828ca71c9266022ab0366051e4182f442c2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Khavinson conjecture for hyperbolic harmonic functions on the unit ball
In this paper, we prove the Khavinson conjecture for hyperbolic harmonic functions on the unit ball. This conjecture was partially solved in \cite{JKM2020}.
Introduction
For n ≥ 2, let R^n denote the n-dimensional Euclidean space. We use B^n and S^{n−1} to denote the unit ball and the unit sphere in R^n, respectively. A mapping u ∈ C²(B^n, R) is said to be hyperbolic harmonic if Δ_h u = 0, where Δ_h is the hyperbolic Laplacian operator defined by
$$\Delta_h u(x) = (1-|x|^2)^2 \Delta u(x) + 2(n-2)(1-|x|^2) \sum_{i=1}^{n} x_i \frac{\partial u}{\partial x_i}(x);$$
here Δ denotes the Laplacian on R^n. Clearly, for n = 2, hyperbolic harmonic and harmonic functions coincide.
If φ ∈ L¹(S^{n−1}, R), we define the invariant Poisson integral of φ in B^n by
$$P_h[\varphi](x) = \int_{S^{n-1}} \left(\frac{1-|x|^2}{|x-\zeta|^2}\right)^{n-1} \varphi(\zeta)\, d\sigma(\zeta), \qquad x \in B^n.$$
For more information about hyperbolic harmonic functions we refer to Stoll [21] and Burgeth [3, 4].
Conjecture 1. Let p ∈ (1, ∞], n ≥ 3 and x ∈ B^n \ {0}. Then, where n_x = x/|x|, and t_x is any unit vector such that ⟨t_x, x⟩ = 0. Moreover, if p = n or p = ∞, then C_p(x, ℓ) does not depend on ℓ.
Khavinson [12] obtained a sharp pointwise estimate for the radial derivative of bounded harmonic functions on the unit ball of R³ and conjectured that the same estimate holds for the norm of the gradient of bounded harmonic functions. For harmonic functions this conjecture was formulated by Kresin and Maz'ya in [14], and in [15] they considered the half-space analogue of the above conjecture. See [16, Chapter 6] for various Khavinson-type extremal problems for harmonic functions. Kalaj [8] confirmed the conjecture for n = 4 and Melentijević [19] confirmed it in R³. Marković [18] solved the Khavinson problem for points near the boundary of the unit ball. The general conjecture was recently proved by Liu [17].
By computing the gradient of the Poisson-Szegö kernel and using the Möbius transformation as a substitution, we obtain the following integral representation.

Lemma 1 ([5]). For any p ∈ (1, ∞], x ∈ B^n and ℓ ∈ S^{n−1}, we have […]

Moreover, one can easily deduce the following.

Lemma 2 ([5]). For any p ∈ (1, ∞], x ∈ B^n, ℓ ∈ S^{n−1} and unitary transformation A in R^n, we have […]

For p ∈ (1, ∞] and ℓ ∈ S^{n−1}, let […]. So, in view of (2.1), we have […]. Our main result is the following theorem, solving the Khavinson conjecture for hyperbolic harmonic functions.
One of our main tools is the method of slice integration on spheres.
Theorem A ([2, Theorem A.5]). Let f be a Borel measurable, integrable function on S^{n−1}. If 1 ≤ k < n, then
$$\int_{S^{n-1}} f \, d\sigma_n = \frac{k\,V(B^k)}{n\,V(B^n)} \int_{B^{n-k}} \big(1-|x|^2\big)^{(k-2)/2} \int_{S^{k-1}} f\big(x, \sqrt{1-|x|^2}\,\zeta\big)\, d\sigma_k(\zeta)\, dV_{n-k}(x),$$
where V(B^n) denotes the volume of the unit ball, which is given by
$$V(B^n) = \frac{\pi^{n/2}}{\Gamma\big(\tfrac{n}{2}+1\big)}, \tag{2.5}$$
and σ_n denotes the normalized measure on the sphere S^{n−1}.
We will consider two special cases for k = n − 1 and k = n − 2. The corresponding formulas are useful when the integrand function f depends only on one or two variables.
(1) If n ≥ 2 and f(η) depends only on the first variable η₁, then
$$\int_{S^{n-1}} f\, d\sigma_n = \frac{\Gamma\big(\tfrac{n}{2}\big)}{\sqrt{\pi}\,\Gamma\big(\tfrac{n-1}{2}\big)} \int_{-1}^{1} f(t)\,(1-t^2)^{(n-3)/2}\, dt.$$
(2) If n ≥ 3 and f(η) depends only on the first two variables η₁, η₂, then
$$\int_{S^{n-1}} f\, d\sigma_n = \frac{n-2}{2\pi} \int_{B^2} f(z)\,(1-|z|^2)^{(n-4)/2}\, dA(z),$$
where dA(z) denotes the Lebesgue measure on the unit disc B².
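The one-variable case can be checked numerically; the following Python sketch (ours, not part of the paper) verifies it for n = 4 and f(η) = η₁² by Monte Carlo sampling on the sphere:

```python
import numpy as np
from math import gamma, sqrt, pi
from scipy.integrate import quad

n = 4
rng = np.random.default_rng(1)
x = rng.normal(size=(200_000, n))
eta = x / np.linalg.norm(x, axis=1, keepdims=True)   # uniform points on S^{n-1}
lhs = np.mean(eta[:, 0] ** 2)                        # spherical average of eta_1^2

c_n = gamma(n / 2) / (sqrt(pi) * gamma((n - 1) / 2))
rhs, _ = quad(lambda t: t**2 * (1 - t**2) ** ((n - 3) / 2), -1, 1)
print(lhs, c_n * rhs)   # both ~ 1/n = 0.25
```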
As the integrand function depends only on η₁ and η₂, the method of slice integration on spheres reduces an integral on the sphere to an integral on the unit disc B². Using polar coordinates on the unit disc, let us denote η₁ = r cos θ and η₂ = r sin θ. Thus […]

To find the extreme values of J_q(r, ρ; γ), we will consider the following more general integral […] (2.11)

The function I_{a,b} has the following properties:
(1) I_{a,b} is π-periodic.
(2) I_{a,b} is an even function.

Thus, we will consider the behaviour of I_{a,b} only on [0, π/2].

(1) If a = 0 or a = 1, then γ ↦ I(a, b; γ) is constant.
Thus we obtain our main theorem.
Computation of C_p(x)
We will start with two particular cases p = n or p = ∞.
Case p = n.
The conjugate exponent of n is q = n/(n − 1).

Recall some properties of the beta function: for a, b > 0,
$$B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}.$$
Therefore, […]

3.3. Case 1 < p < n. The following lemma is useful to compute K_p(x; ℓ₀).
As we integrate a function which depends on one variable on S^{n−1}, by slice integration on spheres we have […]. By Lemma 5 (2), this yields […]. Finally, […]

3.4. Case p > n. First, we will need the following lemma.

Lemma 6. Let p ∈ N, let q be a positive real number and let n ≥ 3. Then (1) […] (3.12); (2) […]

Proof. Using (2.8), we have integrals of the form ∫ cos^k θ |sin θ|^q dθ.
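The omitted computations reduce such trigonometric integrals to beta functions. For the reader's convenience we record the standard identities presumably being used (added here, not from the paper; valid for k, q > −1):
$$\int_{0}^{\pi/2} \cos^{k}\theta \,\sin^{q}\theta \, d\theta = \tfrac{1}{2}\, B\!\Big(\tfrac{k+1}{2}, \tfrac{q+1}{2}\Big) = \frac{\Gamma\big(\tfrac{k+1}{2}\big)\,\Gamma\big(\tfrac{q+1}{2}\big)}{2\,\Gamma\big(\tfrac{k+q}{2}+1\big)},$$
$$\int_{0}^{2\pi} \cos^{k}\theta \,\lvert \sin\theta \rvert^{q}\, d\theta = \begin{cases} 2\, B\!\big(\tfrac{k+1}{2}, \tfrac{q+1}{2}\big), & k \text{ even},\\ 0, & k \text{ odd}. \end{cases}$$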
Using the following well-known transformation formula due to Kummer for ₂F₁(a, a + 1/2; c; 4v…) […], which is a slight variation of the one given in [1, Section 15.3 (20)].
"year": 2021,
"sha1": "503db4c51ecd313c7e107727d6f545621e03d197",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "503db4c51ecd313c7e107727d6f545621e03d197",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
The Pharmacokinetics/Pharmacodynamic Relationship of Durlobactam in Combination With Sulbactam in In Vitro and In Vivo Infection Model Systems Versus Acinetobacter baumannii-calcoaceticus Complex
Abstract

Sulbactam-durlobactam is a β-lactam/β-lactamase inhibitor combination currently in development for the treatment of infections caused by Acinetobacter, including multidrug-resistant (MDR) isolates. Although sulbactam is a β-lactamase inhibitor of a subset of Ambler class A enzymes, it also demonstrates intrinsic antibacterial activity against a limited number of bacterial species, including Acinetobacter, and has been used effectively in the treatment of susceptible Acinetobacter-associated infections. Increasing prevalence of β-lactamase-mediated resistance, however, has eroded the effectiveness of sulbactam in the treatment of this pathogen. Durlobactam is a rationally designed β-lactamase inhibitor within the diazabicyclooctane (DBO) class. The compound demonstrates a broad spectrum of inhibition of serine β-lactamase activity with particularly potent activity against class D enzymes, an attribute which differentiates it from other DBO inhibitors. When combined with sulbactam, durlobactam effectively restores the susceptibility of resistant isolates through β-lactamase inhibition. The present review describes the pharmacokinetic/pharmacodynamic (PK/PD) relationship associated with the activity of sulbactam and durlobactam established in nonclinical infection models with MDR Acinetobacter baumannii isolates. This information aids in the determination of PK/PD targets for efficacy, which can be used to forecast efficacious dose regimens of the combination in humans.
Acinetobacter baumannii is increasingly associated with serious nosocomial infections often accompanied by high rates of morbidity and mortality [1][2][3][4]. The majority of A. baumannii isolates are multidrug-resistant (MDR), resulting in limited treatment options [5]. Carbapenem-resistant A. baumannii has been identified as a global threat and has been designated as an urgent unmet medical need, requiring new treatment options [6,7]. Sulbactam-durlobactam (SUL-DUR) is being developed for the treatment of Acinetobacter baumanniicalcoaceticus complex (ABC) infections, including those caused by MDR A. baumannii.
CLINICAL USE OF SULBACTAM TO TREAT ACINETOBACTER INFECTIONS
Sulbactam is commonly used to treat A. baumannii infections due to its ability to inhibit penicillin-binding proteins (PBP1 and PBP3) in Acinetobacter spp., leading to cell death of the bacteria [8]. Clinically, Acinetobacter infections have been successfully treated with a high-dose 2:1 combination of ampicillin:sulbactam. Although some in vitro studies have suggested that ampicillin and sulbactam may act synergistically [9], clinical studies utilizing sulbactam administration alone suggest that the intrinsic activity of sulbactam is responsible for the observed efficacy of this combination versus A. baumannii infections encountered clinically [10]. As described below, these observations have also been supported in experimental infection models conducted in in vitro and in vivo studies [11][12][13][14][15][16][17][18][19].
SULBACTAM EFFICACY OBSERVED IN ANIMAL INFECTION MODELS
In a neutropenic murine lung infection model, the efficacy of sulbactam was evaluated following administration of a 100-mg/kg dose every 3 hours (q3h) versus a susceptible A. baumannii isolate, SAN-94040, with a sulbactam minimum inhibitory concentration (MIC) of 0.5 µg/mL and a lesssusceptible isolate, RCH-69, with a sulbactam MIC of 8 µg/mL [13]. The mice were rendered neutropenic and treatment was initiated 3 hours after infection of the lung. Treatment was conducted for 12 hours (4 doses of sulbactam administered intraperitoneally) and lungs were harvested 3 hours after the final dose to assess bacterial burden. Drug concentrations were determined systemically as well as in the lung. Serum concentrations of sulbactam were above the MIC for more than 3 hours vs SAN-94040 and 1.7 hours vs RCH-69. In the lung, concentrations of sulbactam exceeded the MIC for 4.8 and 1.3 hours for SAN-94040 and RCH-69, respectively. These treatments translated to end-of-treatment lung tissue burdens of 4.31 ± 0.19 and 6.4 ± 1.3 log 10 colony-forming units (CFU)/g for SAN-94040 and RCH-69, respectively. Vehicle controls grew greater than 7 log 10 CFU/g for both isolates. A bactericidal effect with a greater than 2-log 10 CFU reduction from baseline was observed for SAN-94040 but not RCH-69. These findings were largely consistent with the pulmonary concentrations of sulbactam, which remained above the MIC of SAN-94040 for the entire dosing interval, but only above the MIC of RCH-69 for 43% of the dosing interval [13].
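As an aside for readers less familiar with this PK/PD index, the following Python sketch illustrates how %fT > MIC is computed from a concentration-time profile. It assumes a hypothetical single-dose, one-compartment IV-bolus model, not the study's actual PK model:

```python
import numpy as np

def percent_fT_above_MIC(dose_mg_per_kg, vd_L_per_kg, half_life_h,
                         fu, mic_mg_per_L, tau_h, steps=10_000):
    """Percent of a dosing interval with unbound concentration > MIC
    (single IV bolus; accumulation across doses is ignored)."""
    k = np.log(2) / half_life_h                  # first-order elimination
    t = np.linspace(0, tau_h, steps)
    c0 = dose_mg_per_kg / vd_L_per_kg            # peak concentration
    c_unbound = fu * c0 * np.exp(-k * t)
    return 100.0 * np.mean(c_unbound > mic_mg_per_L)

# e.g. 100 mg/kg, Vd 0.5 L/kg, t1/2 0.5 h, 62% unbound, MIC 8 ug/mL, q3h
print(round(percent_fT_above_MIC(100, 0.5, 0.5, 0.62, 8, 3), 1))  # ~65.9
```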
More recent studies have provided a rigorous assessment of the pharmacokinetic/pharmacodynamic (PK/PD) index most correlated with sulbactam activity and unbound exposure magnitude requirements to achieve PK/PD endpoints of net bacterial stasis and 1-, 2-, and 3-log 10 CFU reductions from baseline in neutropenic murine thigh and lung infection models versus A. baumannii American Type Culture Collection (ATCC) 19606 [14]. This isolate was both sulbactam-and carbapenem-sensitive, with an MIC of 0.5 µg/mL for both sulbactam and imipenem. Dose fractionation was performed in both thigh and lung models over a dose range of 15 to 240 mg/kg administered at 2-, 3-, 4-, 6-, 12-, and 24-hour intervals. Comparison of Hill-type model fits describing the relationship between 24-hour change in log 10 CFU/g burden and PK/PD indices of the time during 24 hours that unbound concentrations remain above the MIC (fT > MIC), the ratio of unbound area under the concentration-time curve over 0 to 24 hours to the MIC (fAUC 0-24 /MIC), and the ratio of unbound maximum concentration to the MIC ( fC max /MIC) demonstrated correlation coefficients (R 2 ) of 0.95, 0.60, and 0.37, respectively. Based on this analysis, the PK/PD index most closely associated with sulbactam activity was fT > MIC. The magnitude of sulbactam fT > MIC associated with achieving net bacterial stasis (no net change in bacterial counts over 24 hours of treatment) and 1-, 2-, and 3-log 10 CFU reductions from baseline is summarized in Table 1. These magnitudes of fT > MIC were largely consistent with imipenem, which was used as a comparator in the study. Slightly higher potency was observed for sulbactam in the lung model compared with the thigh, with approximately 20% fT > MIC for a static effect and more than 60% and more than 40% fT > MIC for bactericidal effects in the thigh and lung, respectively.
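The Hill-type exposure-response fits described above can be illustrated with a short sketch; the (exposure, response) pairs below are hypothetical, not the study data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, e0, emax, ec50, gamma):
    """24-h change in log10 CFU/g as a Hill-type function of %fT > MIC."""
    return e0 - emax * x**gamma / (ec50**gamma + x**gamma)

ft_mic = np.array([0, 10, 20, 30, 40, 60, 80, 100], float)        # %fT > MIC
dlog = np.array([2.1, 1.5, 0.6, -0.2, -1.0, -2.0, -2.6, -2.8])    # hypothetical

popt, _ = curve_fit(hill, ft_mic, dlog, p0=[2, 5, 30, 2],
                    bounds=([-5, 0, 1e-3, 0.1], [5, 10, 200, 10]))
e0, emax, ec50, gamma = popt
stasis = ec50 * (e0 / (emax - e0)) ** (1 / gamma)   # exposure where fit = 0
print(f"EC50 = {ec50:.1f} %fT>MIC; net stasis at ~{stasis:.1f} %fT>MIC")
```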
SULBACTAM PK/PD TARGETS ESTABLISHED FROM IN VITRO INFECTION MODELS
The magnitude of fT > MIC associated with sulbactam efficacy was further investigated in in vitro dynamic model systems simulating the sulbactam component of 2:1 ampicillin:sulbactam human PK exposures in an evaluation of MDR A. baumannii isolates with sulbactam MIC values ranging from 2 to 32 µg/mL [15]. Free-drug concentrations of sulbactam from 3 g ampicillin/sulbactam (2 g/1 g) every 6 hours (q6h) (0.5-hour infusion) and 9 g ampicillin/sulbactam (6 g/3 g) every 8 hours (q8h) (3-hour infusion) were evaluated in a one-compartment in vitro PK/PD model over 24 hours to determine the net change in log 10 CFU/mL from baseline as well as the area under the bactericidal curve (AUBC). Both the 3-g q6h and the 9-g q8h regimens provided exposure consistent with 100% fT > MIC compared with the MDR isolate, ACB 35 (MIC = 2 µg/mL), and resulted in sustained bactericidal activity over the 24 hours of the experiment. The lower dose of 3 g demonstrated minimal efficacy versus ACB 32 (MIC = 32 µg/mL), with an observed fT > MIC of only 5.9%. Nearly a 1-log 10 CFU reduction from baseline was achieved versus ACB 31 and ACB 33 (MIC = 16 µg/mL), with an observed fT > MIC of approximately 29%. Results for the higher 9-g dose regimen were variable but still not effective compared with ACB 32 (fT > MIC of 50.7%), although higher AUBCs were achieved across all the isolates relative to the lower 3-g dose regimen. It was suggested that higher fT > MIC exposure may be needed in vitro to maintain efficacy, or a higher concentration of sulbactam itself to circumvent the higher β-lactamase production associated with the MDR isolates [15]. Additional studies were undertaken using a hollow-fiber in vitro infection model to reconfirm the PK/PD index associated with sulbactam efficacy and the magnitude of such an index required for various levels of bacterial reduction (O'Donnell et al., Unpublished data). High magnitudes of sulbactam fT > MIC associated with net bacterial stasis and 1- and 2-log 10 CFU/mL reductions from baseline were observed in dose-fractionation studies, whereas lower sulbactam
exposure magnitudes of sulbactam fT > MIC associated with these endpoints were observed in in vivo models. Consistent with results from previous studies, however, fT > MIC was identified as the PK/PD index most closely associated with the activity of sulbactam ( Figure 1). The reason for the higher magnitudes of fT > MIC in the hollow-fiber in vitro system is still unknown. However, the lower molecular weight cutoff (5 kD) of the hollow-fiber cartridge may trap and accumulate β-lactamases, which we have confirmed through nitrocefin assay of cartridge contents (J. O'Donnell 2023, unpublished data). This potentially serves as a sink for sulbactam, reducing the amount of unbound drug available to interact with PBPs as well as reducing the targeted concentration of the β-lactam in the system-a phenomenon observed by other investigators [16,17]. To avoid this, further studies were carried out to determine the magnitude of sulbactam fT > MIC associated with efficacy in in vitro chemostat and in vivo neutropenic murine infection models, in which the concentration of β-lactamase does not confound interpretation of study results (O'Donnell et al., Unpublished data).
In vivo studies completed to support the PK/PD understanding of sulbactam and durlobactam were carried out using neutropenic murine thigh and lung infection models and contemporary A. baumannii clinical isolates with relevant resistance determinants spanning a broad range of MIC values (O'Donnell et al., Unpublished data). Initial dose-ranging studies based on mouse PK and sulbactam susceptibility (with and without durlobactam) suggested that a 4:1 dose ratio of sulbactam:durlobactam from 2.5:0.625 mg/kg to 80:20 mg/kg administered every 3 to 6 hours would be sufficient to explore PK/PD relationships for efficacy [18]. Dose titration in a 4:1 ratio of sulbactam:durlobactam was completed versus MDR A. baumannii strain ARC3486 (OXA-66, OXA-72, Temoneria [TEM]-1, acinetobacter-derived cephalosporinase [ADC]-30), which had a sulbactam MIC of 32 µg/mL or greater. In the presence of a 4-µg/mL concentration of durlobactam, the sulbactam MIC versus ARC3486 was reduced to 0.5 µg/mL. Initial tissue burden at the time of treatment was 6.36 log 10 CFU/g in the thigh model and 7.40 log 10 CFU/g in the lung model. In vehicle (no treatment) controls, colony counts grew approximately 2 log 10 CFU from baseline over 24 hours. At the top dose of 80:20 sulbactam:durlobactam q3h, a greater than 2-log 10 CFU/g reduction was achieved over 24 hours in both models, with a clear dose-response observed across the dose range. An additional study was performed in the neutropenic murine thigh infection model utilizing a 15-mg/kg dose of sulbactam q3h across all dose arms and titrating the durlobactam dose from 1.25 to 50 mg/kg in combination. The 15-mg/kg dose was identified in PK studies to be consistent with unbound sulbactam concentrations exceeding an MIC (of sulbactam alone) of 0.5 µg/mL for 50% of the dosing interval, an exposure associated with achieving a 1-log 10 CFU or greater reduction from baseline across the infection models completed in vitro and in vivo. Sulbactam administered alone at a dose of 15 mg/kg q3h was ineffective and nearly equivalent to the vehicle (no treatment) control. This observation was expected, as the MIC of sulbactam alone versus ARC3486 was 32 µg/mL or greater. The addition of durlobactam resulted in a clear dose-dependent reduction from baseline bacterial burden (log 10 CFU/g), with near-maximal activity observed at a dose of 15:5 mg/kg sulbactam:durlobactam administered q3h. Similar studies using a higher fixed dose of sulbactam to cover a higher SUL-DUR MIC of 4 µg/mL were also completed [19]. The MDR A. baumannii strain used in these investigations, ARC5955 (TEM-1, ADC-82, OXA-23, and OXA-66), had a sulbactam MIC of 64 µg/mL, but activity was restored to an MIC of 4 µg/mL with the addition of durlobactam. A dose of 75 mg/kg q3h was evaluated across all dose arms in a neutropenic murine thigh infection model, with increasing doses of durlobactam added from 12.5 to 200 mg/kg. The sulbactam dose of 75 mg/kg was selected to achieve fT ≥ 4 µg/mL for greater than 50% of the dosing interval. Neither sulbactam nor durlobactam administered alone at 75 and 50 mg/kg, respectively, q3h was effective, with 2.5- and 2.2-log 10 CFU/g growth observed over 24 hours. Net bacterial stasis was achieved when 12.5 mg/kg of durlobactam was added to sulbactam (75 mg/kg), and a 1-log 10 CFU reduction from baseline was observed when the durlobactam dose was increased to 50 mg/kg.
Another half-log 10 CFU/g reduction from baseline was achieved when a 200-mg/kg dose of durlobactam was added to sulbactam (75 mg/kg).
DURLOBACTAM PK/PD TARGETS ESTABLISHED FROM A ONE-COMPARTMENT IN VITRO INFECTION MODEL
Having established a PK/PD target of approximately 50% fT > MIC for sulbactam, further in vitro studies were pursued using a one-compartment chemostat model to determine the PK/PD index and magnitude of such an index associated with durlobactam activity (O'Donnell et al., Unpublished data). Several recent publications have investigated the PK/PD of contemporary β-lactamase inhibitors, reporting that the PK/PD index most closely associated with activity was either unbound concentrations exceeding a critical threshold (C T ) or unbound AUC 0-24 /MIC. It has been suggested that the type of inhibition observed biochemically with the inhibitor may provide insight into the optimum PK/PD index associated with its activity [20,21]. The PK/PD of the β-lactamase inhibitor tazobactam, which demonstrates a relatively slow on rate and slow off rate appears to be driven by fT > C T [22]. By contrast, nearly irreversible inhibitors such as vaborbactam [23] and relebactam [24] may benefit from a time-exposure and concentration dependency associated typically with mechanism-based inactivators demonstrating greater affinity and rapid on rates [21]. For these inhibitors, the AUC may be a more relevant parameter associated with the inhibitor-enzyme interaction and target occupancy [25]. Durlobactam inhibition of class D β-lactamases, which are highly prevalent in A. baumannii isolates, has been shown to be particularly potent with k inact /K i values nearly 1000-fold higher than avibactam [18,26] and a partition ratio of nearly 1 against the vast majority of β-lactamases [27]. With these types of covalent interactions, inhibition would be expected to increase over time with exposure as opposed to reaching equilibrium. Based on these observations, one might expect fAUC/MIC to be the PK/PD driver for durlobactam.
Studies in the one-compartment system were carried out using a single MDR A. baumannii isolate, ARC5081, which demonstrated a sulbactam MIC of 16 µg/mL and an MIC of 4 µg/ mL for sulbactam in the presence of 4 µg/mL of durlobactam (O'Donnell et al., Unpublished data). Initial dose-ranging studies with sulbactam administered q6h at clinically equivalent exposures of 2 g administered every 6 hours combined with durlobactam administered q6h with a durlobactam fAUC 0-24 range of 18.5 to 591 µg · hour/mL were performed prior to dose fractionation with durlobactam. Dose fractionation was then completed using durlobactam fAUC 0-24 of 13.9, 55.8, 111, and 222 µg · hour/mL administered every 6, 12, or 24 hours. For all studies, sulbactam and durlobactam were administered into the system via a 3-hour infusion. At sulbactam 2 g q6h, the time above the SUL-DUR MIC of 4 µg/mL exceeded 50% of the dosing interval. The relationships between each of durlobactam AUC 0-24 , C max , and the percentage of time that durlobactam concentrations were above the C T values ranging from 0.5 to 2 µg/mL and the 24-hour change in log CFU/g were evaluated using Hill-type models and nonlinear least-squares regression. Fitting of the data demonstrated that free time above a critical threshold (fT > C T ) of 0.75 µg/mL was most highly correlated to the observed activity of durlobactam when administered in combination with sulbactam. While time-dependent activity is associated with the PK/PD of durlobactam, the half-life of the compound precludes a meaningful analysis of the dose-fractionation data to establish PK/PD targets clinically. When only the q6h and q12h data were considered, however, AUC 0-24 was considered a more informative PK/PD index, with data scattered equally across the Hill-type function and a clear maximum effect observed (O'Donnell et al., Unpublished data). Because a q6h regimen of sulbactam is required clinically to achieve its PK/PD target of 50% fT > MIC, a q6h durlobactam regimen was considered. Based on the Hill-type model fit of fAUC 0-24 versus 24-hour change log 10 CFU/mL and administration of durlobactam q6h, 1-and 2-log 10 CFU reductions from baseline were associated with fAUC 0-24 of 30.5 µg · h/mL and 134 µg · h/mL, respectively, versus ARC5081. Using the modal SUL-DUR MIC of 4 µg/ mL for ARC5081, these exposures correspond to fAUC 0-24 : MIC ratios of approximately 10 and 30 for 1-and 2-log 10 CFU reductions from baseline, respectively. In summary, while the PK/PD of durlobactam was shown to demonstrate time-dependent activity in vitro, fAUC 0-24 /MIC was shown to correlate to activity using the q6h and q12h dosing, likely due to the short half-life of the compound.
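For illustration, fAUC 0-24/MIC can be computed from a sampled concentration-time profile by the trapezoidal rule; the values below are hypothetical, not from these studies:

```python
import numpy as np

t = np.array([0, 1, 2, 3, 4, 6, 8, 12, 18, 24], float)          # time, h
c_total = np.array([0, 18, 30, 32, 22, 11, 6, 2, 0.5, 0.1])      # mg/L
fu, mic = 0.9, 4.0                    # unbound fraction; SUL-DUR MIC, mg/L

f_auc = np.trapz(fu * c_total, t)     # mg*h/L, i.e. ug*h/mL
print(f"fAUC0-24 = {f_auc:.1f} ug*h/mL; fAUC/MIC = {f_auc / mic:.1f}")
```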
SULBACTAM AND DURLOBACTAM PK/PD TARGET MAGNITUDES ESTABLISHED IN VIVO
Pharmacokinetic/pharmacodynamic analyses for both sulbactam and durlobactam were performed utilizing data for multiple A. baumannii isolates evaluated using neutropenic murine thigh and lung infection models. Isolates were selected with the goal to evaluate a broad range of MIC values below and above the projected breakpoint of 4 µg/mL. Although MDR bacteria can exhibit a number of resistance mechanisms, in the case of carbapenem-resistant Acinetobacter, the prevailing resistance mechanism is β-lactamase production. It follows that bacteria with higher MIC values likely have higher expression levels of β-lactamase and, thus, would require higher β-lactamase inhibitor exposure to effectively restore wild-type MIC distribution of the β-lactam. Thus, a direct translational relationship should exist between inhibitor exposure and the MIC. This has been shown in the evaluation of the β-lactamase inhibitor tazobactam used in combination with ceftolozane, for which co-modeling of 7 isolates was performed by normalizing the %fT > C T by the MIC, where the C T was the product of the ceftolozane -tazobactam MIC for each individual isolate multiplied by 0.5 [28]. Thus, pooled and co-modeled data spanning a broad range of MIC were considered together to arrive at a single exposure target directly related to in vitro susceptibility of the isolate.
A sulbactam-sensitive A. baumannii isolate, ARC2058, was used in neutropenic murine thigh and lung infection models to compare the magnitude of sulbactam fT > MIC associated with efficacy in the thigh and lung for sulbactam treatment alone. In the lung model, the mean fT > MIC magnitudes required for a 1-log 10 and 2-log 10 CFU reduction from baseline and the exposure required to reach 80% of maximum activity (EC 80 ) were 37.8%, 50.1%, and 68.5%, respectively. In the thigh model, mean %fT > MIC magnitudes required for a 1-log 10 and 2-log 10 CFU reduction from baseline and the EC 80 were 20.5%, 31.5%, and 47.0%, respectively. These magnitudes were close to exposures associated with efficacy of sulbactam in human clinical studies [10,[29][30][31] when considering human PK parameters for sulbactam [32,33].
Up to 5 recent MDR A. baumannii clinical isolates, for which the β-lactamase genotypes had previously been determined by whole-genome sequencing (ARC3484, ARC3486, ARC5079, ARC5081, and ARC5091), were evaluated in addition to ARC2058 in dose-response studies conducted using neutropenic murine thigh and lung infection models. These studies were carried out to determine the %fT > MIC magnitudes required by sulbactam when administered in the presence of durlobactam (O'Donnell et al., Unpublished data). Co-modeling of the %fT > MIC sulbactam exposure response data across multiple MDR isolates and ARC2058 was performed utilizing the data obtained from both neutropenic thigh and lung models incorporating 4:1 sulbactam:durlobactam administration ( Figure 2). The 24-hour change in CFU/g from the initiation of therapy versus fT > MIC for the pooled dataset of all isolates used in each model was fit to a Hill-type function, and the magnitude of sulbactam fT > MIC targets associated with efficacy are summarized in Table 2. Magnitudes of fT > MIC were higher in the lung model compared with the thigh model, with fT > MIC of 41.2% versus 29.9% for 1-log 10 CFU reduction from baseline and 53.5% versus 38.2% for a 2-log 10 CFU reduction from baseline in the lung and thigh models, respectively.
Co-modeling of the durlobactam PK/PD data across multiple MDR isolates and normalizing the AUC 0-24 by SUL-DUR MIC in the thigh and lung models resulted in Hill-type model fits shown in Figure 3. The SUL-DUR MIC values of these MDR isolates ranged from 1 to 4 µg/mL. Correlations (r 2 ) of 0.86 and 0.91 were observed for thigh and lung models, respectively. For studies incorporating a fixed dose of sulbactam (to keep the fT > MIC of sulbactam consistent throughout the exposure range of durlobactam), a correlation of 0.82 was observed. The magnitudes of durlobactam fAUC/MIC associated with efficacy are summarized in Table 3. Unbound AUC 0-24 /MIC magnitudes were generally consistent in both lung and thigh models as well as in the chemostat model, with fAUC 0-24 /MIC magnitudes of 10 and 30 associated with 1-log 10 and 2-log 10 CFU reductions from baseline, respectively.
SULBACTAM-DURLOBACTAM PK/PD TARGETS FOR EFFICACY
Taken collectively, the in vitro and in vivo data support a 1-log 10 CFU reduction over 24 hours when a sulbactam exposure of 50% fT > MIC and a durlobactam fAUC 0-24 /MIC of 10 were achieved in combination. This level of bactericidal activity has been suggested to correlate with clinical outcome and efficacy in patients with hospital-acquired pneumonia and bacteremia [34-36].
CONCLUSIONS
These data confirm %T > MIC to be the PK/PD index that best describes sulbactam efficacy, with unbound exposures above the MIC for 50% of the dosing interval being associated with a 1-log 10 CFU reduction from baseline based upon data from in vitro dose-fractionation studies. For durlobactam, %T > C T of 0.75 µg/mL was identified as the PK/PD index associated with efficacy against a single MDR isolate using a onecompartment in vitro infection model. Targeting AUC 0-24 / MIC with divided (q6h) dosing, however, was also highly correlated to activity across the MIC range of isolates, with a ratio of approximately 10 being associated with achieving a 1-log 10 CFU reduction from baseline when a sulbactam 50% fT > MIC target associated with this endpoint was also achieved.
Studies completed in vivo also demonstrated that the SUL-DUR combination is effective in treating A. baumannii in neutropenic murine thigh and lung infection models, achieving at least a 1-log 10 CFU reduction from baseline over 24 hours of dosing against all MDR isolates evaluated. The magnitudes of the PK/PD indices associated with efficacy for each agent in each of these infection models were relatively consistent between models. An unbound AUC:MIC ratio of approximately 10 for durlobactam and 50% fT > MIC for sulbactam for a 1-log 10 CFU reduction from baseline were supported by in vivo and in vitro studies and are likely to correlate with clinical efficacy. The results of analyses assessing the probability of attaining these PK/PD targets based on human exposures in the epithelial lining fluid support the proposed clinical dose of 1 g/1 g sulbactam/durlobactam administered via a 3-hour infusion q6h to treat patients with A. baumannii isolates with MIC values of 4 µg/mL or less [37]. Moreover, the most recent in vitro surveillance data demonstrate an SUL-DUR MIC 90 of 2 µg/mL for 5032 ABC isolates, thereby supporting the goal for covering a potentiated MIC of 4 µg/mL [38]. Based on these results, SUL-DUR may represent a potentially compelling treatment option over current standard of care, which includes high-dose ampicillin-sulbactam administered in combination with tetracyclines, polymyxin B, extended infusions of meropenem, or cefiderocol [5].
"year": 2023,
"sha1": "f99eacfb458c1d6a8f323b62828fa8b347390999",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "34e9d7dcd04e177e39f6b5cbced0e85d7045bf97",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Correlations of inhaled NO with the cTnI levels and the plasma clotting factor in rabbits with acute massive pulmonary embolism
Purpose: To investigate the effect of inhaled nitric oxide (NO) on plasma levels of cardiac troponin I (cTnI), von Willebrand factor (vWF), glycoprotein (GP) IIb/IIIa, and granule membrane protein 140 (GMP-140) in rabbits with acute massive pulmonary embolism (PE). Methods: Thirty Japanese white rabbits were divided into 3 groups: thrombi were injected in the model group (n = 10); NO was inhaled for 24 h after massive PE in the NO group (n = 10); and saline was injected in the control group (n = 10). The concentrations of vWF, GP IIb/IIIa, GMP-140 and cTnI were tested at 4, 8, 12, 16, 20, and 24 h, and correlation analyses were conducted between cTnI and vWF, GP IIb/IIIa, and GMP-140 by Pearson's correlation. Results: The concentrations of cTnI, vWF, GP IIb/IIIa, and GMP-140 were increased in the model group compared to the control group. In the inhaled group, the concentrations of cTnI, vWF, GP IIb/IIIa, and GMP-140 were reduced compared to the model group. There was a positive correlation between cTnI and vWF, GP IIb/IIIa, and GMP-140. Conclusion: Inhaled nitric oxide can lead to a decrease in levels of cardiac troponin I, von Willebrand factor, glycoprotein IIb/IIIa, and granule membrane protein 140 after established myocardial damage provoked by acute massive pulmonary embolism.
Introduction
The mortality rate of acute massive pulmonary thromboembolism is high, at about 30% [3,4], particularly in patients with shock or heart failure [1,2], so the risk of mortality should be assessed as early as possible to enable more effective treatment [5-7]. The European Society of Cardiology divides prognostic risk into 3 levels; patients with right ventricular dysfunction and hemodynamic instability are included in the high-risk group [5], and cardiac troponin is increased in pulmonary thromboembolism with heart failure. These factors indicate myocardial damage and poor prognosis but are not well understood [8,9].
Massive pulmonary embolism (MPE) causes vascular endothelial damage and activation of platelets and the coagulation system [10], as well as changes in some coagulation factors [11,12], which can lead to further deterioration of the pulmonary embolism, resulting in severe hypoxia as well as myocardial damage.
Several studies have found that inhaled nitric oxide (NO) is an important treatment in MPE [13,14]: it can reduce pulmonary arterial pressure, reduce neutrophil sequestration, promote lung endothelial integrity, improve lung ventilation/perfusion, increase blood vessel density, and repair vascular endothelial cells to alleviate lung injury [15,16]. In the past 20 years, NO has become one of the most important signaling molecules in the cardiovascular system and is considered a cardioprotective mediator [17].
We therefore hypothesized that inhaled NO, when used to treat MPE, may alleviate myocardial damage through its effects on von Willebrand factor (vWF), glycoprotein (GP) IIb/IIIa, and granule membrane protein 140 (GMP-140).
In this study, a rabbit model of massive pulmonary thromboembolism was established by emboli injection to detect changes in cardiac troponin I (cTnI), vWF, GP IIb/IIIa, and GMP-140. Blood was drawn into a syringe and formed into strip-like clots by slow, firm pressure through the needle into normal saline in a sterile dish; the clots were 1 mm in diameter and were cut into 3-4 mm lengths, which were mixed with normal saline to make suspensions when used. Homemade pulmonary artery catheters and microvascular catheters (5Fr TI; Tyler & Company, Atlanta, GA, USA) were inserted into the right jugular vein and left carotid artery via the anterior chest approach through a transverse incision of 2 cm. Pulmonary artery catheter insertion was monitored by an oscilloscope to control the insertion site, and saline infusion was maintained at 0.3 mL/min with a peristaltic pump. The mean arterial pressure and pulmonary artery pressure were measured synchronously with a multi-channel physiological parameter analysis recorder (MP150, BIOPAC Systems, Inc., Goleta, CA, USA) through a pressure sensor. The left femoral vein was separated and a microcatheter was inserted for rehydration and blood collection. Thrombus suspension (0.5 ml, 3-4 clots) was injected through the right jugular vein every 3 min and flushed with 2-3 ml normal saline until the mean arterial blood pressure decreased to 40% of baseline. Diastolic pressure was maintained at 55-60 mmHg for 40 min, and the model of massive pulmonary thrombosis with myocardial damage was successfully established. Mechanical ventilation was applied during modeling only when the rabbit was in danger (respiratory rate >40 breaths/min or SpO₂ <80%), at a respiratory rate of 28 breaths/min, a pressure of 20 cm H₂O, and 50% O₂. The control group was injected with saline. The inhaled group inhaled NO (20 ppm) for 24 h, starting 2 h after modeling. During the experiment, mechanical ventilation in all three groups was performed with a SERVO-i baby ventilator (Rontgenvagen 2, SE-17154 Solna, Maquet Critical Care AB Company, Solna, Sweden). Two rabbits died during modeling.
Method of NO inhalation (NOI)
Following successful modeling, mechanical ventilation was initiated when mPAP reached the 40% threshold and the rabbit exhibited breathlessness or breathing difficulties for 2 h. The inhaled gas consisted of decompressed NO (800 × 10⁻⁶ g/L) blended with nitrogen (N₂). When the NO concentration reached 20 ppm, the ventilator circuit was connected to the NOI line for invasive mechanical ventilation (pressure, 20 cm water column; respiratory rate, 28 breaths/min; oxygen concentration with mechanical ventilation, 50%). The concentrations of NO and nitrogen dioxide (NO₂) were continuously monitored using a nitrogen oxide analyzer (Thermo Fisher Scientific, Inc., Waltham, MA, USA), and the methemoglobin level was monitored throughout and did not exceed 0.3 g/L.
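As a rough aside (our back-of-envelope arithmetic, not from the paper; assuming ideal-gas behavior at 25°C and 1 atm and an NO molar mass of 30 g/mol), the stock concentration can be converted to ppm(v) and the blend fraction needed for a 20-ppm mixture estimated:

```python
stock_g_per_L = 800e-6       # 800 x 10^-6 g/L stock NO concentration
molar_mass = 30.0            # g/mol for NO
molar_volume = 24.45         # L/mol of ideal gas at 25 C, 1 atm (assumed)

ppm_stock = stock_g_per_L / molar_mass * molar_volume * 1e6  # mole fraction in ppm
blend_fraction = 20.0 / ppm_stock     # stock flow / total flow for 20 ppm
print(f"stock ~ {ppm_stock:.0f} ppm(v); blend fraction ~ {blend_fraction:.3f}")
```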
Pulmonary perfusion and pathology of lung and heart
After anesthesia and fixation on the operating platform, the abdominal aorta of the rabbit was flushed with heparinized saline until the liver became white, and was then injected with acetone until the color of the lung changed. Pulmonary perfusion was conducted with an ABS perfusion agent at 18-25 mmHg using a BIOPAC pressure conditioner. The pulmonary artery was imaged with a MicroCT X-ray 3D system for laboratory use (A00001514J, PerkinElmer, Waltham, MA, USA) (Figure 1A, B), and both the lung and heart were sampled for pathology.
Detection of cTnI, vWF, GP IIb/IIIa, and GMP-140
cTnI was detected by the microparticle chemiluminescence method using an automatic immunoassay analyzer; vWF and GMP-140 were tested by enzyme-linked immunosorbent assay (Shanghai Sun Biotechnology Company, Shanghai, China); and GP IIb/IIIa was examined by flow cytometry. The monoclonal antibody was supplied by BD Biosciences (San Jose, CA, USA). Blood samples were collected before embolization and at 2, 4, 8, 12, 16, 20, and 24 h, while mean arterial pressure and mean pulmonary artery pressure were monitored.
Statistical analysis
All data were processed with SPSS 20.0 (SPSS, Inc., Chicago, IL, USA). Missing data were excluded from the statistical analysis. Normally distributed variables were expressed as mean ± standard deviation (x̄ ± s), and one-way analysis of variance followed by the q test was used for comparisons. Normality was assessed with the Kolmogorov-Smirnov (K-S) test; variables with p > 0.05 on the K-S test were considered normally distributed. P < 0.05 was considered statistically significant.
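For illustration, the normality check and group comparison described above can be reproduced with standard tools. The sketch below uses SciPy with made-up arrays; the post-hoc q test (Student-Newman-Keuls) is not shown because it is not part of SciPy.

```python
import numpy as np
from scipy import stats

# Illustrative plasma measurements for three groups at one time point.
control = np.array([0.05, 0.06, 0.04, 0.05, 0.07])
model   = np.array([0.40, 0.46, 0.52, 0.44, 0.48])
inhaled = np.array([0.25, 0.30, 0.28, 0.27, 0.31])

# Kolmogorov-Smirnov test against the standard normal after standardizing;
# p > 0.05 is taken as compatible with normality, as in the text.
for name, g in [("control", control), ("model", model), ("inhaled", inhaled)]:
    z = (g - g.mean()) / g.std(ddof=1)
    ks_stat, p = stats.kstest(z, "norm")
    print(f"{name}: K-S p = {p:.3f} ({'normal' if p > 0.05 else 'non-normal'})")

# One-way ANOVA across the three groups; P < 0.05 is considered significant.
f_stat, p_anova = stats.f_oneway(control, model, inhaled)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
```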
Pathological results
Lung embolization is shown in Figure 1C and the heart in Figure 1D, E. The right ventricular wall of the model group was thinner than that of the control and inhaled groups, and myocardial necrosis and intracardiac embolism were more extensive in the model group than in the inhaled group (Figure 1F, G).
Changes in plasma cardiac troponin concentration
In acute massive pulmonary thromboembolism, the plasma cardiac troponin concentration began to rise at 2 h, was clearly elevated at 4 h, and peaked at 16 h (0.46 ± 0.10 µg/L). These values were significantly higher than those in the inhaled group at each time point (P < 0.05) and markedly higher than those in the control group from 4 to 20 h (P < 0.05) (Table 1).
Change in plasma vWF
Plasma vWF increased from 2 h and peaked at 8 h, remaining higher than in the control group from 2 to 16 h (P < 0.05); compared with the model group, vWF was significantly lower in the inhaled group at 4 and 8 h (P < 0.05) (Table 2).
Change in plasma GP IIb/IIIa
The plasma GP IIb/IIIa concentration in acute massive pulmonary thromboembolism increased at 2 h and peaked at 16 h, remaining significantly higher than in the control group at each time point (P < 0.05); compared with the model group, it was significantly lower in the inhaled group at 16 h (P < 0.05) (Table 2).
Change in plasma GMP-140
The plasma GMP-140 concentration in acute massive pulmonary thromboembolism increased at 2 h and peaked at 16 h, remaining significantly higher than in the control group at each time point (P < 0.05); it was also markedly higher than in the inhaled NO group from 4 to 20 h (P < 0.05) (Table 2).
■ Discussion
It has been found that increased cardiac troponin during pulmonary thromboembolism is associated with higher mortality 9. In this study, cardiac troponin levels increased and reached a peak concentration of 0.46 ± 0.10 µg/L, while plasma vWF, GP IIb/IIIa, and GMP-140 increased in rabbits after massive pulmonary thromboembolism; the peak concentration of plasma cardiac troponin was significantly and positively correlated with those of vWF, GP IIb/IIIa, and GMP-140. The increased vWF indicates that platelet adhesion increased and the pulmonary vascular endothelium was impaired 10,11, which activates platelets exposed to the damaged vessel wall via vWF 12,19, aggravating thrombosis and further increasing vWF 20,21. GP IIb/IIIa is the main glycoprotein of the platelet membrane 22 and mediates the binding of platelets to fibrinogen; it can be activated by collagen following the endothelial cell damage caused by pulmonary thromboembolism, hypoxia, and pulmonary hypertension 22.
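The peak-concentration correlations noted at the start of this discussion (and plotted in Figure 2) correspond to a simple Pearson analysis, sketched below with hypothetical values rather than the study's data.

```python
import numpy as np
from scipy import stats

ctni_peak = np.array([0.35, 0.42, 0.46, 0.50, 0.55])      # µg/L, hypothetical
vwf_peak = np.array([110.0, 125.0, 140.0, 150.0, 165.0])  # ng/mL, hypothetical

r, p = stats.pearsonr(ctni_peak, vwf_peak)
print(f"cTnI vs vWF peaks: r = {r:.2f}, p = {p:.4f}")  # positive r = positive correlation
```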
GMP-140 is found in the rod-shaped granules of endothelial cells and in the α-granules of resting platelets; it is a component of the receptors for vWF, thrombin, collagen, and fibrinogen (Fg), among others, and is important in platelet activation, adhesion, and aggregation 23,24. In pulmonary thromboembolism, reactive oxygen species and inflammatory factors stimulate endothelial cells, inducing GMP-140 expression on the cell surface through fusion of the granular membrane of the platelet α-granule and the serous membrane of endothelial cells, activating platelets to form clots [25][26][27]. The activated platelets undergo an irreversible activation reaction with the infarction of pulmonary tissue, and more GMP-140 is released, causing further thrombosis 12,28. Increased levels of vWF, GP IIb/IIIa, and GMP-140 lead to the formation of a local thrombosis-inflammation network in lung and cardiac tissue and to clot formation, as well as systemic ischemia and hypoxia, which are related to endothelial cell damage. These events cause myocardial damage, as confirmed in this study.
NO inhalation significantly reduced vWF, GP IIb/IIIa, GMP-140, and cardiac troponin levels. The mechanism may involve inhibition of protein kinase C, downregulating the expression of platelet GMP-140 29 and vWF. Additionally, inhibition of fibrinogen binding to the GP IIb/IIIa receptor by NO, which activates soluble guanylate cyclase in platelets, may also reduce platelet aggregation and GMP-140 expression. Thus, NO inhalation can inhibit platelet aggregation and reduce myocardial damage in pulmonary thromboembolism.
■ Conclusion
The increased levels of vWF, GP IIb/IIIa, and GMP-140 may have played an important role in the myocardial damage, and inhalation of NO reduced the levels of vWF and GMP-140 and inhibited the expression of GP IIb/IIIa on the platelet membrane; all of these changes alleviated myocardial damage in massive pulmonary thromboembolism.
Figure 1 - Pulmonary embolism image and pathology. A, B: Blocked arteries imaged with the laboratory MicroCT X-ray 3D system; C: Emboli; D: Right ventricle after NO inhalation; E: Right ventricle before NO inhalation; F: Pathological changes of myocardial necrosis before NO inhalation; G: Pathological changes of myocardial necrosis after NO inhalation.
Figure 2 - Correlation between cTnI peak concentration and plasma coagulation system factors. A: Correlation between plasma cTnI peak concentration and GP IIb/IIIa; B: Correlation between plasma cTnI peak concentration and GMP-140; C: Correlation between plasma cTnI peak concentration and vWF.
Table 1 - Comparison of plasma cTnI (x̄ ± s, µg/mL); comparison between the control group and model group.
Table 2 - Comparison of plasma vWF (ng/mL), GP IIb/IIIa (%), and GMP-140 (%) levels (x̄ ± s).
Note: T1M-NO/P1M-NO: comparison of vWF between the model group and NO group; T2M-NO/P2M-NO: comparison of GP IIb/IIIa between the model group and NO group; T3M-NO/P3M-NO: comparison of GMP-140 between the model group and NO group; *p<0.05, ***p<0.01 | 2018-09-16T08:12:25.230Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "4385dd2f4732a0304c1f39eea971c9bd8d912ec2",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/acb/a/SwPj7XjfRrkMrGp4sFTSyjN/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "df87b8bf57557b104607f931cd4dc246005380f1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
24675075 | pes2o/s2orc | v3-fos-license | Ebola Viral Hemorrhagic Disease Outbreak in West Africa-Lessons from Uganda
Background: There has been a rapid spread of Ebola Viral Hemorrhagic disease in Guinea, Liberia and Sierra Leone since March 2014. Since this is the first major Ebola outbreak in West Africa, there may be a lack of understanding of the epidemic in the communities, a lack of experience among health workers to manage the cases, and limited capacity for rapid response. The main objective of this article is to share Uganda's experience in controlling similar Ebola outbreaks and to suggest some lessons that could inform the control of the Ebola outbreak in West Africa. Methods: The article is based on published papers, reports of previous Ebola outbreaks, response plans and the experiences of individuals who have participated in the control of Ebola epidemics in Uganda. Lessons learnt: The success in the control of Ebola epidemics in Uganda has been due to high political support and effective coordination through national and district task forces. In addition, there has been active surveillance, strong community mobilization using village health teams and other community resource persons, and an efficient laboratory system with the capacity to provide timely results. These have been coupled with effective case management and infection control, and the involvement of development partners who commit resources with shared responsibility. Conclusion: Several factors have contributed to the successful quick containment of Ebola outbreaks in Uganda. West African countries experiencing Ebola outbreaks could draw some lessons from the Uganda experience and adapt them to contain the Ebola epidemic.
Background
On 23rd March 2014, the Ministry of Health of Guinea notified the World Health Organisation of the outbreak of Ebola Virus Disease 1. The cases of Ebola were initially reported from the southeastern districts of Gueckedou, Macenta, Nzerekore, and Kissidougou in the forest region, and in Conakry, the capital city. The tests on samples taken from suspected patients were positive for Ebola Zaire, thus confirming the first Ebola outbreak in Guinea 1.
A retrospective epidemiologic investigation indicated that the first suspected case in the outbreak 2 was a 2-year-old child from Meliandou, Gueckedou, who died on 6 December 2013. Between December 2013 and March 2014, the outbreak spread from Gueckedou to Macenta, Kissidougou, and Conakry, the capital city 2,3. The Ebola outbreak then spread to the neighbouring countries of Liberia and Sierra Leone. The Ministry of Health and Social Welfare of Liberia formally declared an Ebola outbreak on 29th March 2014, while Sierra Leone notified the first case of Ebola to the WHO on 25 May 2014 4,5. As of 17th July 2014, the cumulative number of cases attributed to Ebola in the three countries was 1,048, with 632 deaths (case fatality rate of 60%) 6. The transmission of Ebola in W. Africa has been characterized by the following four major patterns 6:
• Transmission dominantly in the rural communities
• Transmission mainly in the densely populated peri-urban cities of Conakry and Monrovia
• Transmission along the shared border areas of Guinea, Liberia and Sierra Leone.
• Persistent transmission in health care settings, with several health workers contracting and dying from Ebola virus disease.
In this paper, we share Uganda's experiences in controlling similar Ebola outbreaks, and suggest some best practices that could inform the control of the Ebola outbreak in West Africa, and perhaps address the different challenges being faced in its containment.
Methods: The article is based on a review of published papers and updates on the Ebola outbreak in West Africa, published papers about previous Viral Hemorrhagic Fever outbreaks in Uganda, reports of previous Ebola outbreaks, response plans, and the experiences of individuals who have participated in the control of Ebola epidemics in Uganda. The authors were all actively involved in the response to Viral Hemorrhagic Fever outbreaks in Uganda.
The Major Challenges of the Ebola Outbreak in W. Africa:
• There is a lack of understanding of the epidemic in the communities ("there is no such thing as Ebola," says the community), a lack of experience among health workers to manage the cases, and limited capacity for rapid response, as this is the first Ebola outbreak in West Africa.
• There is high exposure to the Ebola virus in the communities through household care and customary burial procedures, e.g., the use of bare hands to wash dead bodies. This has resulted in a high number of deaths in the communities, leading to panic and anxiety. There are misconceptions about the new disease, and this has led to denial, mistrust, hostility to health workers (especially foreign health workers), and rejection of public health interventions. There are beliefs and myths that impede acceptance of messages about treatment; for example, communities believe hospitalization is a death sentence.
• Another challenge relates to the fact that by the time the outbreak was finally confirmed and declared in Guinea on 21 March 2014, several clusters of cases, suspected to have started several months earlier, had been identified in hospitals in Gueckedou and Macenta 2,7. Due to the delayed detection of Ebola virus disease in Guinea, transmission chains accumulated, leading to an increased number of Ebola cases dispersed over wide geographical areas 2,7. It therefore became an immense and complicated challenge for teams of public health workers with no previous experience in responding to Ebola virus disease outbreaks to trace all these chains of transmission dispersed over wide geographical locations.
• Emergency response relies a great deal on the capacity of the prevailing health systems before the disaster strikes. It is now well established that, just like the other countries where Ebola has caused outbreaks in sub-Saharan Africa in the past, the three countries affected by the current Ebola outbreak in West Africa have weak health systems, with acute shortages of the much-needed human resources and a lack of in-country capacity to confirm Filoviruses such as Ebola 7,8. The delayed detection of cases at the onset of outbreaks is a major indicator of the level of functionality of the health system and the corresponding surveillance systems for detecting priority diseases.
• The persistent number of deaths among health workers shows that the three countries lacked preparedness for Ebola virus disease outbreaks. Consequently, healthcare workers lacked training on infection control precautions, including barrier nursing, that are requisite for healthcare worker safety so that the much-needed patient care can be provided 8.
• Current information from the outbreaks indicates that Ebola cases were either missed by active case finding teams or were actually withdrawn from isolation facilities by relatives even after they were confirmed to have Ebola 8. This breach of infection control and isolation recommendations can only worsen outbreak trends, as it actually did in West Africa. It has been revealed that the decision by relatives to withdraw patients confirmed to have Ebola from isolation facilities was prompted by the decision to transfer cases to distant hospitals 8. Transfer of Ebola patients over long distances is never a good idea, due to the risk of spreading the outbreak to new locations and the risk of losing the much-needed cooperation of the affected communities. The fact that response teams were eventually turned away from affected communities shows that this trust and confidence had been breached.
Uganda's Experience in Controlling Ebola and Other Epidemics
Over the last 15 years, Uganda has had several viral hemorrhagic fever (VHF) outbreaks 9-21. The first and biggest Ebola outbreak in Uganda occurred in 2000, with cases reported in the three districts of Gulu, Masindi, and Mbarara 9. In subsequent years, outbreaks were reported in Bundibugyo, Luwero, Kibaale, and Luwero districts in 2007, 2011, 2012 and 2012, respectively 10-13. Marburg virus disease outbreaks were also reported in Ibanda and Kamwenge in 2007, involving cases in the Netherlands and United States, respectively, that were exposed after visiting caves in Bushenyi district, and in 2012, with cases confirmed in Ibanda and Kabale districts 13,14. A major outbreak of Yellow Fever was reported in five districts in Northern Uganda in 2010, with 181 cases including 45 deaths 10.
In virtually all of the above, the disease outbreaks were controlled with limited spread of the epidemics much beyond the initial loci of the outbreak. The case fatality rates for Ebola were: Gulu, 52.7%; Bundibugyo, 32%; Luwero 2011, 100% (1/1); Kibaale, 70.8%; and Luwero, 57.1% 15. Thus Uganda has accumulated enough experience that can be shared with other countries that face the risk of these epidemics. Specifically, the following strategies were used:
• Uganda has standing multi-sectoral and multidisciplinary task force committees on epidemics that include partners and NGOs at the national and district level. The national task force is composed of experts (epidemiologists, laboratory scientists, communication experts, psychiatrists and psychologists, physicians, veterinarians, etc.) from the Ministry of Health, Ministry of Agriculture, Office of the Prime Minister, and partners including WHO, CDC, UNICEF, AFENET, Uganda Red Cross, and MSF; it meets monthly, but daily when there is an epidemic. Similarly, all districts have task forces composed of the district political, civic, and health leadership as well as technical advisors from different partners working in the districts. Both the national and district task forces have subcommittees responsible for overseeing and implementing different components of epidemic response; the subcommittees include coordination, epidemiology and laboratory, case management, social mobilization, logistics, and psychosocial support, and all have clear terms of reference. In addition, national and district rapid response teams are constituted immediately after an epidemic is notified. They conduct investigations and support the establishment of an appropriate response in collaboration with the task forces. To reinforce response coordination at the district level, two or three senior officers are deployed from the national level to the district level to work with the district task force. There is daily contact between the district and national task forces during an epidemic. The Ministers of Health and other senior officers in the Ministry of Health and WHO visit the affected areas to explain government actions, seek the community's cooperation, and motivate the response teams. During inter-epidemic periods, monthly task force meetings are convened to review disease surveillance data and update epidemic preparedness and response plans. There is constant communication between district surveillance officers and the national level, especially the division of disease surveillance in the Ministry of Health.
• Uganda was one of the first countries in WHO-AFRO to adopt the integrated disease surveillance and response (IDSR) strategy in 2000. Over the years, the national surveillance system has been strengthened based on this strategy. The designated surveillance focal points at health sub-district, district, and regional levels have been trained and facilitated to ensure timely detection, reporting, and investigation of priority diseases. These have been linked to the HMIS system for reporting public health events of national and/or international concern. The village health teams (VHTs), upon hearing of or detecting any reportable priority disease, report to the nearest health facility. These are important links between the facility-based surveillance system and the community. The surveillance system is further enhanced during epidemics to enable daily tracing of all contacts of viral hemorrhagic fever patients and immediate investigation and isolation of suspected cases. This ensures that transmission within the community is controlled quickly. Uganda has recently established an emergency operations centre, supported by CDC, which provides timely data and maps of epidemics.
• Uganda has over time built a good laboratory network within districts, regional referral hospitals, and at the national level. Samples for the detection of Ebola are analyzed in Entebbe at the Uganda Virus Research Institute (UVRI). Uganda has also built capacity in specimen collection, processing, packaging and storage, and an efficient specimen shipment mechanism.
• Uganda has developed local capacity for social mobilization during epidemics. We engage political, local, opinion and religious leaders through meetings, radio messages and community meetings. Effort is made to first understand the knowledge, attitudes and beliefs of the affected people, and this informs the social mobilization efforts. We work with partners including UNICEF and the Uganda Red Cross in this endeavor.
•Uganda has developed local experience in case management, infection prevention and control (IPC).
We have trained many health workers on Viral Haemorrhagic Fever (VHF) case management and other epidemics over the years. We have training guidelines that have been updated over time. The health workers are deployed to manage cases in collaboration with partners like MSF during epidemics. Isolation facilities are set up in health facilities close to the affected areas to minimize transfer of patients over long distances. A list of trained and experienced health workers and their contacts is maintained at the national level for quick contact when health workers are needed in districts outside their usual work place.
• During VHF epidemics, psychosocial support to affected patients, their attendants, and communities has been adopted as a practice. A team of psychosocial support providers engages the affected communities to understand their misconceptions and misgivings, takes time to address them, and discusses socially accepted solutions, which are then adopted by the response team. The psychosocial team also counsels patients and their immediate families about the natural progression of the disease and the expected outcomes, and keeps the immediate family briefed on the situation of the patient in the isolation facility. In addition, the team provides psychosocial support to the health workers in the isolation facility to avoid burnout and depression.
• To further prevent transmission within the affected communities, the affected homes are disinfected and the contaminated personal effects of the patients are disinfected and destroyed. The personal effects that patients bring to the isolation facility are also disinfected and destroyed at discharge. To replace the destroyed personal effects, a "discharge package" is given to the patient on discharge and to the family of deceased patients. This enhances the cooperation of families with sick patients who need to be taken to the isolation facility.
•Close supervision of the response by the National Task Force, the top leadership of the Ministry of Health, and the top Government leadership.
• Uganda has been working with neighboring countries to strengthen cross-border surveillance and management of epidemics. This should be strengthened among the W. African countries.
Challenges Uganda has Met in Controlling Ebola Viral Hemorrhagic Disease
• The delayed detection of viral hemorrhagic fever outbreaks, as seen in the West African Ebola outbreak, has been a challenge in most of the Ebola outbreaks in Uganda. This challenge is attributed to the non-specific presentation of cases at the onset of Ebola virus disease outbreaks. To improve the detection of these atypical disease outbreaks, the Uganda Ministry of Health has initiated the following interventions: standardized clinical and community case definitions for all priority diseases, including mysterious cases/deaths, have been developed and disseminated to all health facilities and VHTs. In addition, village health teams (VHTs) have been trained to use the community case definitions to detect and report emerging disease outbreaks or mysterious diseases to the nearest health facilities for further verification. Also, as part of the nationwide drive by the Ministry of Health to strengthen the national disease surveillance system, district health teams and rapid response teams are being trained to enhance the implementation of the integrated disease surveillance and response strategy at all levels.
• Before the Ebola outbreak in Bundibugyo in 2007, Uganda lacked in-country capacity for laboratory confirmation of Ebola and other Filoviruses. This led to major delays of up to four weeks in securing laboratory confirmation. With support from the Centers for Disease Control and Prevention, a P3+ laboratory has been established at the UVRI, and this has reduced the turnaround time to less than 24 hours. This has been a major boost for response and case management teams to roll out timely outbreak response interventions following the confirmation of new Filovirus outbreaks.
• Dealing with Ebola cases that are dispersed over a wide geographic expanse is never an easy undertaking. One strategy that has worked for Uganda is a decentralized response, with outbreak coordination committees formed in the respective districts affected by the outbreak. In addition, the decentralized district outbreak response teams have onsite support from senior technical officers from the national task force with experience in managing all the technical interventions for containing an Ebola outbreak. There is also remote support for the district outbreak response teams from the national task force through regular telephone conference calls, sharing of situation reports, and sharing of minutes of the task force meetings.
• The need for strong and effective health systems is paramount for effective disease outbreak response, as is the need for adequate numbers of well-trained health care workers with adequate medicines, infection control supplies, infrastructure, and equipment for case referral and isolation. Thus, Uganda has developed a roster of well-trained health workers with experience in managing Filovirus disease cases. These experts have been deployed to support the response to Ebola outbreaks in Uganda and in all three countries currently affected by Ebola in West Africa. Additionally, the Uganda government has embarked on a phased plan of setting up isolation facilities at all major hospitals. During the first phase of this program, a state-of-the-art isolation facility has been built in Entebbe with support from the World Bank. To enhance capacity for case detection and management of events due to especially dangerous pathogens (EDPs) like Filoviruses, the Uganda Ministry of Health, with support from the Defense Threat Reduction Agency (DTRA), is conducting trainings for healthcare workers countrywide.
• Involving local leaders is a crucial component of ensuring community compliance with the recommended public health interventions for Ebola control. Thus, district outbreak response teams are by default chaired by elected local leaders who have the full mandate of the affected communities. In addition, the Government of Uganda has rolled out a community network of Village Health Team members (VHTs). The VHTs are community health workers selected by the community to support the implementation of public health interventions at the community level. The VHTs have proved to be a vital resource for supporting outbreak response activities. During past Ebola outbreaks, VHTs have been used to conduct door-to-door active case search, contact tracing and follow-up, and house-to-house health education on Ebola prevention and control. Since VHTs work in the very communities that elected them, they are more accepted, and hence mistrust and rejection of response teams has not been reported during the recent Filovirus outbreaks in Uganda.
• A unique challenge that we have not had to deal with in the past, but which is an issue during the current Ebola outbreaks in West Africa, is containing transmission in closely knit cross-border communities. The newly established WHO centre for coordinating the response to the outbreaks in the three countries should come up with a strategy for enhancing cross-border communications to facilitate follow-up of contacts, case isolation, and educating communities on Ebola prevention and control.
Uganda's response to the Ebola outbreak in W. Africa
On 8th April 2014, the Ministry of Health, through its Public Emergency Operations Centre, informed the districts of the progress of the Ebola outbreak in West Africa and urged health workers to have a high index of suspicion, especially for travelers to and from West Africa.
• The Ministry of Health issued a public press release (4th July 2014) advising the public on the necessary precautions. All district task forces were alerted.
•The Ministry has instituted measures for screening and follow-up of travelers from W. Africa at the Entebbe international airport.
• The National Task Force is operational, and the Rapid Response Teams (RRTs) are on full alert and ready to take action when necessary.
Lessons for West Africa
Based on the above experience from Uganda, successful containment of VHF epidemics will depend on effective epidemic coordination at national and sub-national levels in the affected West African countries. Functional national and sub-national epidemic management committees and response teams that sit daily (to coordinate, review challenges, and come up with workable solutions) should be established and/or strengthened. In addition, the W. African countries should strengthen active surveillance and involve local community health workers and volunteers in active surveillance and contact tracing to ensure that all suspected cases are detected in a timely manner and removed from the community. Local capacity for case management should be urgently built, and case management centres established close to the epidemic hot spots to minimize the transportation of Ebola patients over long distances. This strategy increases the willingness of the community to bring patients to isolation facilities. Opinion leaders from affected communities should be engaged to participate in mobilizing their own people to cooperate with the response teams. Psychosocial support should be urgently stepped up, targeting the affected communities and families as well as the patients. Ebola response activities should be adapted where necessary to ensure they are more culturally acceptable for the affected communities without compromising infection prevention and control. Specifically, the following recommendations are made:
• Local leaders should take centre stage in directing the Ebola response at the local level, guided by the national and international experts that constitute the outbreak response committees.
•Natives from the affected communities should be trained to conduct house-to-house case search, contact tracing and follow up, and house-to-house health education on Ebola prevention and control.
•Ebola cases should be managed in isolation facilities that are located within the affected communities.
• Health systems should be strengthened through: establishing in-country capacity for laboratory confirmation of Ebola; conducting preparedness trainings on detecting and managing EDPs like Filoviruses; enhancing IDSR capacities at all levels; and establishing a community network of community health workers.
• Develop a strategy for enhancing cross-border communications to facilitate follow-up of contacts, case isolation, and educating communities on Ebola prevention and control.
• All political, religious and cultural leaders should pass on messages about Ebola prevention and the need to report suspected cases. The number of anthropologists and communication experts should be increased (to a ratio of 1:50,000 population). All feasible communication channels should be used.
• The affected countries should vigorously focus on critical sources of infection, which include home-based care of patients and burial of affected patients. An ambulance system should collect suspected patients from the community and take them to the designated isolation health facility as soon as they are reported by community resource persons. The national response team should train local burial teams, who should bury victims. The house or rooms of the dead should be thoroughly disinfected. All bedding, mattresses and clothes used by the dead should be burned. If culturally needed, some family members, dressed in appropriate protective wear, should be allowed to participate in or view the preparation of their deceased relatives for burial. Strong messages should be passed to the communities by the political leaders, preferably by the Presidents, and by religious and cultural leaders on the dangers of getting in contact with patients suspected of Ebola and of participating in the burial of patients without appropriate personal protective equipment.
• The local administration entities should consider the development of by-laws and tough punitive measures that deter community members from hiding patients, where necessary.
Conclusion
Ebola epidemics have been experienced by several countries in Africa, and Uganda has experienced several of them. Several factors highlighted above have been key to the quick containment of Ebola outbreaks in Uganda. West African countries currently experiencing Ebola outbreaks should quickly draw some lessons from the Uganda experience and adapt and adopt them so as to contain the Ebola epidemic. The morbidity and mortality due to the outbreak could be minimized by adopting some of the drivers of success in the Ugandan experience above. | 2018-04-03T04:36:22.035Z | 2014-09-01T00:00:00.000 | {
"year": 2014,
"sha1": "37c8c3e0231ee0af9c01aa2f94622dff2d2933d6",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/ahs/article/download/107213/97454",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "37c8c3e0231ee0af9c01aa2f94622dff2d2933d6",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56394190 | pes2o/s2orc | v3-fos-license | DRIS FORMULAS FOR EVALUATION OF NUTRITIONAL STATUS OF CUPUAÇU TREES
DRIS, the Diagnosis and Recommendation Integrated System, is a tool to evaluate the nutritional status of plants. Different DRIS formulas have been proposed to improve the efficiency of crop nutrition diagnoses. The objective of this study was to compare the nutritional diagnoses of the formulas of Beaufils (1973), Jones (1981) and Elwali and Gascho (1984), based on their degree of agreement in commercial orchards of Theobroma grandiflorum trees. Leaf samples of 5 to 18 year-old cupuaçu trees were collected from 153 commercial orchards in agroforestry and monoculture systems in the state of Rondonia, Brazil. Bivariate relationships between nutrient concentrations in healthy trees were used to calculate DRIS norms. DRIS indices were calculated based on the different formulas and interpreted by the Potential Fertilizer Response method, in five categories. The DRIS norms, DRIS index calculations and their interpretations were developed using the DRIS Cupuaçu computer program (www.dris.com.br). The different DRIS formulas resulted in similar diagnoses, with a degree of agreement of > 90% for the nutrients N, P, K, Ca, and Mg.
INTRODUCTION
Monitoring the nutritional status of fruit trees based on chemical analysis of the leaves has become an essential practice, underlying a more precisely adapted and financially balanced fertilization (Mourão Filho, 2004). The DRIS method of evaluating plant nutritional status is considered an effective tool for nutritional diagnosis in Brazil, but is less commonly used than in other countries (Nachtigall & Dechen, 2007) because it is relatively complicated compared to traditional methods such as the critical level and sufficiency range (Prado, 2008). Nevertheless, it is promising for the nutritional diagnosis of perennial crops, for which calibration tests are generally time-consuming and costly.
The DRIS method has advantages over traditional methods because it is based on nutritional ratios instead of the average level of each nutrient, eliminating dilution and concentration effects that are not dealt with adequately by traditional methods (Wadt, 2009). In addition, variations in the ratios are considered, which allows the development of standardized indices; these are less onerous because they do not require extensive local calibration tests.
The application of this method to fruit trees of different species, e.g., apple, mango and citrus, has been promising (Nachtigall & Dechen, 2007; Wadt et al., 2007; Santana et al., 2008). However, for cupuaçu (Theobroma grandiflorum), a fruit tree species widely grown in the Amazon region, there is little information about mineral nutrition, and the few studies available address only a limited number of nutrients in fertilizer tests (Alfaia et al., 2004; Ayres & Alfaia, 2007). The effect of fertilizer application in commercial orchards is therefore unknown to date.
For the DRIS, different methods of calculating the functions, based on different DRIS formulas, have been proposed with a view to improving the efficiency of the system. The formula developed by Beaufils (1973) has two distinct expressions, depending on the value of the ratio in the leaf sample compared to the respective DRIS norm. The formula developed by Jones (1981) consists of the standardization of all DRIS functions. The purpose of the formula developed by Elwali & Gascho (1984) is to nullify the nutritional deviation when its value is below the standard deviation of the respective norm.
Several studies have evaluated the efficiency of DRIS formulas. Nachtigall & Dechen (2007) concluded that the formula developed by Elwali & Gascho (1984) is more efficient than that of Beaufils (1973) or Jones (1981) for apple orchards. On the other hand, Silveira et al. (2005) identified the formula proposed by Jones (1981) as more efficient than that of Beaufils (1973) or Elwali & Gascho (1984) for signal grass (Brachiaria). These differing conclusions are often related to the type of approach used to evaluate the efficiency of the different formulas, e.g., the frequency of the most limiting nutrient or the correlation between leaf nutrient levels, without necessarily comparing the diagnostic interpretations with each other.
Thus, the objective of this study was to evaluate the performance of different DRIS formulas in diagnosing the nutritional status of cupuaçu grown in the southeastern Amazon, with a view to identifying and recommending the best method for mineral nutrition studies and the monitoring of commercial orchards of the species.
MATERIALS AND METHODS
Cupuaçu leaves were sampled from 5 to 18 year-old trees in commercial orchards, grown either in monoculture or agroforestry systems, in Nova Califórnia, Porto Velho, in the far west of the state of Rondonia, Brazil. The climate is rainy wet tropical, Am (Köppen), with an annual average temperature of 26 °C and average rainfall of 2100 mm year⁻¹ (Silva, 2000). The predominant soil types are Latosol, Ultosol, Plintosol, and Cambisol.
Sampling was carried out at the end of August and beginning of September 2008, immediately before the beginning of the rainy season, in 153 selected orchards, 111 of which used an agroforestry system and 42 a monoculture system. From each orchard, 30 leaves were collected from 10 to 15 randomly distributed plants. Recently matured leaves from the middle third of the plant were sampled, always facing the north-south direction, as recommended by Costa (2006).
The leaf samples were chemically analyzed using nitro-perchloric and sulfuric acid digestion. After digestion, the extracts were analyzed for total Ca and Mg contents by inductively coupled plasma-optical emission spectrometry (ICP-OES), K by flame photometry, and P by molecular spectrophotometry. Total N was obtained by sulfuric digestion and Kjeldahl distillation (Embrapa, 1997).
A database was created to separate the orchards into healthy and unhealthy ones. At sampling, each orchard was classified according to plant health, taking into consideration criteria of plant health status and crop and soil management.
In terms of plant health, the level of infestation by witches' broom (Crinipellis perniciosa) and fruit borer (Conotrachellus humeropictus) was evaluated, as these are the diseases that most affect yields in the region (Lopes & Silva, 1998). For crop and soil management, the intensity of application of practices considered appropriate was evaluated, i.e., pruning, presence of rotten fruit on the ground, and weeding (crop); and organic fertilization, soil cover, and planting along contour lines (soil). These properties (plant health state and intensity of crop and soil management practices) were scored 1 (poor), 2 (satisfactory) or 3 (good).
To establish norms, the database was divided into three groups: unhealthy orchards with low potential yield (OLY), unhealthy orchards with average potential yield (OAY), and healthy orchards with high potential yield (OHY). The criterion for defining these classes was the sum of the scores, where: 3 ≤ OLY ≤ 5; 6 ≤ OAY ≤ 7; and 8 ≤ OHY ≤ 9.
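The classification rule above is a straightforward score sum; a minimal sketch directly encoding the thresholds stated in the text is shown below.

```python
def classify_orchard(health: int, crop: int, soil: int) -> str:
    """Classify an orchard from its three 1-3 scores, as described in the text."""
    total = health + crop + soil          # each score in {1, 2, 3}
    if 3 <= total <= 5:
        return "OLY"   # unhealthy, low potential yield
    if 6 <= total <= 7:
        return "OAY"   # unhealthy, average potential yield
    return "OHY"       # healthy, high potential yield (8-9)

print(classify_orchard(health=3, crop=3, soil=2))  # -> "OHY"
```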
The DRIS norms were calculated for the orchards considered healthy, which comprised 14 monoculture and 34 agroforestry orchards. Although small, the size of the reference population was considered sufficient to generate a representative DRIS norm for healthy orchards (Mourão Filho et al., 2002), which is expected to represent a sample of the nutritional status of the orchards. Even for cereals, a reference population of around 30 fields has proved adequate to generate DRIS norms (Guindani et al., 2009).
To generate the DRIS norms, the means, standard deviations and numbers of observations were calculated for each of the direct and inverse ratios between two nutrients, as well as for the level of each evaluated nutrient in the healthy orchards (OHY), regardless of the cultivation system (Dias et al., 2010).
The DRIS Cupuaçu program (DRIS, 2009) was used to calculate the DRIS indices (IDris), the average nutritional balance index (NBIa) and the IDris interpretations based on the fertilization response potential. Using the DRIS Cupuaçu software (DRIS, 2009), the procedures proposed by Beaufils (1973), Jones (1981) and Elwali & Gascho (1984) were applied, including all direct and inverse bivariate relationships, where f(A/B) is the DRIS function for any two nutrients (A and B); A/B is the ratio between nutrients A and B in the sample; a/b is the ratio between nutrients A and B in the reference standard; and sa/b is the standard deviation of the A/B ratio in the reference standard.
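Because the printed expressions of the three formulas did not survive extraction, the sketch below implements their standard literature forms, under the assumption that this paper used the same conventions: Beaufils (1973) with two branches scaled by the norm's coefficient of variation, Jones (1981) as a standardized deviation, and Elwali & Gascho (1984) zeroing deviations within one standard deviation of the norm.

```python
def f_beaufils(ab: float, norm: float, sd: float, k: float = 1.0) -> float:
    """Beaufils (1973): two branches, scaled by the norm's coefficient of variation."""
    cv = 100.0 * sd / norm
    if ab >= norm:
        return 100.0 * (ab / norm - 1.0) * k / cv
    return 100.0 * (1.0 - norm / ab) * k / cv

def f_jones(ab: float, norm: float, sd: float, k: float = 1.0) -> float:
    """Jones (1981): standardized deviation of the sample ratio from the norm."""
    return k * (ab - norm) / sd

def f_elwali_gascho(ab: float, norm: float, sd: float) -> float:
    """Elwali & Gascho (1984): zero within one SD of the norm, else Beaufils."""
    if abs(ab - norm) <= sd:
        return 0.0
    return f_beaufils(ab, norm, sd)

# Example: a sample N/P ratio of 12.0 against a norm of 10.0 with SD 1.5
# (illustrative numbers, not the cupuaçu norms).
print(f_beaufils(12.0, 10.0, 1.5), f_jones(12.0, 10.0, 1.5), f_elwali_gascho(12.0, 10.0, 1.5))
```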
IDris was calculated for each nutrient from the arithmetic average of all direct and inverse functions involving the nutrient in question. The nutritional balance index (NBI) was determined as the sum of the moduli of the IDris values generated for the sample, and NBIa was calculated as the arithmetic average of the absolute IDris values generated for each sample. IDris was interpreted by the fertilization response potential (FRP) method and classified into five groups (Wadt, 2005). To evaluate the three different formulas for calculating DRIS indices, the number of times each nutrient was most limiting because of deficiency (positive FRP) or most limiting because of excess (negative FRP) was quantified.
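A minimal sketch of the index and interpretation steps follows; the `funcs` input (signed f() values per nutrient) and the FRP class labels (p, pz, z, nz, n) are assumptions based on the common application of the Wadt (2005) criterion, not copied from this paper.

```python
from statistics import mean

def dris_indices(funcs: dict[str, list[float]]) -> tuple[dict[str, float], float]:
    """IDris per nutrient = mean of its signed f() values; NBIa = mean of |IDris|."""
    idris = {n: mean(vals) for n, vals in funcs.items()}
    nbia = mean(abs(v) for v in idris.values())
    return idris, nbia

def frp_class(index: float, nbia: float, is_extreme: bool) -> str:
    """Five FRP classes: p/pz (deficiency), z (balanced), nz/n (excess)."""
    if abs(index) <= nbia:
        return "z"
    if index < 0:
        return "p" if is_extreme else "pz"  # extreme = lowest index of the sample
    return "n" if is_extreme else "nz"      # extreme = highest index of the sample

funcs = {"N": [0.5, -1.2, 0.3], "P": [-2.0, -1.5, -1.8], "K": [2.1, 1.9, 2.5]}
idris, nbia = dris_indices(funcs)
for nutrient, idx in idris.items():
    extreme = idx in (min(idris.values()), max(idris.values()))
    print(nutrient, round(idx, 2), frp_class(idx, nbia, extreme))
```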
Multivariate statistics (principal component analysis) were used, based on a biplot graph, to evaluate the spatial distribution of the DRIS indices obtained with the different DRIS methods (Lipcovich & Smith, 2002).
For each nutrient evaluated in each orchard, identified as deficient (FRP positive, or positive or zero), in equilibrium (FRP zero), or in excess (FRP negative, or negative or zero) according to the diagnoses made by each of the DRIS methods, the degree of agreement between the diagnoses was evaluated. If, for a given nutrient and orchard, the diagnosis (deficiency, equilibrium, or excess) by two different methods was the same, it was counted as agreement, and when different, as disagreement; the degree of agreement between the diagnoses was then calculated across all orchards.
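The agreement measure reduces to the fraction of orchards receiving identical labels under two methods, as sketched below with hypothetical diagnoses.

```python
def agreement(diag_a: list[str], diag_b: list[str]) -> float:
    """Percentage of orchards with identical diagnoses under two methods."""
    matches = sum(a == b for a, b in zip(diag_a, diag_b))
    return 100.0 * matches / len(diag_a)

# Hypothetical per-orchard labels for one nutrient under two formulas.
beaufils = ["deficient", "balanced", "excess", "balanced"]
elwali   = ["deficient", "deficient", "excess", "balanced"]
print(f"Agreement: {agreement(beaufils, elwali):.1f}%")  # -> 75.0%
```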
The number of cases in which the nutritional diagnosis by the Beaufils (1973) method indicated nutritional equilibrium while Elwali & Gascho (1984) indicated limitation by deficiency and Jones (1981) indicated limitation by excess was also quantified. Finally, the DRIS methods were correlated with each other based on the DRIS indices of each nutrient and the nutritional balance index (NBI), at 1% by the t test, to determine the degree of similarity among the methods resulting from the use of the different DRIS formulas.
Together with the leaves, the soil was sampled under the crown and between the rows in 65 of the orchards. In these samples, pH, exchangeable Ca and Mg (1 mol L⁻¹ KCl), potential acidity, and available P and K (Mehlich-1) were determined as proposed by Embrapa (1997). Since the analytical results indicated practically the same concentrations and content ranges of the chemical properties under the crown and between the rows, the averages of the two sampling positions were used. The soil chemical properties and leaf levels of macro- and micronutrients were correlated with the DRIS indices obtained by the three methods tested (Pearson, at 5%). Only the most relevant results are presented for discussion.
The t test (Student's correlation) and multivariate analysis (discriminant analysis) were performed using Assistat 7.5 beta and Graphic Biplot 1.0 software, respectively, and the other analyses using electronic spreadsheets. For the Chi-Square test, the expected frequency was assumed to be the mean frequency of cases of deficiency or excess for each nutrient, and this frequency was compared with the number of cases observed.
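A hedged sketch of the Chi-Square comparison just described, with illustrative counts and the mean frequency used as the expected value:

```python
from scipy.stats import chisquare

observed = [52, 55, 58]                   # deficiency cases per formula, hypothetical
expected = [sum(observed) / 3] * 3        # mean frequency as the expected value
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # p > 0.05 -> distributions do not differ
```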
RESULTS AND DISCUSSION
The soils of the region are highly acidic (pH 3.7-5.5; potential acidity 2.1-9.3 cmolc kg⁻¹), with medium to high Ca and Mg levels (1.1-9.7 cmolc kg⁻¹ and 0.1-1.6 cmolc kg⁻¹, respectively), K levels of 0.04-0.41 cmolc kg⁻¹, and available P of 1-18 mg kg⁻¹ (Table 1). High levels of exchangeable Ca along with high acidity is a common property of many soils of the Formação Solimões (Wadt, 2002; Couto, 2010), the geological unit from which the soils of the region were formed. The P levels, however, are normally low, and the average of 3 mg kg⁻¹ with a standard deviation of 2 mg kg⁻¹ could be explained by replenishment with organic fertilizers in many areas of the RECA project.
Independently of the DRIS method used for the nutritional diagnoses, in most orchards Ca was the nutrient most frequently indicated as deficient (positive FRP), while K was most frequently in excess (negative FRP). A relatively high frequency of orchards with P and Mg deficiency was also noted (Table 2). This result is not supported by the availability of the nutrients according to the soil fertility of the study region, which indicates that Ca in particular, but also Mg and P, were more available in the soil than K. Although our results refer to other soils and growing conditions than those in the study of Alfaia et al. (2004), it is not clear why Ca was most often indicated as deficient and K most often as in excess.
For K, the high rate of nutrient cycling in agroforestry systems (Côrrea et al., 2006), along with the low levels of transport and the relatively high soil fertility in basic cations (Couto, 2010; Table 1), may partially explain the results obtained in the present study.
The Ca nutritional status of cupuaçu trees could be explained by the low internal redistribution rate of this nutrient, which is not mobile among senescing or developing tree organs (Prado, 2008). This leads to a temporary deficiency at the beginning of the period of rapid growth, when there is not enough water in the soil to optimize mass flow and Ca transport from the roots to the growth points, as suggested to explain the shoot die-back and Ca deficiency indicated by DRIS in young eucalyptus trees (Wadt, 2004). However, more studies are needed to explain this phenomenon, and even to reduce the risk of inconsistent diagnoses by defining the best period to sample leaves, when no water deficit is present.
The correlations between the P, K, Ca and Mg levels in the soil and the respective leaf contents or DRIS indices of these nutrients by the formulas of Beaufils (1973), Jones (1981) and Elwali & Gascho (1984) were significant at 5% for Ca (Table 3). For P, only the soil level and the DRIS indices obtained by the formulas of Beaufils (1973) and Jones (1981) were correlated (Table 3), indicating that, at least for Ca and P, greater availability in the soil results in a better nutritional balance of the respective nutrient, even when the soil and leaf tissue levels are not correlated, as in the case of P (Table 3).
However, the nutritional status of the orchards was similar in terms of the frequency at which each nutrient was indicated as deficient or in excess, independently of the DRIS formula applied (Table 2), for all nutrients by the Chi-Square test at 5%. This result differs from that of Mourão Filho et al. (2002), who found the Jones (1981) formula more efficient than that of Beaufils (1973) or Elwali & Gascho (1984) in diagnosing the nutritional status of orange trees.
Multivariate analysis (represented by a biplot of the spatial distribution of the DRIS indices under the different DRIS methods; Lipcovich & Smith, 2002) indicated that the diagnoses for all nutrients except P by the formulas of Beaufils (1973) and Elwali & Gascho (1984) were similar (Figure 1). For P, the formulas of Jones (1981) and Elwali & Gascho (1984) were the most similar (Figure 1). The multivariate analysis indicated that, for most DRIS indices, the performance of the Jones (1981) formula was opposite to that of Elwali & Gascho (1984); this did not, however, affect the predictive capacity for the nutritional status (Table 2).
Recently, some studies have discussed the efficiency of DRIS functions using the correlation between NBI and yield, since an inverse correlation between NBI and yield would define the performance of the methods (Partelli et al., 2006; Silva et al., 2009). Nachtigall & Dechen (2007), evaluating the efficiency of DRIS methods in apple, concluded that the Elwali & Gascho (1984) formula was superior to those of Beaufils (1973) and Jones (1981) because the correlation between the nutritional status of the apple trees indicated by NBI and yield was better with the former. Based on the correlation between NBI and yield, Silva et al. (2005) reported the opposite result, in that the Jones (1981) formula performed better in evaluating the nutritional status of pastures fertilized with N and S than the formulas of Beaufils (1973) and Elwali & Gascho (1984).
According to Maia (1999) and Wadt et al. (2007), based solely on evaluation of the mathematical expressions of each formula, the formulas of Beaufils (1973) and Elwali & Gascho (1984) tend to overestimate nutritional deficiency compared to the Jones (1981) formula.
However, when the nutritional status was interpreted by the Fertilizer Response Potential (FRP) method (Wadt, 2005), there was no significant difference, for any of the evaluated nutrients, in the frequency distribution at which each nutrient was indicated as deficient or in excess by the different DRIS formulas tested (Figure 2).
Evaluating each diagnosis separately, the Elwali & Gascho (1984) formula indicated on average 3.4% more cases of deficiency than Beaufils (1973), while the Jones (1981) formula indicated 1.7% more cases of excess than Beaufils (1973) (Table 4). That is, six cases each of P, Ca and Mg deficiency, five cases of N deficiency, and three cases of K deficiency identified by the Elwali & Gascho (1984) formula were not identified by the Beaufils (1973) formula (Table 4). This disagrees with Wadt (1996) and Maia (1999), who evaluated the performance of the DRIS formulas and suggested that the Beaufils (1973) formula tends to diagnose deficiency in more cases than the formulas of Jones (1981) and Elwali & Gascho (1984). With the Elwali & Gascho (1984) formula, the NBIa value was systematically lower than with that of Beaufils (1973), since, according to Elwali & Gascho (1984), cases in which the difference between the bivariate ratio in the sample and its norm was below the standard deviation were zeroed, thus diminishing the average NBIa value (Figure 2). With the lower NBIa value, deficiencies were detected by Elwali & Gascho (1984) in orchards considered nutritionally balanced by the Beaufils (1973) formula (Table 4). The Jones (1981) formula led to similar results, with lower NBIa values than the other formulas (Figure 2), which increased the possibility of diagnosing nutritional imbalance (either deficiency or excess), as explained above. However, since the other formulas overestimated deficiencies, the Jones (1981) formula indicated more cases of excess than the other methods, and performed like the Elwali & Gascho (1984) formula in cases of deficiency.
Based on this performance of the different DRIS formulas under the Fertilizer Response Potential criterion, it is suggested that the Beaufils (1973) formula be preferred when it is desirable to diminish false diagnoses of deficiency; that the Elwali & Gascho (1984) formula be used to diminish the number of false excess diagnoses (increasing the number of deficiency diagnoses); and that the Jones (1981) formula be used to diminish the number of false balance diagnoses (increasing the number of cases of deficiency and excess).
Although the Beaufils (1973) formula overestimates deficiencies (Maia, 1999), this effect can be eliminated by the criterion used to interpret the DRIS indices when based on NBIa. It should be emphasized, however, that the differences between the methods are minimal. The correlations of the DRIS indices obtained by the different formulas generally resulted in correlation coefficients above 83%, and all correlations were significant at 1% by Pearson's correlation analysis. The correlation coefficients between the DRIS indices of the nutrients and NBIa obtained by the formulas of Beaufils (1973) and Elwali & Gascho (1984) were higher than 95% (Table 5). The high correlation between the indices based on the different DRIS formulas was reflected in the high degree of agreement among the diagnoses for all nutrients (Table 6). This high level of agreement among the diagnoses calls for determining, in fertilizer trials, the consistency between the diagnoses and the plant response to the addition or maintenance of nutrient levels (Wadt & Lemos, 2010; Wadt & Silva, 2010).
For example, Ca, P and Mg were, in decreasing order, the nutrients most frequently identified as limiting in the orchards, while K was the nutrient with the highest number of excess cases (Table 2). The question arises whether these diagnoses are in fact consistent with the plant response to corrective fertilization and, if necessary, which DRIS formula could improve the rate of diagnostic success, for example by diminishing the cases of false diagnoses of balance or imbalance (deficiency or excess).
(FUNTAC) for financial support of the research project and the producers and technicians of the reforestation project RECA (Reflorestamento Econômico Consorciado Adensado) of Nova Califórnia, Porto Velho, RO for transportation and help with the field work.
Table 1. Mean values of the chemical properties of the soils of the 65 studied orchards.
Table 2. Frequency of cupuaçu orchards with positive (deficiency) and negative (excess) fertilizer response potential (1), diagnosed by the DRIS formulas developed by Beaufils (1973), Jones (1981) and Elwali & Gascho (1984).
*: significant at 5 % by the two-tailed Pearson correlation test.
Table 4. Number of cases in which the nutritional status was diagnosed as nutritional equilibrium by the Beaufils (1973) method whereas the Elwali & Gascho (1984) method indicated limitation by deficiency and the Jones (1981) method limitation by excess, in 153 cupuaçu orchards in the southeastern Amazon.
Table 5. Correlation coefficients of the DRIS indices for macronutrients obtained by the DRIS methods developed by Beaufils (1973), Jones (1981) and Elwali & Gascho (1984), based on the evaluation of the nutritional status of 153 cupuaçu orchards in the southeastern Amazon.
**: Significant at 1 % by the T Test (Student). | 2018-12-19T09:48:45.446Z | 2011-12-01T00:00:00.000 | {
"year": 2011,
"sha1": "a39781040cbecff08fef44e2052de878487006d9",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbcs/a/qXxNPKMp7q4yFGnwRHwPFVz/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a39781040cbecff08fef44e2052de878487006d9",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
265042261 | pes2o/s2orc | v3-fos-license | A Plant-Centered Diet is Inversely Associated with Radiographic Emphysema: Findings from the CARDIA Lung Study
INTRODUCTION
Chronic obstructive pulmonary disease (COPD) affects over 15 million Americans and is a leading cause of death in the United States 1. Emphysema, a key manifestation of chronic lung disease, is characterized by irreversible structural changes to the lung and carries important implications for long-term lung health, even in patients without an established COPD diagnosis.
There is increasing recognition that emphysema is an important independent clinical predictor of worse respiratory outcomes, including respiratory symptoms and quality of life, even in the absence of airflow limitation on spirometry 2 . Therefore, emphysema may represent a clinically significant intermediate phenotype of impaired lung health 3 and a critical window for early interventions to intercept future respiratory complications. However, modifiable factors to intercept the development of emphysema have not been identified.
While smoking remains the primary environmental risk factor for the development of emphysema, smoking cessation interventions have resulted in long-term quit success rates of only up to 25% of patients 4,5 . These data highlight that these strategies alone are insufficient to optimize respiratory health among smokers, and that a simultaneous focus on other treatable traits is greatly needed to mitigate the adverse health effects of tobacco use across the life course.
Notably, individual adherence to healthy dietary patterns may serve as a potential strategy for preserving lung health in these exposed populations.
Emerging data provide support for the role of diet in lung health among individuals with competing risk factors such as lifetime tobacco exposure. In one study, smokers without respiratory disease with greater adherence to a Western diet pattern (increased red and cured meat and sweets; low fruit, vegetable, legume, and fish consumption) were at an increased risk of impaired lung function 6 . In addition, we have shown that, as early as adolescence, adherence to a higher quality diet is associated with significantly decreased environmental tobacco-associated respiratory symptoms 7 . As respiratory symptoms, including cough/phlegm and wheeze, among young adults have been associated with greater odds of future radiographic emphysema 8 , these data emphasize a potential role for diet as an early modifiable risk factor for the lifetime risk of chronic lung disease.
A recent analysis of data from a large, multi-center longitudinal cohort study, Coronary Artery Risk Development in Young Adults (CARDIA), demonstrated that a nutritionally rich plant-centered diet was protective against cardiovascular-related morbidity and mortality 9 . Diets higher in fruits and vegetables have also been associated with improved lung outcomes, including reduced wheeze in children 10 , better lung function in adults 11 , and a lower prevalence of asthma in both adults and children 12 . However, few studies have evaluated the effect of dietary patterns on emphysema. In this study, we examined the association between adherence to a nutritionally rich plant-based diet and emphysema in early to mid-adulthood using the A Priori Diet Quality Score (APDQS), a novel diet quality assessment tool that accurately assesses the intake of the most nutritious plant-centered foods without excluding animal products. The objective of this analysis was to determine the extent to which adherence to a nutritionally rich plant-centered diet, via the APDQS, in ever-smoker young adults is associated with future development of radiographic emphysema.
Study population and design
Population: This study is a secondary analysis of the CARDIA study, a multicenter, prospective cohort of 5,115 adult Black and White men and women from four geographically diverse US cities 13 . Participants were aged 18 to 30 years at baseline and were followed for 30 years, with 71% retention at year 30. The study protocol has been published elsewhere 13 . For the present study, we included ever-smoker participants (defined as self-reported smoking at any time point prior to year 20) due to the low prevalence of emphysema in never smokers within this population. For the outcome of emphysema, we restricted the population to those who had computed tomography (CT) measurements available at year 25. As additional analyses were conducted as a part of this investigation, those without spirometry (missing for Years 0, 2, or 5, in combination with missing for Years 20 or 30) were also excluded (n=42). A flow diagram of participant inclusion is included in Figure 1. All participants provided written informed consent at all examinations, and research protocols were approved by institutional review boards at the CARDIA coordinating center and each field center. The University of Alabama at Birmingham Institutional Review Board (IRB-981106002) reviewed and approved the CARDIA study prior to data collection.
Assessment of Plant-based Diet Quality Score: Diet was assessed at baseline, year 7, and year 20 via the validated CARDIA diet history 14 . The APDQS utilizes CARDIA's comprehensive dietary data to generate an evenly weighted score incorporating 46 food groups and was calculated as previously described 9,15 . Unlike other measures of diet quality, the APDQS emphasizes greater consumption of the plant-based foods with the highest nutritional value over nutritionally poor plant-based foods, while also accounting for animal product quality. Therefore, plant-based foods such as fruits, avocado, green and yellow vegetables, and whole grains contribute to a higher score, while higher intakes of plant-based foods like fried potatoes, grain desserts, margarine, and fruit juice result in lower scores 9 . Scores range from 0 to 132, with higher numbers indicating greater adherence to a high-quality diet.
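As a rough illustration of how such a score behaves, the sketch below builds an evenly weighted score from quintile-rated food groups. The group names, ratings, and scoring rule are invented for illustration only; the published APDQS uses 46 specific CARDIA food groups and its own rating scheme.

```python
# Toy diet-quality score in the spirit of the APDQS. Food groups rated
# beneficial gain points with higher intake; adverse groups lose points.
BENEFICIAL = {"fruit", "avocado", "green_vegetables", "whole_grains"}
ADVERSE = {"fried_potatoes", "grain_desserts", "margarine", "fruit_juice"}

def group_points(group, intake_quintile):
    """Score one food group 0-4 from its within-cohort intake quintile (1-5)."""
    if group in BENEFICIAL:
        return intake_quintile - 1      # more intake -> more points
    if group in ADVERSE:
        return 5 - intake_quintile      # more intake -> fewer points
    return 2                            # neutral groups get mid-range points

def diet_score(intake_quintiles):
    """Sum evenly weighted group points; intake_quintiles maps group -> 1..5."""
    return sum(group_points(g, q) for g, q in intake_quintiles.items())

# Note: 46 groups scored 0-4 would span 0-184; the published 0-132 range
# implies a different per-group scale, so treat this purely as a toy.
print(diet_score({"fruit": 5, "whole_grains": 4, "fried_potatoes": 1}))  # -> 11
```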
Assessment of outcome variable:
The primary outcome was the presence of radiographic emphysema. Radiographic emphysema was detected by visual assessment of year 25 CT scans utilizing methods as previously described 8,16 . Briefly, all CT scans were reviewed by an initial reader (Reader 1) and classified as having paraseptal emphysema, centrilobular emphysema, both, or neither. All CT scans classified as having emphysema were reviewed by a second reader (Reader 2), in addition to a random sample of 10% of the remaining CT scans. Disagreements between Readers 1 and 2 were adjudicated by a third reader (Reader 3).
Other Covariates: Baseline demographic information and clinical data were obtained from the CARDIA database and included sex, age (years), maximal educational attainment (highest grade completed), race (Black; White), smoking and pack-year history, anthropometrics, and energy intake (kcal). Smoking status was assessed every year. Previous studies in this cohort have demonstrated misclassification between self-reported cigarette smoking and Year 0 serum cotinine measurements to be low 17 . Spirometry, performed using standard procedures per American Thoracic Society guidelines 18 to assess lung function, is reported at baseline. Body mass index (BMI) was calculated as weight/height squared (kg/m^2) derived from measurements by trained technicians. Cardiorespiratory fitness (CRF; treadmill time, seconds), history of asthma, and field center were also recorded.
Statistical Analyses: For the purposes of this analysis, the APDQS assessed at years 0, 7, and 20, for which updated information was available, was cumulatively averaged over follow-up and divided into quintiles. Baseline descriptive statistics were reported according to quintiles of the APDQS, and statistical significance was tested using ANOVA for continuous variables and chi-square tests for categorical variables.
Multivariable logistic regression models for the binary outcome were used to evaluate associations between the APDQS quintile of the averaged dietary data and year 25 radiographic emphysema.
This averaging approach, previously used in CARDIA cohorts 9 , allows for minimization of random within-person error, better reflects the cumulative, long-term dietary effect, and preserves sample size. The primary model was adjusted for age, sex, race (Black and White), field center (Birmingham, Chicago, Minneapolis, and Oakland), maximal educational attainment, baseline height, averaged total energy intake (years 0, 7, 20), averaged BMI (years 0, 2, 5, 7, 10, 15, and 20), and lifetime pack-years of smoking (years 0, 2, 5, 7, 10, 15, and 20). Maximal educational attainment was used as a surrogate for socioeconomic status, to be consistent with previous CARDIA analyses. As a sensitivity analysis, the model was further adjusted for CRF and asthma as potential mediating factors for emphysema. Potential effect modification by pack-years of smoking was evaluated by testing the statistical significance of a multiplicative interaction term of the APDQS as a continuous variable with <10, 10-20, and >20 pack-years of smoking. All analyses were conducted using SAS version 9.4 (SAS Institute Inc., Cary, NC).
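The published analysis was run in SAS; a hedged Python sketch of the same modeling pattern (cumulative averaging, quintile cut, adjusted logistic regression) might look like the following. The file name and column names are hypothetical stand-ins for the CARDIA variables.

```python
# Sketch of the analysis pattern described above: average APDQS over exams,
# cut into quintiles, and fit an adjusted logistic model for emphysema.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cardia_eversmokers.csv")  # hypothetical extract

# Cumulative average of the three diet assessments (years 0, 7, 20).
df["apdqs_avg"] = df[["apdqs_y0", "apdqs_y7", "apdqs_y20"]].mean(axis=1)
df["apdqs_q"] = pd.qcut(df["apdqs_avg"], 5, labels=[1, 2, 3, 4, 5])

model = smf.logit(
    "emphysema_y25 ~ C(apdqs_q) + age + C(sex) + C(race) + C(center)"
    " + education + height + energy_avg + bmi_avg + packyears",
    data=df,
).fit()
print(model.summary())
```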
Study population
Of the original CARDIA cohort, 1,351 ever-smokers (ever reported current or former smoking at any exam between year 0 and year 20) were available for analysis. The baseline mean (SD) APDQS in this analytic population was 64.6 ± 12.7. Among ever smokers, 999 (73.9%) had dietary information at all three measures, 310 (22.9%) had two measurements, and 42 (3.1%) had only one measurement. Dietary intake tracked consistently over time. For example, Year 0 APDQS had correlations of about 0.62 and 0.58 with Year 7 and Year 20 APDQS, respectively.
The correlation between Year 7 and Year 20 was 0.60. Table 1 shows baseline (Year 0) characteristics of study participants according to quintiles of the baseline APDQS. Smoking history at enrollment (never, former, current) was similar across APDQS quintiles. Those with a higher APDQS were more likely to be older, female, and self-identified White, to have obtained a higher educational level, and to have higher activity levels and CRF. Participants with a higher APDQS also had lower BMI, lower total energy intake, fewer mean pack-years of smoking over 20 years, and higher forced expiratory volume in one second (FEV1) and forced vital capacity (FVC).
APDQS and radiographic emphysema
Of the 1,351 ever-smokers who completed year 25 CT scans, emphysema was observed in 13.0% (n=175). The mean age of those with emphysema was 50.4 ± 3.5 years. The prevalence of emphysema was 4.5% in the highest APDQS quintile, compared with 25.4% in the lowest quintile (Figure 2).
For each one-SD higher APDQS, there was a 34% lower odds of emphysema (OR 0.66, 95% CI: 0.49-0.90) (Table 2). Further adjustment for CRF or current asthma status, both individually and together, did not considerably alter the findings (data not shown).
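As a quick arithmetic check of how a per-SD odds ratio relates to a per-point coefficient, using the baseline SD of 12.7 reported earlier, consider the sketch below. The per-point slope is a hypothetical value chosen to reproduce the published estimate.

```python
# Arithmetic behind the per-SD odds ratio quoted above. A logistic model fit
# on APDQS in raw points returns a per-point log-odds slope beta; scaling by
# the score's SD gives the per-SD odds ratio.
import math

beta_per_point = -0.0327   # hypothetical per-point slope from such a model
sd_apdqs = 12.7            # baseline SD reported for this population

or_per_sd = math.exp(beta_per_point * sd_apdqs)
print(f"OR per SD = {or_per_sd:.2f} -> {(1 - or_per_sd):.0%} lower odds")
# -> OR per SD = 0.66 -> 34% lower odds, matching the reported estimate
```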
There was no significant interaction between APDQS and category of pack-year smoking history (<10 pack-years, 10-20 pack-years, and >20 pack-years) in the development of incident emphysema (p for interaction=0.20), with no significant differences between the diet quintiles within each stratum (Table 3).
DISCUSSION
In this longitudinal study of young adult ever-smokers, we found that long-term consumption of a nutritionally rich plant-centered diet was associated with a lower risk of future radiographic emphysema, independent of smoking history. Participants with the highest adherence to a plant-centered diet had a 56% lower risk of emphysema compared with those with the lowest adherence. These associations were evident even after adjustment for established demographic, co-morbid, and lifestyle factors that contribute to lung health. Notably, our study captured a population undergoing a critical transition from young to middle adulthood, for whom adherence to a plant-centered dietary pattern may represent an important early modifiable factor for the lifetime risk of chronic lung disease.
Emphysema is characterized by irreversible structural change to the lung, emphasizing the importance of preventing its development 19 . The presence of emphysema among smokers, even in the absence of obstruction on spirometry, is an important independent clinical predictor of worse respiratory outcomes 2 . However, given the dependence on CT imaging to establish the presence of emphysema, very few prior studies have examined modifiable risk factors impacting the development and progression of this disease among a high-risk smoking population. Prior studies have relied primarily on self-report and have usually combined emphysema and chronic bronchitis to create a composite outcome of COPD. In contrast, our study's unique availability of CTs in middle age allowed for the ability to detect associations between diet and radiographically demonstrated structural changes in the lung.
have not been evaluated. Therefore, our findings fill an important gap in the literature and support diet patterns as a strategy for preventing emphysema development.
Although exact mechanisms for the association between the consumption of a nutritionally rich plant-centered diet and emphysema are unknown, recent animal studies have shown diets high in fiber (a key characteristic of plant-centered diets) attenuate pathological changes associated with emphysema progression and inflammatory response in cigarette-exposed mice 23 . In particular, fiber supplementation modulated the diversity of the gut microbiome and increased the production of anti-inflammatory metabolites including short-chain fatty acids (SCFA), which are known to significantly influence systemic inflammation 24 through activation of G-protein receptors, inhibiting histone deacetylase, and serving as energy substrates for immune-regulating cells 25 . In another study of mice exposed to cigarette smoke, an intervention of fecal microbiota transplantation and a high-fiber diet resulted in protective effects on the lung. The high-fiber diet significantly decreased macrophages and lymphocytes in bronchoalveolar lavage fluid (BAL), and interleukin-6 (IL-6) and interferon-gamma (IFN-γ) were decreased in BAL and serum. Both the fecal microbiota transplant and the high-fiber diet attenuated the development of emphysema and protected against alveolar destruction and cellular apoptosis 26 . Similarly, human studies have demonstrated that a high-fiber diet is associated with reduced blood-based inflammatory biomarkers 27 ; however, a greater understanding of the specific mechanisms by which consumption of a plant-centered diet may protect against emphysema is needed.
We found it notable that 13% of all participants already had emphysema noted on CT at a relatively young age (range 42-56 years old), concurring with previous findings of respiratory impairments in smokers without spirometric COPD 28 . Further research on the specific critical windows, during which dietary exposures have the greatest impact on lung health, is essential to develop public health dietary recommendations for children and young adults with the goal of preventing future adverse respiratory outcomes. Since structural outcomes of emphysema and physiologically apparent outcomes (such as lung function decline or airflow obstruction) are often clinically discordant and may represent divergent pathways of pathology, this will be an important area of future investigation.
This study has several key strengths. To our knowledge, this is the first prospective study to assess the relationship between a healthy diet pattern and radiographically demonstrated emphysema. In addition, we had a prolonged follow-up period with high retention of participants, which provides insight into treatable traits for smoking-related respiratory outcomes during the critical transition of young to middle adulthood, a stage where smoking-related respiratory diseases typically begin to manifest. Importantly, the APDQS provides multiple real-world achievable pathways to healthy eating. Another defining feature included the rigorous assessment of smoking status, repeated annually, with previous evaluations demonstrating a strong relationship between baseline smoking reports and cotinine levels 17,29 . Finally, the utilization of CT imaging provided objective outcomes and avoided potential misclassification associated with a self-reported diagnosis of emphysema.
A few limitations are worth noting. The observational study design hinders arriving at any causal conclusions; replication in an independent cohort would strengthen causal inferences. Diet was self-reported and is subject to recall bias. While we used a subset of the CARDIA cohort and our baseline APDQS scores were similar to previous examinations of the larger CARDIA cohort 9 , there has been limited use of the APDQS outside the CARDIA cohort 30,31 , which is an area for future examination. In addition, despite adjustment for multiple covariates (including smoking), residual confounding cannot be excluded. While adjusted results remained significant, a wide confidence interval indicates a lack of precision in our findings, warranting future prospective studies with a larger sample size to ensure the reliability, strength of association, and generalizability of our findings. While the contribution of smoking, the strongest known risk factor for emphysema, was accounted for in both the adjusted model and by testing for interaction, other key factors, such as air pollution and other social determinants of health, should be accounted for in future studies. Furthermore, the dose-response relationship between diet and decreased incidence of emphysema was maintained, suggesting a true relationship.
Additionally, the CARDIA cohort is composed of Black and White participants; thus, the findings may not generalize to other racial and ethnic groups.
Table 1. Baseline characteristics of participants according to quintiles of the Y0 APDQS among ever smokers, N=1,351
APDQS, A Priori Diet Quality Score; BMI, body mass index; IQR, interquartile range; FEV1, forced expiratory volume in 1 second; FVC, forced vital capacity; SD, standard deviation.
Table 1. Baseline characteristics for participants excluded and included.
a Evaluated with chi-square tests for categorical variables and ANOVA for continuous variables. b Cumulative data through Y30. c Exercise units, physical activity score derived from the CARDIA physical activity history.
"year": 2023,
"sha1": "7c5c3e61dd813bba61ede15d76728f6dcae870da",
"oa_license": null,
"oa_url": "https://doi.org/10.15326/jcopdf.2023.0437",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c811b3a95c7759a117e40acb7d9297166f6f2148",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The simulation of realistic BiSON-like helioseismic data
When simulating full-disc helioseismic data, instrumental noise has traditionally been treated as time-independent. However, in reality, instrumental noise will often vary to some degree over time due to line-of-sight velocity variations and possibly degrading hardware. Here we present a new technique for simulating Birmingham Solar Oscillations Network (BiSON) helioseismic data with a more realistic analogue for instrumental noise. This is achieved by simulating the potassium solar Fraunhofer line as observed by the BiSON instruments. Intensity measurements in the red and blue wings of the line can then be simulated and appropriate time-dependent instrumental noise can be added. The simulated time series can then be formed in the same way as with real data. Here we present the simulation method and the first generation of a BiSON-like instrumental noise time series.
Introduction
Simulating a realistic solar oscillations signal is an important task within helioseismology. The use of artificial data, whose statistical characteristics and input parameters are known, allows for thorough testing of, and may reveal biases in, different analysis techniques. Most simulations to date have concentrated on accurately replicating the solar oscillation and background signals, with less thought put into simulating the instrumental noise sources associated with the collection of the data. Traditionally, instrumental noise has simply been treated as a constant time-independent source. However, in reality instrumental noise varies over time due to line-of-sight velocity variations - at least for the resonant scattering spectrometers (RSS) commonly used to make full-disc measurements - and hardware changes. The current work seeks to introduce realistic simulations of instrumental noise into standard solar oscillation simulations.
Method
In order to accurately simulate instrumental noise, we must first understand how the oscillations are observed. At each of the six BiSON stations an RSS is used to measure the Doppler shift of the 770-nm solar absorption line. Each RSS contains a cell of potassium atoms held at about 100 °C. Detectors are placed at right angles to the cell so that only resonantly scattered photons should be counted. The absorption linewidth of the vapour cell is much smaller than the width of the Fraunhofer line because the temperature is much lower and there is no rotational broadening. Therefore, only light from a narrow band of the solar absorption line is detected by the RSS. By applying a magnetic field and making use of the Zeeman effect, the absorption line of the potassium atoms in the lab is split. Hence, by switching the state of circular polarisation of the input solar light, it is possible to measure the light intensity in one wing at a time.
As the solar absorption line undergoes a Doppler shift due to the orbital motion and spin of the Earth and due to solar oscillations, so the intensity measurements in the wings change. From these measurements a ratio, R, is formed to give a near-linear proxy for the velocity shift of the line: R = (I_b - I_r) / (I_b + I_r), where I_b and I_r are the intensities in the blue and red wings of the line respectively. Background offset and instrumental noise will affect the intensity measurements used to form R, so the I's can be expanded into I_res, which is the desired contribution from resonantly scattered light, and I_non, I_elec and i, which are background sources due to non-resonantly scattered light, electronic offsets and noise respectively. In general these are all functions of time.
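A minimal sketch of this measurement model, using the normalized ratio written above, is given below; the amplitudes of the resonant and background terms are made-up numbers, not BiSON calibration values.

```python
# Minimal sketch of forming the BiSON-style ratio from wing intensities,
# including the background terms described above.
import numpy as np

def wing_intensity(i_res, i_non=0.0, i_elec=0.0, noise_sigma=0.0, rng=None):
    """One wing measurement: resonant signal plus backgrounds plus noise."""
    rng = rng or np.random.default_rng()
    return i_res + i_non + i_elec + rng.normal(0.0, noise_sigma)

rng = np.random.default_rng(1)
i_b = wing_intensity(1.00, i_non=0.05, i_elec=0.01, noise_sigma=0.005, rng=rng)
i_r = wing_intensity(0.90, i_non=0.05, i_elec=0.01, noise_sigma=0.005, rng=rng)

ratio = (i_b - i_r) / (i_b + i_r)   # near-linear proxy for line-of-sight velocity
print(f"R = {ratio:.4f}")
```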
To obtain the velocity measurements of the solar oscillations, a third-order polynomial function of the station-Sun line-of-sight velocity is fitted to R. The oscillations signal is then recovered by subtracting R from the polynomial function and calibrated using the fitted gradient of R versus station velocity.
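A sketch of this calibration step with synthetic numbers follows; the polynomial coefficients and noise level are arbitrary placeholders.

```python
# Fit a cubic in station velocity to R, take the fit-minus-R residuals as the
# oscillation signal, and scale by the fitted linear gradient.
import numpy as np

rng = np.random.default_rng(2)
v_station = np.linspace(-500.0, 500.0, 1000)     # line-of-sight station velocity, m/s
R = 1e-3 * v_station + 1e-10 * v_station**3 + rng.normal(0.0, 1e-5, v_station.size)

coeffs = np.polyfit(v_station, R, deg=3)          # cubic fit; coeffs[2] is the linear term
residuals = np.polyval(coeffs, v_station) - R     # oscillation signal, in ratio units
velocity = residuals / coeffs[2]                  # calibrate with the fitted gradient, m/s
print(velocity.std())
```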
The first step in the simulation process was to estimate I_res by fitting a Gaussian profile to an observed line obtained using Doppler velocity observations of the centre of the solar disc. The observations were taken by the Themis solar telescope located at Izaña, Tenerife [private communication with Rosaria Simonello]. The effects of solar rotation, limb darkening and Doppler imaging were all taken into account in order to generate a simulated line similar to that seen by the RSSs at each of the BiSON stations (see Broomhall et al. 2007). The use of this more realistic line shape is an enhancement on earlier simulation work reported by Chaplin et al. (2005).
The "operating point" on the simulated line is evaluated from the Doppler shift due to the changing line of sight velocity of the Sun to each station (gravitational red shifts and convective blue shift are included). Artificial velocity oscillations can then be added to the velocity Doppler shift . Intensity "measurements" are made for I res b and I res r in the same way as with real data. At this point estimates of the various noise sources can easily be included. Finally, the ratio can be formed and analysed in the same way as with real data. A comparison of the daily ratio curve generated via the simulator compared with that from real data is shown in Fig. 1.
Preliminary Results
Different noise sources can be categorised by whether the noise they generate is correlated or uncorrelated in the two wings of the Fraunhofer line. Noise that varies faster than the switching between the intensity measurements in the blue and red wings will likely be uncorrelated, while more slowly fluctuating noise will be correlated. Using the simulator, the effects of these different types of noise can be tracked. The left panel of Fig. 2 shows the power spectral density of three different types of noise over the course of a year. The three cases represent uncorrelated noise with a constant amplitude, correlated noise with the same constant amplitude, and shot noise (which is equivalent to uncorrelated noise with an amplitude proportional to I). The resulting noise power clearly varies throughout the year, with the correlated noise showing the largest variation and the shot noise the smallest. Although its variation is largest, the correlated noise case generates somewhat smaller absolute noise levels than the uncorrelated case.
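A compact way to generate the three cases is sketched below; the amplitudes are arbitrary, and, following the description above, the shot-like case scales the uncorrelated amplitude with the measured intensity.

```python
# Three noise flavours for the two wing intensities. "Correlated" noise is
# one shared draw entering both wings; "uncorrelated" uses independent draws;
# the shot-like case scales the amplitude with intensity I, as described above.
import numpy as np

rng = np.random.default_rng(3)
n = 86_400                              # e.g., one day of 1-s samples
i_b, i_r = np.full(n, 1.00), np.full(n, 0.95)
sigma = 1e-3

uncorr = (i_b + rng.normal(0, sigma, n), i_r + rng.normal(0, sigma, n))
shared = rng.normal(0, sigma, n)        # identical draw enters both wings
corr = (i_b + shared, i_r + shared)
shot = (i_b + rng.normal(0, sigma, n) * i_b, i_r + rng.normal(0, sigma, n) * i_r)

# Compare the scatter each case imprints on the ratio R = (I_b-I_r)/(I_b+I_r);
# the shared (correlated) component partially cancels in the ratio.
for name, (b, r) in {"uncorrelated": uncorr, "correlated": corr, "shot-like": shot}.items():
    print(name, ((b - r) / (b + r)).std())
```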
The simulated noise can be compared with noise in real BiSON data. The right-hand plots in Fig. 2 show the mean power spectral density of BiSON observations for frequencies from 8.0 to 12.5 mHz (well above the region of the solar oscillations), over the course of a year. The Carnarvon plot shows relatively high noise and a large variation similar to the correlated case. Izaña also has a fairly high noise level, but the variation over the year is fairly small, indicating that the most prevalent noise source may be shot noise. Sutherland has a much lower noise level but, unlike the simulations, its variation over the year shows two maxima. A possible explanation for this is that the instrumental noise is actually following a similar trend to the other plots, but the high-frequency "tail" of the solar background may be increasing the observed noise level in the second half of the year. Instrumental noise also varies on a daily basis, again due to the change in operating point on the solar line. This is very difficult to observe with real data due to the presence of the solar oscillations and background. Hence, the simulator is very useful for investigating the effect of this variation. Fig. 3 shows the simulated residuals of four different BiSON stations over July 1st and part of July 2nd (the solar oscillations and background have not been added). This trace represents the first attempt to generate such a realistic data set and makes an interesting comparison with previous simulations which assume constant noise. The type and level of the noise sources have been matched as well as possible to real data.
Discussion
A new BiSON simulator has been designed to more accurately mimic the instrumental noise generated in the data. We will now use the simulator to help us better understand the effect of instrumental noise on the precise mode parameters extracted from power spectra and on the detection of low-power modes. In addition, our analysis here shows that a time series produced from multiple BiSON stations will have a varying noise profile, suggesting that better statistics may be achieved by applying a weight to each of the time-series points. This may be accomplished using a sine-wave fitting technique; see New et al. from the proceedings of this meeting.
"year": 2009,
"sha1": "7c729fe321ccb9f892d54af79bb69785403dfa93",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7c729fe321ccb9f892d54af79bb69785403dfa93",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
} |
The PI3K p110α isoform regulates endothelial adherens junctions via Pyk2 and Rac1
Endothelial cell-cell junctions control efflux of small molecules and leukocyte transendothelial migration (TEM) between blood and tissues. Inhibitors of phosphoinositide 3-kinases (PI3Ks) increase endothelial barrier function, but the roles of different PI3K isoforms have not been addressed. In this study, we determine the contribution of each of the four class I PI3K isoforms (p110α, -β, -δ, and -γ) to endothelial permeability and leukocyte TEM. We find that depletion of p110α but not other p110 isoforms decreases TNF-induced endothelial permeability, Tyr phosphorylation of the adherens junction protein vascular endothelial cadherin (VE-cadherin), and leukocyte TEM. p110α selectively mediates activation of the Tyr kinase Pyk2 and GTPase Rac1 to regulate barrier function. Additionally, p110α mediates the association of VE-cadherin with Pyk2, the Rac guanine nucleotide exchange factor Tiam-1, and the p85 regulatory subunit of PI3K. We propose that p110α regulates endothelial barrier function by inducing the formation of a VE-cadherin-associated protein complex that coordinates changes to adherens junctions with the actin cytoskeleton.
Introduction
Endothelial cells lining blood vessels provide a barrier between the blood and the tissues. Movement of solutes, nutrients, cytokines, and leukocytes across endothelial cells can occur through both paracellular and transcellular pathways, in which molecules and cells pass either between or through cells, respectively (Engelhardt and Wolburg, 2004; Millán and Ridley, 2005; Nourshargh and Marelli-Berg, 2005).
Endothelial cell-cell contacts comprise tight and adherens junctions similar to those found between adjacent epithelial cells but also contain some proteins unique to endothelial cells, including vascular endothelial cadherin (VE-cadherin), PECAM-1, and ICAM-2 (Dejana et al., 2008). The transmembrane VE-cadherin is a key regulator of endothelial barrier function. VE-cadherin-null mice are embryonic lethal because of defects in vascular development (Carmeliet et al., 1999), and VE-cadherin-blocking antibodies cause a dramatic increase in endothelial permeability in adult mice (Corada et al., 1999). In vitro, inhibition of VE-cadherin increases endothelial permeability and enhances neutrophil transendothelial migration (TEM; Hordijk et al., 1999). VE-cadherin dimers link adjacent cells via homophilic interactions between their extracellular domains, while associating via their cytoplasmic domain with a macromolecular complex that comprises scaffolding and adaptor proteins such as β- and p120-catenin and plakoglobin (Gumbiner, 2005; Dejana et al., 2008).
Many different inflammatory agents induce dynamic changes to endothelial junctions to increase movement of solutes and leukocytes. Proinflammatory cytokines such as TNF and IL-1 induce a gradual increase in vascular permeability, which is sustained for many hours after stimulation. In contrast, other agents such as histamine or thrombin stimulate acute but short-lived changes in permeability. TNF and other inflammatory mediators induce Tyr phosphorylation of VE-cadherin, β-catenin, and/or p120-catenin (Esser et al., 1998; Shasby et al., 2002; Hudry-Clergeon et al., 2005; Angelini et al., 2006). Tyr phosphorylated VE-cadherin acts as a signaling hub to regulate endothelial barrier function by recruiting multiple signaling molecules such as Src, Pyk2, and PAK (p21-activated kinase).
Control endothelial cells had a round cobblestone-like morphology with bundles of actin filaments predominantly around the edges of the cells (Fig. 1 B). VE-cadherin and PECAM-1 were localized linearly along cell-cell borders (continuous junctions; Fig. 1 B, arrow) and with a broader distribution where adjacent cells overlapped (overlapping junctions; Fig. 1 B, arrowheads); in these regions, VE-cadherin often had a unique reticular distribution (Fig. 1 B, boxed region in control; and Fig. S2 A; Lampugnani et al., 1995; Noria et al., 1999). As a measure of overlapping junctions, we used VE-cadherin staining to determine the junctional area per cell (junctional index; Fig. S2 B). Depletion of p110α but not p110β, -δ, or -γ increased the level of overlapping junctions (Fig. 1, B and C). This was not caused by a change in the shape of the cells because neither the spread area nor the circularity (circularity = 1 for a circle) was altered (Fig. 1, D and E). We have previously shown that TNF induces a progressive disruption of endothelial cell junctions and an increase in endothelial permeability and stress fibers over 8-24 h after stimulation (McKenzie and Ridley, 2007). TNF is also well known to induce endothelial elongation (Stolpen et al., 1986). Therefore, we investigated whether PI3Ks were activated by TNF and whether they affected TNF-induced responses. TNF transiently increased PI3K activity, as measured by Akt phosphorylation, at 10-30 min, which returned to basal levels at 60 min. It then gradually increased to 12-18 h (Fig. 2 A). This gradual increase correlated well with the time course of changes in endothelial cell morphology and cell-cell junctions (McKenzie and Ridley, 2007).
Cells treated with TNF for 18 h became elongated, and junctions were disrupted, particularly at the ends of elongated cells where VE-cadherin became fragmented (Figs. 2 B and 3, A and C; circularity index). p110α depletion inhibited cell elongation and increased overlapping adherens junctions by over twofold, as well as inducing a small increase in cell spread area (Figs. 2 B and 3, B and D; junctional index). In addition, the levels of PECAM-1 and the tight junction marker ZO-1 were higher at junctions after p110α depletion (Figs. 2 B and 3 A). This indicates that junctions are strengthened when p110α is inhibited. These changes were not the result of altered expression of junctional proteins (Fig. S3 A). In contrast to p110α suppression, depletion of p110β, -δ, or -γ did not affect TNF-induced changes to cell-cell junctions or cell shape (Fig. 3, B-D; and Fig. S4, A and B). p110α depletion did not block all TNF-induced signaling because it did not affect early p38MAPK phosphorylation (5-60 min; Fig. S3 B). Interestingly, p110α expression increased gradually over 18 h after TNF addition (Fig. S3 C), which could contribute to the progressive changes to endothelial morphology and junctions.
PI3Ks affect multiple steps of the inflammatory process, including leukocyte TEM (Carman and Springer, 2004; Puri et al., 2004, 2005; Nakhaei-Nejad et al., 2007; Li et al., 2008; Serban et al., 2008). Class I (A and B) PI3Ks consist of a 110-kD catalytic subunit and a regulatory subunit. Class IA comprises three catalytic isoforms, p110α, -β, and -δ, bound to one of five regulatory subunits (p85α and -β, p55α and -γ, and p50α), whereas class IB comprises the p110γ catalytic isoform bound to either a p101 or p84 regulatory subunit (Suire et al., 2005; Cain and Ridley, 2009). Class IA isoforms are usually activated by binding of the regulatory subunit to Tyr phosphorylated proteins, whereas the class IB PI3K is activated by G protein-coupled receptors (Vanhaesebroeck et al., 2001). Studies in gene-targeted mice have shown that class IA PI3Ks, particularly p110α, are important for vascular development and angiogenesis (Graupera et al., 2008; Yuan et al., 2008), but whether they regulate endothelial junction integrity has not been studied.
In this study, we investigate the roles of each of the class I PI3K isoforms in endothelial barrier function and leukocyte TEM using siRNA and identify a key role for p110α in these processes through downstream effects on VE-cadherin Tyr phosphorylation, the Tyr kinase Pyk2, and the GTPase Rac.
Results
p110α regulates junctional morphology in endothelial cells
To investigate the roles of class I PI3K isoforms in regulating endothelial junctions, human umbilical vein endothelial cells (HUVECs) were transfected with siRNAs targeting p110α, -β, -δ, or -γ. Knockdown of each isoform did not affect the expression levels of the other isoforms or the regulatory subunit p85 (Fig. 1 A and Fig. S1, A and B). As a readout for PI3K activity, we monitored Akt phosphorylation at residues Ser473 and Thr308 (Vanhaesebroeck and Alessi, 2000; Vanhaesebroeck et al., 2001). Knockdown of p110α, -β, and -δ significantly reduced Akt phosphorylation (Fig. 1 A and Fig. S1 C). The strongest reduction was observed after p110α depletion, which is consistent with observations in VEGF-stimulated mouse endothelial cells (Graupera et al., 2008), whereas p110γ siRNA had only a small effect on total Akt phosphorylation, reflecting the very low levels of this isoform in endothelial cells (Graupera et al., 2008).
p110α regulates endothelial permeability
Endothelial permeability is regulated by cell-cell junctions, and thus, we investigated whether the effect of p110α depletion on junctional organization correlated with a change in endothelial barrier function. TNF treatment progressively increased endothelial permeability to FITC-dextran, with an approximately twofold increase at 18 h (Fig. 4 A; McKenzie and Ridley, 2007). p110α suppression with siRNA strongly reduced TNF-induced endothelial permeability (Fig. 4 A), whereas a small decrease was observed for p110δ- but not p110β- or p110γ-depleted cells.
We also used transendothelial resistance (TER) to measure the intrinsic barrier function of endothelial monolayers. The TER of HUVECs treated with TNF was 25% lower than that of unstimulated cells (Fig. 4 B). In agreement with the permeability data, cells depleted of p110α showed a significant increase in TER compared with controls (Fig. 4 B), whereas knockdown of the other p110 isoforms had no effect. As a positive control, VE-cadherin depletion also reduced resistance (Fig. 4 B), as previously reported (van Gils et al., 2009). In unstimulated cells, p110α depletion also led to an increase in TER (Fig. 4 C), correlating with the increase in overlapping junctions (Fig. 1 C).
In cells depleted of p110α, TNF still stimulated an increase in stress fibers, although they were no longer aligned in the same direction in adjacent cells and thus were unlikely to exert tension on junctions at the ends of cells (Fig. 2 B; McKenzie and Ridley, 2007). Therefore, a PI3K-independent, TNF-stimulated pathway regulates stress fiber assembly.
Endothelial p110α regulates leukocyte TEM
Leukocytes can cross the endothelium either through the junctions (paracellular TEM) or through the endothelial cells (transcellular route). Endothelial junctional integrity is known to affect leukocyte paracellular TEM (Aghajanian et al., 2008). To assess whether PI3K inhibition affected leukocyte TEM, we initially measured TEM of THP-1 cells, a monocyte-like cell line which only utilizes the paracellular route in vitro (Fig. 5 A and not depicted). There was a significant reduction in THP-1 TEM across p110α-depleted HUVECs, whereas p110β and -γ did not affect TEM. Interestingly, p110δ depletion induced a small reduction in TEM, which could reflect its postulated role in the selectin-mediated tethering of leukocytes to endothelial cells (Puri et al., 2004). To determine whether PI3Ks also affected transcellular TEM, we analyzed diapedesis of T cell lymphoblasts, which use both paracellular and transcellular pathways (Millán et al., 2006). The proportion of T lymphoblasts using the paracellular pathway was significantly reduced across p110α-depleted HUVECs, whereas transcellular TEM remained unaffected, which is consistent with a role for p110α in junctional regulation (Fig. 5 B). Transcellular TEM represented around 15% of TEM events, as previously observed (Millán et al., 2006), and was not affected by suppression of any p110 isoform.
To determine which step of leukocyte TEM was affected by p110α, we used a 3D assay system of HUVECs cultured on a thick collagen gel (Fig. 5 C). Depletion of endothelial cell p110α did not alter leukocyte adhesion after 10 or 60 min, and thus, differences in TEM were not caused by reduced cell attachment (Fig. 5, C and D). Indeed, expression of ICAM-1, a major leukocyte adhesion receptor up-regulated upon TNF stimulation, was not affected by p110α depletion (Fig. S1 A). Endothelial ICAM-1 is known to cluster around some adherent leukocytes (Barreiro et al., 2002; Carman and Springer, 2004), but this clustering was not affected by p110α levels (Fig. S4, C and D). However, p110α depletion reduced the number of cells able to transmigrate across the endothelium into the collagen gel matrix after 1 h (Fig. 5, C and E). Most adherent leukocytes on p110α siRNA-treated HUVECs appeared unable to cross endothelial junctions, as determined by staining with β-catenin (unpublished data). For those leukocytes that did cross the endothelium, their ability to invade the collagen matrix was similar (unpublished data), indicating that p110α depletion in endothelial cells does not alter leukocyte migration properties.
p110α regulates VE-cadherin Tyr phosphorylation
Tyr phosphorylation of VE-cadherin is believed to be an important factor in regulating vascular permeability and correlates with impaired barrier function (Dejana et al., 2008). TNF induced VE-cadherin Tyr phosphorylation (Fig. S5 A), which is consistent with previous observations (Angelini et al., 2006). p110α depletion reduced Tyr phosphorylation of VE-cadherin in TNF-stimulated cells (Fig. 6 A). Several Tyr residues in the VE-cadherin intracellular domain can be phosphorylated, including Y731, Y658, and Y685 (Allingham et al., 2007; Wallez et al., 2007; Turowski et al., 2008). To assess whether the p110α-regulated changes in total VE-cadherin phosphorylation were caused by one or more of these residues, lysates were blotted with phosphospecific antibodies for each of these sites. p110α siRNA-treated cells showed a strong reduction in Y731 phosphorylation and a smaller reduction at Y658 but no change at Y685 (Fig. 6 A), indicating that p110α regulates VE-cadherin Tyr phosphorylation on specific sites. It is also possible that the pY731 antibody recognizes other phospho-Tyr residues on VE-cadherin in addition to pY731 and that p110α regulates phosphorylation of these sites.
p110α depletion reduces Pyk2 activity and its association with VE-cadherin
Both Pyk2 and Src have been identified as potential Tyr kinases involved in the regulation of VE-cadherin phosphorylation (Allingham et al., 2007; Turowski et al., 2008). Pyk2 was present in VE-cadherin immunoprecipitates in unstimulated cells and at higher levels in TNF-stimulated cells, whereas its close relative FAK was not detected (Fig. 6 B and Fig. S5 A). Pyk2 association with VE-cadherin was much reduced in p110α-depleted cells, both in TNF-stimulated and unstimulated cells (Fig. 6 B and Fig. S5). In unstimulated cells, knockdown of p110γ also slightly reduced Pyk2-VE-cadherin association, suggesting that under some circumstances, p110γ could also affect junctional integrity, for example downstream of the G protein-coupled IL-8 receptor (Gavard et al., 2009), although not in the TNF response.
Y402 phosphorylation on Pyk2 reflects Pyk2 kinase activity (Mitra et al., 2005). TNF induced a small increase in pY402-Pyk2 at around 60 min and a larger increase at 12-18 h (Fig. 7), correlating with the delayed increase in PI3K activity induced by TNF (Fig. 2 A). Pyk2 Y402 phosphorylation was reduced by p110α suppression both in VE-cadherin immunoprecipitates and in total lysates, indicating that p110α regulates both Pyk2 association with VE-cadherin and Pyk2 activation. Src was also detected in VE-cadherin immunoprecipitates, but unlike Pyk2, neither its levels nor its activity, as measured with antibodies to pY416-Src (Parsons and Parsons, 2004), were affected by p110α depletion (Fig. 6 B). p110α depletion did not induce reduced VE-cadherin binding to all its partners, because the levels of β-catenin coimmunoprecipitated with VE-cadherin were not affected (Fig. 6 and Fig. S5 A).
(Figure legend fragment:) (Millán et al., 2006). (C) TNF-stimulated, p110α siRNA- or control siRNA-transfected HUVECs previously labeled with CellTracker orange dye were grown on collagen matrices in Transwell chambers. CellTracker green-labeled THP-1 cells were added to HUVECs, and TEM was allowed to proceed toward an MCP-1 gradient in the lower chamber for either 10 or 60 min before fixation; confocal z stacks were then collected. 3D reconstructions were produced using Volocity software. A small gamma correction was applied to the red channel to visualize weak CellTracker staining at the edges of cells, which are difficult to distinguish upon 3D rendering. (D and E) Quantification of adhesion to endothelial cells (D) or TEM (THP-1 cells within the collagen matrix; E). Results represent the mean and SEM of at least three independent experiments. Statistical significance was assessed by the Mann-Whitney U test; *, P < 0.05; **, P < 0.02. Bars, 40 µm.
The p85 subunit of PI3K has previously been reported to associate with Tyr phosphorylated VE-cadherin, which could be via the p85 SH2 domains (Hudry-Clergeon et al., 2005). The level of p85 coimmunoprecipitation with VE-cadherin was increased by TNF stimulation (Fig. 6 B and Fig. S5 A), reflecting the higher VE-cadherin Tyr phosphorylation. The amount of p85 immunoprecipitated with VE-cadherin was strikingly reduced after p110α depletion, both with and without TNF stimulation (Fig. 6 B and Fig. S5 A), suggesting that PI3Kα activity itself regulates p85 association with VE-cadherin by increasing VE-cadherin Tyr phosphorylation.
p110α depletion reduces Rac1 and RhoA activity
Rho GTPases are important regulators of endothelial barrier function (Wojciak-Stothard and Ridley, 2002; Fryer and Field, 2005; Tzima, 2006), and VE-cadherin has been implicated in the regulation of Rac1 activity (Lampugnani et al., 2002). In addition, PI3K isoforms affect Rho GTPase activity in endothelial and other cells (Papakonstanti et al., 2007; Graupera et al., 2008; Cain and Ridley, 2009). Therefore, we investigated whether p110α knockdown affected Rho, Rac, or Cdc42 activity. Despite being implicated in epithelial barrier function (Bruewer et al., 2004), active Cdc42 levels were not affected by p110α knockdown (Fig. 8 A). In contrast, Rac1 and RhoA activities were both dramatically reduced in p110α-depleted HUVECs but not by knockdown of the other PI3K isoforms. Interestingly, of the Rho isoforms, only the level of GTP-RhoA was affected by p110α but not GTP-RhoB or -RhoC (Fig. 8 B).
VE-cadherin regulates the localization of the Rac guanine nucleotide exchange factor (GEF) Tiam-1 to endothelial cell-cell junctions (Lampugnani et al., 2002), and PI3Ks regulate Tiam-1 localization through its N-terminal pleckstrin homology domain (Sander et al., 1998). To assess whether PI3K-mediated changes in Rac1 or RhoA activity were linked to VE-cadherin, VE-cadherin immunoprecipitates from p110α-depleted HUVECs were probed for the Rac GEFs Tiam-1 and Vav and the Rho GEF GEF-H1, which has previously been reported to localize to endothelial junctions (McKenzie and Ridley, 2007). Although Vav and GEF-H1 could not be detected (not depicted), Tiam-1 associated with VE-cadherin in both unstimulated and TNF-stimulated endothelial cells, and levels were lower in p110α-depleted HUVECs (Fig. 8 C and Fig. S5). This suggests that p110α mediates Rac1 activation by regulating Tiam-1 association with VE-cadherin.
Rac1 and Pyk2 regulate endothelial barrier function and leukocyte TEM
Because p110α regulates both Rac1 and Pyk2 activities, we investigated whether they contributed to p110α-regulated barrier function in endothelial cells, using siRNA to knock down their expression (Fig. 9 A). Both Pyk2 and Rac1 depletion reduced TNF-induced endothelial permeability (Fig. 9 B), whereas RhoA knockdown had no effect (Fig. S5). In addition, knockdown of Pyk2 or Rac1 in HUVECs reduced THP-1 TEM, and this effect was significant when both Pyk2 and Rac1 were depleted (Fig. 9 D). Interestingly, Pyk2 and Rac1 each had different effects on endothelial morphology that could affect barrier function. Like p110α suppression, Pyk2-depleted cells had an increase in overlapping cell junctions and junctional index, both in stimulated and unstimulated cells, whereas Rac1 knockdown did not affect the junctional index (Fig. 9 E and not depicted). In contrast, Rac1 and p110α but not Pyk2 depletion inhibited TNF-induced cell elongation (circularity; Fig. 9, C and F), which reduced the disruption of junctions observed at the ends of elongated cells (Fig. 2 B). Knockdown of both Rac1 and Pyk2 inhibited cell elongation and increased overlapping junctions (Fig. 9, E and F), although stress fibers were still induced, which is consistent with the results with p110α depletion (Fig. 2 B). These results indicate for the first time that TNF-induced changes to cell shape and junction morphology are regulated separately and that both of these responses are likely to affect barrier function.
Discussion
PI3Ks have been shown to regulate vascular integrity and angiogenesis during development, but the mechanistic basis for their effects on endothelial cells has not been determined. In this study, we have identified a specific role for p110α in regulating endothelial junctions, affecting both TNF-induced endothelial permeability and leukocyte TEM.
Tyr phosphorylation of the intracellular domain of VE-cadherin leads to protein binding and correlates with increased endothelial permeability and TEM (Lambeng et al., 2005; Potter et al., 2005; Turowski et al., 2008). Our data indicate that p110α plays a key role in the assembly of a VE-cadherin signaling complex by regulating Tyr phosphorylation of VE-cadherin residue Y731, with a smaller effect on Y658. Several kinases contribute to VE-cadherin phosphorylation, including Pyk2 and Src family kinases, and a dominant-negative Pyk2 construct was shown to affect junctional integrity (Weis et al., 2004; van Buul et al., 2005; Allingham et al., 2007; Wallez et al., 2007). In agreement with this, we find that knockdown of Pyk2 reduces both endothelial permeability and leukocyte TEM. We show in this study for the first time that p110α acts upstream of Pyk2 and regulates both Pyk2 autophosphorylation and its association with VE-cadherin. Like its close relative FAK, Pyk2 autophosphorylation recruits Src kinases (Basile et al., 2007). VE-cadherin Y731 is a putative Pyk2 target (Dejana et al., 2008), although it is unclear whether Pyk2 is able to directly phosphorylate VE-cadherin in vivo or whether it acts indirectly, for example by recruiting a Src kinase such as Fyn, which has been reported to mediate TNF-induced Tyr phosphorylation of VE-cadherin (Angelini et al., 2006). As Pyk2 has no SH2 domain, it is unlikely to bind directly to phosphorylated tyrosines in VE-cadherin, but it could be recruited via the p85 regulatory subunit of PI3K. We show that p85 associates with VE-cadherin and PI3K interacts with Pyk2 (Melikova et al., 2004), probably via the interaction of a p85 SH2 domain with a Pyk2 phospho-Tyr (van Buul et al., 2005; Allingham et al., 2007). Interestingly, p85 interaction with VE-cadherin is reduced by p110α inhibition, presumably because p85 association with the complex depends on PI3Kα-mediated VE-cadherin Tyr phosphorylation.
Our data suggest that p110α regulates Rac activity via recruitment of the Rac GEF Tiam-1 to the VE-cadherin complex. Recruitment of Tiam-1 to membranes in endothelial cells is dependent on VE-cadherin (Lampugnani et al., 2002), and the N-terminal pleckstrin homology domain of Tiam-1 binds to PtdIns(3,4,5)P3 (Hordijk et al., 1997; Michiels et al., 1997). Association of p110α via p85 with VE-cadherin might thereby provide a mechanism by which Tiam-1 is recruited and activates Rac1.
p110α inhibition prevents two TNF-induced morphological responses that could be linked to endothelial barrier function, cell junction morphology and cell elongation, via Pyk2 and Rac1, respectively. p110α and Pyk2 depletion both lead to an increase in adherens junction staining, predominantly in regions where endothelial cells overlap. These junction-rich regions are likely to form a strong barrier to both small molecules and leukocytes. In contrast, Rac1 mediates TNF-induced elongation and contributes to permeability and leukocyte TEM, indicating that cell shape affects barrier function. This could be because, in elongated cells, stress fibers exert a higher tension on cell-cell junctions and thus decrease junctional integrity at cell poles compared with cells with a more cobblestone shape. Indeed, leukocytes preferentially transmigrate at the poles of elongated endothelial cells (Millán et al., 2006), and endothelial shape changes have been proposed to contribute to TNF-induced vascular leakage in vivo (Cotran and Pober, 1990). Interestingly, RhoA-depleted cells did not show a reduction in TNF-induced stress fibers or altered barrier function. This is consistent with our previous results suggesting that TNF-induced stress fiber assembly and endothelial permeability do not correlate with RhoA activation (McKenzie and Ridley, 2007).
Previous studies have suggested that PI3Ks are important for efficient TEM in vivo and in vitro (Puri et al., 2004, 2005; Nakhaei-Nejad et al., 2007). In vitro, treatment of HUVECs with pan-PI3K inhibitors did not affect lymphocyte adhesion but reduced diapedesis. Our results indicate that this step specifically involves p110α. In vivo studies with gene-targeted mice have implicated endothelial p110δ and -γ in selectin-mediated neutrophil tethering and rolling but not ICAM-1-mediated firm attachment to endothelial cells (Puri et al., 2004, 2005), and indeed, we see a small effect of p110δ on leukocyte TEM in our model, which is consistent with an involvement in selectin function. So far, the roles of p110α or -β have not been tested in vivo. Collectively, these results and our data indicate that p110α acts in endothelial cells primarily to regulate changes in junctional integrity induced by TNF and thus influence leukocyte diapedesis, whereas p110δ and -γ are involved in selectin-mediated leukocyte tethering. It is also possible that p110δ affects adhesion receptor signaling in some way that reduces subsequent TEM.
In conclusion, we have shown that the p110α isoform of PI3K specifically regulates cell-cell contacts in endothelial cells, thereby contributing to TNF-induced changes to endothelial barrier function and leukocyte TEM. This selective role of p110α in endothelial cells could in part explain its contribution to angiogenesis (Graupera et al., 2008), which requires dynamic changes to endothelial cell-cell junctions and shape. Our results suggest that p110α-selective inhibitors could be used to treat chronic inflammatory diseases involving TNF, such as arthritis and atherosclerosis.
(Figure legend fragment:) TNF-stimulated, siRNA-transfected HUVECs were lysed, and VE-cadherin was immunoprecipitated (i.p.). Samples were separated by SDS-PAGE and analyzed by Western blotting with antibodies to Tiam-1. Total lysates (bottom) were probed in parallel to control for variations in protein level. GAPDH was used as a loading control.
Antibodies and reagents
Antibodies to p110α, -β, -δ, and -γ have been previously described (Graupera et al., 2008; Guillermet-Guibert et al., 2008). Antibodies to ZO-1, -2, and -3, JAM-A, occludin, VE-cadherin pY731, and claudin-5 were purchased from Invitrogen; antibodies to α- and β-catenin and β-actin were
TEM assay
HUVECs were plated at confluence on Transwell inserts (5-µm pore). After 6-8 h, cells were treated with 10 ng/ml TNF for 18 h. 10 µg/ml MCP-1 was added to the bottom chamber before the addition of 1 × 10^5 washed THP-1 cells to the upper chamber. Cells were left to transmigrate (1 h; 37°C) before medium from the bottom chamber was removed, and cells were counted using a cell counter (CASY; Innovatis). Each experiment was performed in triplicate. For quantification of T lymphoblast paracellular and transcellular TEM events, T lymphoblasts (provided by E. Cernuda-Morollon, University College London, London, England, UK) were added to confluent TNF-stimulated HUVECs on glass coverslips for 15 min before fixation with 4% paraformaldehyde. Samples were stained with anti-ICAM-1/Alexa Fluor 488-conjugated anti-mouse IgG, anti-β-catenin/Alexa Fluor 546-conjugated anti-rabbit IgG, and Alexa Fluor 633-conjugated phalloidin. T lymphoblasts undergoing TEM were easily distinguishable by confocal microscopy, as the leading edge was in contact with the coverslip, whereas the rear uropod was still localized on the surface of the endothelial cell. β-Catenin and ICAM-1 staining was used to determine whether the cell was undergoing paracellular or transcellular diapedesis, respectively. Each experiment was performed in quadruplicate, quantifying 25 fields per condition.
3D model for TEM
HUVECs were seeded at confluence onto collagen I gels polymerized in 12-µm pore Transwell filters. After 6-8 h, monolayers were treated with 10 ng/ml TNF for 18 h. HUVECs were labeled with 5 µM CellTracker orange for 40 min before washing in normal medium. 10 µg/ml MCP-1 was added to the bottom chamber before the addition of 1 × 10^4 washed CellTracker green-labeled THP-1 cells to the upper chamber. After 10 or 60 min, HUVECs were washed three times to remove unbound THP-1 cells, and then Transwells were fixed by immersion in 3.7% (wt/vol) paraformaldehyde before being processed for immunofluorescence microscopy. z sections were acquired at 200-nm intervals by confocal microscopy (see Immunofluorescence microscopy), and 3D images were reconstructed using Volocity image analysis software (PerkinElmer). Adhesion and TEM (leukocytes below the level of the endothelial cells) were assessed using ImageJ software to count CellTracker green voxels either above or within the matrix, and cell numbers were determined by comparison with a standard curve of known voxel/cell numbers.
Transendothelial permeability assays
HUVECs were seeded onto 0.4-µm pore Transwell filters (Thermo Fisher Scientific) at confluency.After 6-8 h, monolayers were treated with 10 ng/ml TNF for 18 h.0.1 µg/ml FITC-dextran (molecular weight, 42,000) was subsequently applied to the apical chamber and allowed to equilibrate for 45 min before a sample of the medium was removed from the lower chamber to measure basal permeability.Fluorescence was measured using a fusion universal microplate analyzer (Fusion-FA; PerkinElmer; excitation, 492 nm; detection, 520 nm), and data were expressed as a ratio of the control untreated monolayer fluorescence.All experiments were performed in triplicate, and results shown are the mean of at least four independent experiments.
TER measurement
Transfected HUVECs were plated at confluence on dishes containing gold electrodes (ECIS; Applied Biophysics).After 24 h, cells were washed in warm EGM-2 media and either treated with 10 ng/ml TNF for 18 h or left untreated before data were collected on a resistance meter (ECIS 1600; Applied Biophysics).Resistance data were collected at a constant voltage of 15 V for a period of 12 h, sampling data every 10 min.Data from a stable period of 4-6 h were used to calculate mean resistance.Control siRNA transfection did not alter the TNF-induced decrease in resistance.
Cell culture
Pooled HUVECs were purchased from Lonza and cultured in EGM-2 medium (Lonza) containing 3% FCS. Cells were used for experiments between passages 1 and 4 and were seeded onto fibronectin-coated (10 µg/ml; Sigma-Aldrich) flasks, glass coverslips, or filters. For experiments, HUVECs were stimulated with 10 ng/ml TNF for 18 h except where indicated. THP-1 cells were maintained in suspension culture in RPMI 1640 medium (Invitrogen) containing 10% FCS and 2 mM L-Gln at between 5 × 10⁵ and 2 × 10⁶/ml. T lymphoblasts were prepared from isolated human peripheral blood mononuclear cells (peripheral blood lymphocytes). After stimulation with 0.5% phytohemagglutinin for 48 h, nonadherent peripheral blood lymphocytes were washed and cultured in RPMI 1640 medium (10% FCS and 2 U/ml IL-2 in 5% CO2) and used in experiments after culturing for 7-12 d.

siRNA transfection

siRNA oligonucleotides were tested for knockdown activity by Western blotting and subsequently used individually (RhoA, Rac1, Pyk2, and VE-cadherin) or as four oligonucleotide pools. For the p110 isoforms, the siRNA pools gave the strongest knockdown and were therefore used in all subsequent experiments. siRNAs diluted in EBM-2 medium without FCS were premixed with EBM-2-diluted Oligofectamine reagent (Invitrogen) as described in the manufacturer's instructions. Cells were transfected (6 h) in 1 ml of EBM-2 medium containing growth supplements, but excluding antibiotics and FCS, giving a final oligonucleotide concentration of between 80 and 200 nM. 1 ml EBM-2 medium with growth factors and 8% FCS was subsequently added to each well, and cells were incubated overnight. After transfection (24 h), cells were trypsinized and plated either at confluence or subconfluence and allowed to adhere (8 h) on fibronectin-coated dishes or coverslips for the different assays.
Immunofluorescence microscopy
HUVECs grown to confluence on glass coverslips were fixed with 3.7% (wt/vol) paraformaldehyde, permeabilized with 0.2% (vol/vol) Triton X-100 in PBS (15 min), blocked with 3% (wt/vol) BSA in PBS (1 h), and then incubated with primary antibodies in 3% BSA in PBS (1 h). Samples were sequentially incubated for 1 h with Alexa Fluor 488- or Alexa Fluor 546-conjugated anti-rabbit or anti-mouse IgG secondary antibodies, followed by incubation with Alexa Fluor 633-conjugated phalloidin (20 min; Invitrogen). Coverslips were mounted using ProLong Antifade reagent (Invitrogen). Images were acquired at room temperature using a confocal microscope (LSM 510 META; Carl Zeiss, Inc.) with three single photomultiplier tube confocal detectors mounted on an inverted microscope (Axio-Observer Z1; Carl Zeiss, Inc.) at a magnification of 40 with an oil immersion objective (EC Plan-Neofluar 40× NA 1.30 oil differential interference contrast M27). Images were processed using Zen software (Carl Zeiss, Inc.), and figures were assembled using Photoshop CS4 (Adobe). Quantification of cell area, circularity, and junctional index (junctional area/cell number) was performed using ImageJ software (National Institutes of Health). Circularity was calculated using the formula 4π(area/perimeter²). A circularity value of 1 indicates a perfect circle; as the value approaches 0, it indicates an increasingly elongated polygon. Junctional index was calculated using the formula ([junctional area/total area] × 100)/cell number. Junctional area was calculated per field by using the VE-cadherin-stained channel, thresholding the image to create an even intensity stain corresponding to junctional area, and quantifying its area using the software analysis options (Fig. S2 B). In each case, a minimum of five fields were quantified (20 cells per field) per experiment, and data shown represent the mean of at least three independent experiments.

GTPase pull-down assays

Cell lysates were clarified (10 min; 4°C). A 50-µl sample of whole cell lysate was retained, and the remaining supernatant was incubated with the appropriate beads (1 h; 4°C with rotation). Beads were washed three times in pull-down buffer and boiled for 5 min in Laemmli sample buffer, and proteins were resolved by SDS-PAGE and Western blotting. Protein loading was determined by reprobing the whole cell lysate lanes with an anti-GAPDH antibody.
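The circularity and junctional index defined in the Immunofluorescence microscopy section above reduce to two short formulas. The following minimal sketch (our own illustration, assuming area and perimeter measurements exported from ImageJ) shows how they can be computed; it is not the authors' macro.

```python
import math

def circularity(area, perimeter):
    """4*pi*area/perimeter^2: 1 for a perfect circle, ->0 for elongated shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

def junctional_index(junctional_area, total_area, cell_number):
    """([junctional area / total area] * 100) / cell number, computed per field."""
    return (junctional_area / total_area * 100.0) / cell_number

# Example: a square cell outline, and a field of 20 cells with 12% junctional area
print(circularity(area=100.0, perimeter=40.0))                                   # ~0.785
print(junctional_index(junctional_area=12.0, total_area=100.0, cell_number=20))  # 0.6
```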
Immunoprecipitation and Western blotting
HUVECs were harvested by scraping into lysis buffer (50 mM Tris-HCl, pH 7.4, 1% [vol/vol] NP-40, 150 mM NaCl, 0.25% Na deoxycholate, 1 mM EGTA, 1 mM Na orthovanadate, 1 mM Na fluoride, 1 µM PMSF, and 1 µg/ml each of aprotinin, leupeptin, and pepstatin) and lysed by passing through a 21-gauge needle.Lysates were clarified (10,000 g; 10 min; 4°C), and protein concentrations were measured and standardized.VE-cadherin was immunoprecipitated by the addition of anti-VE-cadherin monoclonal antibodies and protein G-conjugated agarose beads (2 h; 4°C).A sample of identically treated control lysates was incubated with agarose-conjugated anti-mouse IgG as a negative antibody control.Beads were sedimented (500 g; 1 min) and washed three times with lysis buffer containing an additional 100 mM NaCl.Immunoprecipitated proteins were eluted and boiled in Laemmli buffer containing 0.1 mM DTT.Samples were separated by SDS-PAGE using a gradient gel system (12-15% gradient gels; Invitrogen), transferred to polyvinylidene fluoride membranes (2 h; 50 V), and blocked with 5% nonfat dried milk or 5% BSA in TBS (20 mM Tris-HCl, pH 7.5, and 150 mM NaCl), followed by subsequent incubations with appropriate primary and secondary antibodies in TBS with 5% nonfat dried milk/BSA (1 h or overnight) and washing in TBS-T (TBS with 0.1% Triton X-100).Proteins were detected using the enhanced chemiluminescence detection system (GE Healthcare).
Online supplemental material
Fig. S1 shows the knockdown efficiency of p110 isoforms and effects on Akt phosphorylation. Fig. S2 shows that depletion of p110β, -γ, or -δ does not affect endothelial junctions or morphology. Fig. S3 shows that depletion of p110 isoforms does not affect the expression of junctional proteins or TNF-induced p38MAPK activation. Fig. S4 shows that depletion of p110β, -γ, or -δ does not affect junctional organization or ICAM-1 clustering in TNF-stimulated cells. Fig. S5 shows the effects of p110α, Rac1, and RhoA depletion on endothelial junctions. Online supplemental material is available at http://www.jcb.org/cgi/content/full/jcb.200907135/DC1.
Figure 1 .
Figure 1.Inhibition of p110 increases junctional overlap in endothelial cells.(A) p110 siRNA-transfected HUVECs were lysed and analyzed by SDS-PAGE and blotting for PI3K subunits and Akt phosphorylation.siRNA oligonucleotides specifically knock down only individual isoforms and do not affect levels of the regulatory p85 subunit.GAPDH was used as a loading control.(B) Immunofluorescence micrographs of HUVECs transfected with p110 or control siRNA.Samples were stained with antibodies to VE-cadherin and PECAM-1 and Alexa Fluor 633-conjugated phalloidin to visualize F-actin.The arrow and arrowheads point to examples of linear and overlapping junctions, respectively.Bottom panels are magnifications of the boxed regions (merged images), showing detail of overlapping junctions.(C-E) Junctional index (junctional area/cell number; C), cell area (D), and cell circularity (E) were determined from immunofluorescence images using ImageJ software.In each case, a minimum of five fields were quantified (20 cells per field) per experiment, and data represent the mean and SEM of at least three independent experiments.Statistical significance was assessed by the Mann-Whitney U test; *, P < 0.05.Bars, 20 µm.
Figure 2 .
Figure2.Inhibition of p110 affects cellcell junctions and cell morphology after TNFmediated inflammation.(A) HUVECs were either stimulated with TNF (10 min to 18 h) or left unstimulated before lysis, SDS-PAGE, and Western blotting with pAktS473, Akt, and GAPDH antisera.(B) Immunofluorescence micrographs of HUVECs transfected with p110 or control siRNA and either stimulated with TNF (18 h) or left unstimulated before fixation.Samples were stained with antibodies to VE-cadherin and PECAM-1 and Alexa Fluor 633-conjugated phalloidin to visualize F-actin.Bottom panels are magnifications of the boxed regions (merged images), showing detail of a linear junction (untreated), disrupted junctions at the boundary of two elongated cells (TNF + si-control), or overlapping junctions (TNF + si-p110).The arrowhead shows an example of an overlapping junction.Bars, 20 µm.
Figure 3 .
Figure 3. Inhibition of p110α increases junctional index, cell area, and cell circularity. (A) Immunofluorescence micrographs of HUVECs transfected with p110α or control siRNA and either stimulated with TNF (18 h) or left unstimulated before fixation. Samples were stained with antibodies to ZO-1 and β-catenin. Right panels are magnifications of the boxed regions (merged images), showing detail of cell-cell junctions. (B-D) Junctional index (junctional area/cell number; B), cell circularity (C), and area (D) were determined from immunofluorescence images using ImageJ software. In each case, a minimum of five fields were quantified (20 cells per field) per experiment, and data represent the mean and SEM of at least three independent experiments. Statistical significance was assessed by the Mann-Whitney U test; *, P < 0.05; **, P < 0.02. Bar, 20 µm.
Figure 4 .
Figure 4. Inhibition of p110 increases barrier function in TNF-stimulated endothelial cells.(A) p110 siRNA-transfected HUVECs were either TNFstimulated (16-18 h; A) or unstimulated before permeability to FITCdextran was assessed.(B and C) TNF-stimulated (B) and unstimulated (C), p110 siRNA-or VE-cadherin siRNA-transfected HUVECs were seeded at confluence in ECIS electrode chambers, and TER was measured (15 V; 12 h; sampling every 10 min).Data from a stable TER period of 4-6 h were used to calculate mean resistance, and comparisons were made with unstimulated monolayers (assigned as 100%).Mean change in resistance
Figure 5 .
Figure5.p110 inhibition in endothelial cells reduces leukocyte TEM.(A) THP-1 cells were added to TNF-stimulated, siRNA-transfected HUVECs grown in Transwell chambers, and the resultant TEM efficiency toward MCP-1 was determined after 1 h.(B) T lymphoblasts were added to TNF-stimulated, siRNA-transfected HUVECs and fixed after 15 min.Samples were analyzed by confocal microscopy and scored visually for paracellular and transcellular TEM events(Millán et al., 2006).(C) TNF-stimulated, p110 siRNA-or control siRNA-transfected HUVECs previously labeled with CellTracker orange dye were grown on collagen matrices in Transwell chambers.CellTracker greenlabeled THP-1 cells were added to HUVECs, and TEM was allowed to proceed toward an MCP-1 gradient in the lower chamber for either 10 or 60 min before fixation; confocal z stacks were then collected.3D reconstructions were produced using Volocity software.A small gamma correction was applied to the red channel to visualize weak cell tracker staining at the edges of cell, which are difficult to distinguish upon 3D rendering.(D and E) Quantification of adhesion to endothelial cells (D) or TEM (THP-1 cells within the collagen matrix; E).Results represent the mean and SEM of at least three independent experiments.Statistical significance was assessed by the Mann-Whitney U test; *, P < 0.05; **, P < 0.02.Bars, 40 µm.
Figure 6 .
Figure6.Inhibition of p110 reduces VEcadherin Tyr phosphorylation and Pyk2 activity.(A and B) TNF-stimulated, siRNAtransfected HUVECs were lysed, and VEcadherin was immunoprecipitated (i.p.).Samples were separated by SDS-PAGE and analyzed by Western blotting with antibodies to total phospho-Tyr and for VE-cadherin phosphorylated at Y658, Y685, and Y731 (A) and other proteins as indicated (B).Total lysates were probed in parallel (blots on right) to control for variations in protein level.
permeability and leukocyte TEM.Our results indicate that p110 acts via VE-cadherin Tyr phosphorylation, the Tyr kinase Pyk2, and the Rho GTPase Rac1 to regulate these responses.
Figure 7 .
Figure 7. Pyk2 activity is increased by TNF stimulation.Endothelial cells were either stimulated with TNF (10 min to 18 h) or left unstimulated before lysis, SDS-PAGE, and immunoblotting with pY402-Pyk2, Pyk2, and GAPDH antisera.
Figure 8 .
Figure 8. p110α depletion decreases RhoA and Rac but not Cdc42 activity. (A and B) TNF-stimulated, p110α siRNA-transfected HUVECs were lysed, and Rho GTPase activity was determined by affinity for GST-Rhotekin-RBD- or GST-PAK1-PBD-conjugated beads for RhoA/B/C and Rac/Cdc42, respectively. After washing, samples were separated by SDS-PAGE and analyzed by Western blotting with antibodies to Rac1 or Cdc42 (A) or RhoA, -B, or -C (B). Total lysates (bottom) of each sample were Western blotted in parallel to control for total amounts of GTPase. GAPDH was used as a loading control. (C) PI3K inhibition reduces Tiam-1 association with VE-cadherin. TNF-stimulated, siRNA-transfected HUVECs were lysed, and VE-cadherin was immunoprecipitated (i.p.). Samples were separated by SDS-PAGE and analyzed by Western blotting with antibodies to Tiam-1. Total lysates (bottom) were probed in parallel to control for variations in protein level. GAPDH was used as a loading control.
Figure 9 .
Figure 9. Pyk2 and Rac1 depletion alter endothelial junctions and leukocyte TEM.HUVECs were transfected with either control siRNA or siRNA to Pyk2 and/or Rac1 and then stimulated with TNF (18 h) or left unstimulated.(A) HUVECs were lysed and analyzed by SDS-PAGE and Western blotting with Pyk2, Rac1, and GAPDH antibodies to assess siRNA knockdown efficiency and ICAM-1 to assess TNF stimulation.The asterisk indicates a nonspecific band; Rac1 is the top band.(B) Barrier function of siRNA-treated HUVECs either stimulated with TNF (16-18 h) or left unstimulated was assessed by permeability to FITC-dextran.Unstimulated monolayers were assigned as 100%.(C) Immunofluorescence micrographs of siRNA-transfected HUVECs.Samples were stained with antibodies to VE-cadherin and Alexa Fluor 633-conjugated phalloidin to visualize F-actin.Arrowheads indicate examples of overlapping junctions.(D) THP-1 cells were added to siRNA-treated, TNF-stimulated HUVECs in Transwell chambers, and the resultant TEM efficiency toward MCP-1 was determined after 1 h.(E and F) Junctional index (junctional area/cell number; E) and cell circularity (F) were determined from immunofluorescence images.Data represent the mean and SEM of at least four independent experiments, each performed in triplicate.Statistical significance was assessed by the Mann-Whitney U test; *, P < 0.05; **, P < 0.02.Bars, 20 µm. | 2019-02-28T23:32:14.317Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "c6e555d1c097a1ef3b98bd2cd5334e492cd0af90",
"oa_license": "CCBYNCSA",
"oa_url": "http://jcb.rupress.org/content/188/6/863.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9b5d6709bb46673d25c6b7911eda337d5a5669a6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
110936910 | pes2o/s2orc | v3-fos-license | Implementing a simple vectorial bridge with a digital oscilloscope
We show how to exploit instrumentation available in undergraduate student laboratories to build a simple vectorial bridge. In particular we take advantage from the possibility to read data from a digital oscilloscope with a personal computer and describe an algorithm to obtain an accurate evaluation of the phase difference between two sinusoidal signals. The use of the bridge to characterize components of a high Q RLC filter is shown to greatly improve the understanding of results in resonance experiments. Direct evidence of dielectric losses, of skin currents and of the effect of distributed capacitance is obtained.
I Introduction
The study of resonant circuits, both in theory and experiments, is an important step in the learning path of physics students. RLC systems provide a paradigmatic tool to introduce young learners to the concept of normal modes and to the methodologies of linear system analysis. 1 , 2 , 3 In the laboratory RLC circuits offer the opportunity to familiarize with instruments of fundamental importance in the future professional life such as oscilloscopes, waveform generators and data acquisition with personal computers. 4 It is therefore rather disturbing that it is never easy to get a good matching when it comes to comparing experimental results obtained with high Q RLC circuits with theoretical predictions 2, 5 .
These difficulties stem from shortcomings in the modelling of real components and specifically from distributed capacitance and frequency dependent dissipation caused by skin effect, dielectric or magnetic losses. These effects are very difficult to compute from theory thus making a quantitative evaluation from first principles impossible. A viable alternative consists in a better characterization of the available components obtained by measuring their complex impedance as a function of the excitation frequency. This can be done with a variable frequency vectorial bridge 6 which however is not often available in student laboratories.
In this paper we show that the same instrumentation used for the RLC experiment can be exploited to build such an instrument. In particular we will take advantage of the possibility to read the data from a digital oscilloscope with a personal computer and we will describe an algorithm to obtain a very accurate evaluation of the phase difference between two sinusoidal signals.
Indeed any method aiming at quantifying dissipation in reactive components boils down to measuring the phase angle φ between the voltage applied and the current flowing in the component.
The relative accuracy achievable on the equivalent resistance R_X turns out to be linked to the phase measurement accuracy δφ by the relation δR_X/R_X ≈ (X(ω)/R_X)·δφ. Since the reactance X(ω) can be a couple of orders of magnitude larger than R_X, an accuracy of 10% for the dissipation measurement requires a phase accuracy of the order of 10^-3 radian, i.e. 0.05°. We will show that this accuracy is achievable with a careful exploitation of instruments in current use in a student laboratory.
The structure of the paper is the following: phase measurements with a digital oscilloscope and their uncertainties are described in the next two sections; a very simple bridge configuration is then presented in section IV together with the expressions to evaluate both resistance and reactance with their uncertainties. Then in section V we illustrate the difficulties that may be encountered in accounting for experimental observations with a high Q resonant RLC filter by discussing a couple of exempla. Finally in section VI we show the results obtained for the frequency dependence of the impedance of the capacitors and the inductor used in the measurements. The last section presents the conclusions of our work.
II Phase measurements with a digital oscilloscope
In the realm of analogue electronics we could obtain the phase difference φ of two signals by using a device that performs their product. We would then deal with a signal V(t) containing a constant (dc) component proportional to cos(φ) and an alternating component at twice the original frequency, and we could recover the cosine of the phase angle from the ratio of these two components. If we measure V(t) with an oscilloscope it is then sufficient to measure its maximum value M and minimum value m to obtain cos(φ) = (M+m)/(M−m).
With a double channel digital oscilloscope we can obtain a better accuracy by measuring directly the two original voltage signals and reading the data files with a personal computer. Indeed a number of methods can be found in the published literature aiming at optimally recovering the phase difference between two digitized sinusoidal signals 7 and recommended standards have been issued by the IEEE. 8 , 9 In the following we will describe in some detail an algorithm that: i. is best suited for use in an impedance meter, ii. takes into account all relevant features of an analogical to digital converter, iii. allows for a clear evaluation of relevant uncertainties, iv. can be usefully illustrated in a single undergraduate classroom session.
Neglecting noise for the time being, we will have to deal with a digital representation of each signal consisting of a number N_p of data points taken at a constant and precisely known sampling time. Each sample can be written as the sinusoidal signal at the sampling instant, scaled and shifted by the acquisition chain (expression (1)), where the coefficients K represent the ADC's overall calibration factor, δ_0 their offset value and δ_i the quantization errors.
Taking the product of the two sampled signals we see that a number of additional terms interfere with the phase information we are looking for, and we must handle them properly to minimize their impact. Since quantization errors are in principle not correlated, we are led to work with averages of this expression. Therefore, assuming knowledge of the signal period and deferring a discussion of the impact of its uncertainty to the following section, we will average over an integer number of periods so that the second and the third term have a null expected value. The first term then yields a numerical evaluation of K_in K_out V_in V_out cos(φ)/2, whose accuracy we will comment on later.
For the last term we note that, since the two factors are not correlated, its average can be recovered from the product of their averages. Using the symbol < > to denote our numerical averages, this contribution can therefore be evaluated directly from the product of the averages of the two sampled signals and subtracted. The values of V_in and V_out can be recovered numerically by evaluating the root mean square of the two signals. Proceeding as above for the input signal and averaging, we obtain expression (2), in which <δ²> is the variance of the distribution of quantization errors.
We can write a similar expression for the output channel and finally obtain the phase cosine (expression (3)) in a form that does not depend upon the calibration factors K_in and K_out, whose uncertainties therefore do not affect this measurement.
Only the absolute value of the phase can be recovered from the knowledge of its cosine, but the availability of digital data allows its sign to be determined too. When the frequency is known, we can add a digital phase delay to the output signal by shifting it with respect to the input signal by a number of data points corresponding to a quarter of the period, that is a phase shift of π/2, and use the same algorithm to obtain cos(φ − π/2) = sin(φ), from which the sign of the phase angle is deduced.
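As an illustration, the recipe described in this section can be condensed into a few lines of code. The sketch below is our own minimal implementation, with simplifying assumptions stated in the comments (integer number of periods in the record, ADC offsets removed by subtracting sample means, quantization-variance and noise-bias corrections omitted); function and variable names are ours.

```python
import numpy as np

def phase_difference(v_in, v_out, samples_per_period):
    v_in = np.asarray(v_in, dtype=float)
    v_out = np.asarray(v_out, dtype=float)
    # keep an integer number of periods
    n = (len(v_in) // samples_per_period) * samples_per_period
    x = v_in[:n] - v_in[:n].mean()      # removes the K_in*delta_0 offset
    y = v_out[:n] - v_out[:n].mean()    # removes the K_out*delta_0 offset
    norm = np.sqrt(np.mean(x ** 2)) * np.sqrt(np.mean(y ** 2))
    cos_phi = np.mean(x * y) / norm     # calibration factors cancel in the ratio
    # sign of the phase: delay the output by a quarter period and reuse the
    # same estimator, which now returns cos(phi - pi/2) = sin(phi)
    y_delayed = np.roll(y, samples_per_period // 4)
    sin_phi = np.mean(x * y_delayed) / norm
    return np.arctan2(sin_phi, cos_phi)

# quick self-test with synthetic, crudely quantized 8-bit data
t = np.arange(4000)
spp = 200                                 # samples per period
phi_true = 0.3
adc = lambda s: np.round(s * 127) / 127   # toy 8-bit quantization
vin = adc(np.sin(2 * np.pi * t / spp))
vout = adc(0.5 * np.sin(2 * np.pi * t / spp + phi_true))
print(phase_difference(vin, vout, spp))   # ~0.3 rad
```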
Using a Taylor expansion, it can be shown that the numerical evaluation of the averages involved in our phase algorithm does not produce a significant error when the average is performed on an integer number of cycles. This is of course not possible when the ratio of the signal period to the sampling time is not rational, and this results in a phase uncertainty that can easily be shown to be lower than 1/N, N being the number of data points used in the averaging process. In the following we will take care to use N > 2000 to make sure that this uncertainty stays below our target of δφ < 10^-3 radian. Table 1 compares the phase values obtained with our algorithm to those obtained with an independent analysis program developed at CERN.10 As shown in Table 1 the agreement is excellent: the difference between the two values always stays below the combined statistical uncertainty and on average does not exceed one part in a thousand. This comparison fully validates the algorithm discussed in this section.
III Phase uncertainties
Statistical uncertainties in the phase measurements are caused by fluctuations in the value of the quantization errors. Starting from expressions (1), (2) and (3), a long but straightforward calculation (see appendix) shows that, to leading order in the size of the quantization errors, the phase fluctuation is given by expression (4), which we will use to quantify the statistical phase uncertainty in the impedance measurements described in the rest of this paper.
This finding implies that quantization errors cause a statistical fluctuation δφ of the phase that does not depend upon the phase value and is determined only by the ratio of the bit size to the signal amplitude. We have tested this result in fig. 2 showing that the measured phase fluctuation for fixed signal amplitude is well described by eq. (4) when the vertical range of the oscilloscope channels is increased to change the bit size. This figure also shows that adapting the vertical range of the oscilloscope channels to the signal amplitudes we can obtain a statistical uncertainty on the phase angle small enough to be useful in impedance measurements.
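A quick numerical experiment makes this scaling tangible. The following self-contained sketch (our own, with arbitrary parameters) quantizes synthetic sinusoids at different bit depths and shows that the spread of the reconstructed phase grows with the bit size, in line with eq. (4); it is a toy check, not the analysis used for fig. 2.

```python
import numpy as np

def est_phase(x, y):
    # product/RMS estimator of the phase cosine, offsets removed by the means
    x = x - x.mean()
    y = y - y.mean()
    return np.arccos(np.mean(x * y) / np.sqrt(np.mean(x**2) * np.mean(y**2)))

rng = np.random.default_rng(0)
N, spp, phi = 4000, 200, 0.7              # samples, samples/period, true phase
t = np.arange(N)
for bits in (6, 8, 10):
    lsb = 2.0 / (2 ** bits)               # bit size for a full-scale signal
    phases = []
    for _ in range(200):
        t0 = rng.uniform(0, spp)          # random start point of the record
        x = np.round(np.sin(2*np.pi*(t+t0)/spp) / lsb) * lsb
        y = np.round(np.sin(2*np.pi*(t+t0)/spp + phi) / lsb) * lsb
        phases.append(est_phase(x, y))
    print(bits, "bits -> std(phi) =", np.std(phases))
```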
The impact of electrical noise on phase accuracy can be evaluated by proceeding as in the previous section. It is easy to demonstrate that, since noise is not correlated with quantization errors, the noise contribution adds to the phase uncertainty as given in expression (5). We have again successfully tested this result by changing the signal amplitude in experiments with a fixed noise level.
Noise is seldom important in the input channel, but it can significantly affect the output signal when the latter becomes too small, depending on the impedance we are attempting to measure. It can be shown using eq. (5) that, to be compatible with our accuracy goal, the signal-to-noise ratio in the output channel should not fall below a value of about 25 for N = 2000.
It is important to stress at this point that the presence of noise also affects the phase evaluation.
With the method used above, and assuming that the noise in the input and output channels is not correlated, it is easily shown that the previous expression (3) must be modified to account for the noise contribution. To leading order, the noise amplitude introduces a systematic overestimate of the phase cosine by a fraction of its value set by the noise-to-signal power ratio. With the limit stated above on the signal-to-noise ratio we can neglect this noise bias except when measuring small phase angles, where it can translate into an important phase error. To circumvent this problem, when the phase angle is below 30° we introduce a digital phase delay of 45°, evaluate the phase difference between the two signals and then subtract the digital delay to obtain a better evaluation of the original phase angle. This makes the noise bias negligible with respect to the statistical uncertainty also in these unfavourable circumstances.
Before closing this section, let us note for future use that the statistical uncertainty on the effective input signal amplitude can be computed starting from eq. (2) by the method outlined in the appendix; a similar expression holds for the output channel.
IV A simple vectorial bridge
The simplest experimental arrangement for impedance measurements that allows exploiting the opportunities offered by a digital oscilloscope is illustrated in fig. 3: a waveform generator drives the series combination of the unknown impedance Z and a reference impedance Z_0, and the input and output voltages are acquired by the two channels of a digital oscilloscope whose memory buffer is read by a personal computer.
The unknown impedance Z is obtained in terms of the reference impedance Z_0 from the measured attenuation A_0 of the output signal with respect to the input and from the phase delay φ_0 between them. This method can be useful when measuring the reactive component of the unknown impedance, but it suffers from two drawbacks that considerably reduce its accuracy in the measurement of the resistance.
Recalling the analysis of section II, we note that the experimental determination of the output attenuation A 0 depends on the values of the two overall calibration factors of the input, K in , and output, K out , channels, that are generally poorly known. Assuming as a guideline that these factors are equal to 1 with a relative uncertainty of 1/128 for our 7+1 bit ADC converters, we see that the attenuation A 0 is affected by a systematic uncertainty larger than 1% that can impact severely on the uncertainty affecting the value of R Z .
The second drawback stems from the fact that, although we may try to compensate the two probes used to pick up the signals as well as we can, the phase shifts introduced by the two measuring channels of the oscilloscope are bound to differ by an amount that can severely affect our measurements.
Both these problems can be solved by performing a new set of measurements of phase shift and attenuation with the same setup after replacing the unknown impedance Z with a known test impedance Z_T; combining the two sets of measurements yields the working formula, eq. (7), for Z. This formula shows that, when the two sets of measurements are performed with the same settings of the digital oscilloscope, the results for Z are independent of the voltage calibration factors and of any phase shift introduced by the measurement channels. In these conditions only statistical fluctuations and noise have to be taken into account for the attenuations. Comparing with eq. (5) we see that the relative statistical uncertainty affecting the attenuation is very similar to the absolute statistical phase uncertainty. With the use of eq. (7), their effect on the measured values of resistance and reactance can be easily evaluated.
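To make the procedure concrete, the following sketch shows one way the two measurement sets can be combined numerically. It assumes the series-divider topology of fig. 3 with the output read across the reference resistor R_0; since eqs. (6) and (7) are not reproduced in this text, the formula below is our reconstruction under that assumption rather than the authors' exact expression, and all readings in the example are made up.

```python
import cmath

def measured_ratio(A, phi):
    """Complex output/input ratio as reported by the oscilloscope channels."""
    return A * cmath.exp(1j * phi)

def unknown_impedance(R0, RT, A_z, phi_z, A_t, phi_t):
    # Calibration step: with the known test resistor RT in place of Z, the true
    # divider ratio is R0/(RT+R0); the complex channel mismatch G cancels below.
    G = measured_ratio(A_t, phi_t) / (R0 / (RT + R0))
    r = measured_ratio(A_z, phi_z) / G      # corrected ratio R0/(Z+R0)
    return R0 * (1.0 / r - 1.0)             # solve the divider for Z

# Example with made-up readings at one frequency
Z = unknown_impedance(R0=100.0, RT=1000.0,
                      A_z=0.05, phi_z=-1.45,   # with the unknown impedance
                      A_t=0.0909, phi_t=0.0)   # with the test resistor
print(Z.real, Z.imag)   # equivalent resistance and reactance
```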
Thin metal film resistors are the most appropriate choice for both Z 0 and Z T since they have lower tolerances and temperature coefficients, lower noise, lower parasitic inductance and capacitance.
Moreover, their value can be measured to an accuracy of a few parts per thousand with the low-cost digital dc ohmmeters available in all student laboratories. The proper choice of their values depends on the reactance under test, and the impact of their uncertainty on the measured value can be made comparable to the statistical uncertainties discussed above.
When aiming at measurements in the high frequency range, it is important to keep in mind that the impedance of the probe used to read the output signal affects the value of Z_0, which then becomes complex. When the knowledge of the probe capacity C_p is only approximate, the value of R_0 should be sufficiently low to guarantee that the needed corrections are negligible. A similar consideration applies to the stray capacity C_T across the test impedance Z_T. In this case an upper limit exists for the value of the resistance R_T, which in turn limits the useful frequency range for inductance measurements to ω << 1/√(LC_T). As we will show in the following, it is relatively easy, by choosing the value of R_0 adequately, to maintain statistical uncertainties below 10% for resistance and 1% for reactance measurements within this frequency limit.
Systematic effects due to trigger jitter are avoided by using the oscilloscope in single sweep option.
The most serious parasitic effect affecting phase measurement is due to capacitive pickup and/or ground loop interference in the output channel. Therefore it is important that the layout of the experiment is carefully selected, keeping all conductor leads as short as possible, avoiding the use of jumper connections and using screened cables for signal routing.
It is always important to check the results against these possible interferences by working with multiple values of R_0 or R_T for each sampled frequency. It is also important to verify that the results obtained are compatible within experimental uncertainties after exchanging the two measurement channels of the oscilloscope. It should be noted, however, that with these precautions we succeeded in performing the measurements illustrated below using a solder-less breadboard to connect the components.
V Resonant circuit experiments
In spite of its apparent simplicity, an experiment with a resonant RLC circuit leading to an accurate comparison with theoretical predictions requires a careful planning and an accurate choice of components and procedures. The main point to bear in mind is that the voltage across the two reactive components at resonance is higher than the input voltage by a factor equal to the quality factor Q of the filter. This can induce a current that is likely to be larger than the one used when testing the single components. To minimize the impact of this effect on the results a number of precautions are needed. First of all we should select an air core inductance for our experiment to avoid to be fooled by non linear properties of a magnetic core. Moreover, because of the higher dissipation caused by the higher current, we should use components with a low temperature sensitivity coefficient. This applies to the choice of the capacitor whose non linear dielectric properties are also a concern.
In the first experiment to be described here we used an air core inductor whose inductance L, as measured by a low frequency vectorial bridge, is equal to 2.790±0.005 mH, a polyester film capacitor whose capacity C, measured with the same bridge, is equal to 9.90±0.03 nF, and a resistor. In the absence of dissipation in the reactive components we would also expect the measured attenuation at the resonant frequency to be equal to unity, which is never the case in the experiment. Dissipation takes place in the inductor due to the finite conductivity of its winding. Measuring its equivalent resistance R_L with a dc ohmmeter may be adequate at low frequency but could prove wrong when the frequency is high enough to make the skin depth comparable to the wire radius. The dashed lines in fig. 5 take into account a measured dc value of 7.65 Ω for R_L but still badly miss the experimental data. As shown by the continuous lines in fig. 4, an excellent fit to the data is obtained by arbitrarily increasing the total parasitic resistance to 17.2 Ω, whose origin must be clarified.
Dissipation takes also place in the capacitor mainly due to dielectric losses which exhibit both intrinsic frequency dependence due dielectric behaviour and an extrinsic one due to reduction of the shows that we observe in this case a resonant frequency 1% lower than prediction. Now to fit the data we have to adjust the value of the inductance or the capacity and we need to increase the parasitic losses to an equivalent of about 60 Ω (see the continuous line in fig. 5). This frequency dependence of both losses and reactance remains to be explained. were available we first checked that they were compatible with the respective accuracies and then we combined them in a weighted average. Moreover all data obtained at 1 kHz and 10 kHz were successfully checked against measurements performed with a dual frequency vectorial bridge available in the student laboratory. The values of the measured inductance for our air core inductor are displayed in fig. 6 in the frequency range 1 kHz to 700 kHz. This figure shows that L remains relatively constant up to 30 kHz and then starts to increase significantly. At 100 kHz we detect a relative increase of 3.5% with respect to the low frequency value that accounts for the resonance When the value is higher than the resistance of leads and contacts, they scale inversely to their capacity, as justified from geometrical considerations.
The frequency dependence of the resistance of the inductor is much steeper, see fig. 9. An increase of 15% is observed at 30 kHz with respect to the dc value which becomes a factor 3.5 at 100 kHz where the skin depth becomes 0.2 mm, comparable with the wire radius of 0.15 mm. The peak observed at 500 kHz is caused by the parallel resonance with the stray inter-wire capacity discussed above.
The same model used to fit the reactance data can also be exploited to recover the naked resistance of the coil. The prediction obtained when proximity effects are neglected is well below our experimental data. The full line takes into account the proximity effects and is computed for a 5-layer coil of the same wire diameter and a filling factor of 80%. We see that in this case the model has the capability to reproduce the experimental observations up to values of the skin depth lower than a few times the wire radius.
VII Conclusions
The data of the previous section allow for a better understanding of the observation reported in section V. They quantify the importance of additional loss mechanisms both in the capacitor, due to a frequency dependent dissipation caused by polarization currents in the dielectric, and in the inductor, where eddy currents and the parallel resonance with the inter-turn capacity greatly enhance the apparent resistance of the coil.
In the narrow band-pass of our two filters we can neglect any frequency dependence and obtain an independent estimate of the inductance L, the capacity C and the total losses R_L+R_C at the resonant frequency ν_0 by a fit of the measured frequency response. A project that exploits the material presented in this paper could start with a first step consisting in the design, construction and characterization of a high Q band-pass filter. A discussion of the results obtained should then lead to identifying the necessity of a better evaluation of component impedances.

Appendix

A similar expression can be recovered for the contribution of the output channel and, since the two channels are not correlated, expression (4) is now easily obtained. As for the amplitude accuracy, starting from expression (2) and proceeding in the same way, the last equation in section III is recovered.
"year": 2013,
"sha1": "bc57a7f78724acc3231172c7b6af7dec587dc10f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1310.8163",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "35d6b441ee64c8ebf2241d47f410e116f5a818d8",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
643696 | pes2o/s2orc | v3-fos-license | Chemical-Induced Read-Through at Premature Termination Codons Determined by a Rapid Dual-Fluorescence System Based on S. cerevisiae
Nonsense mutations generate in-frame stop codons in mRNA leading to a premature arrest of translation. Functional consequences of premature termination codons (PTCs) include the synthesis of truncated proteins with loss of protein function causing severe inherited or acquired diseases. A therapeutic approach has been recently developed that is based on the use of chemical agents with the ability to suppress PTCs (read-through) restoring the synthesis of a functional full-length protein. Research interest for compounds able to induce read-through requires an efficient high throughput large scale screening system. We present a rapid, sensitive and quantitative method based on a dual-fluorescence reporter expressed in the yeast Saccharomyces cerevisiae to monitor and quantitate read-through at PTCs. We have shown that our novel system works equally well in detecting read-through at all three PTCs UGA, UAG and UAA.
Introduction
It is estimated that at least 11% of all known inherited genetic diseases are associated with nonsense mutations that generate premature termination codons (PTCs). In-frame UGA, UAG and UAA in mRNA promote a premature arrest of translation leading to the synthesis of truncated, often un-functional or aberrant, proteins that result in pathological phenotypes. The presence of PTCs has been found in many diseases, including β-thalassemia, cystic fibrosis (CF), Duchenne muscular dystrophy (DMD), ataxia telangiectasia, Usher syndrome, Hurler syndrome and several types of cancer [1,2]. To date there is no genetic therapy available for these disorders. Nonsense mutations activate nonsense-mediated mRNA decay (NMD) a process that specifically recognizes and degrades PTCs-containing mRNA and that could add difficulties in the efforts to alleviate the disease phenotype [3,4].
Suppression therapy, based on chemical-induction of suppression at PTCs (read-through) but not at the natural stop codon has recently been developed [5]. Suppression of PTCs mediated by drugs restores translation elongation and stabilizes PTC-mRNA whose availability could be enhanced by attenuation of NMD [6,7]. Two classes of compounds have been so far described as read-through inducers. The first class is composed by aminoglycoside antibiotics such as geneticin (G418) and gentamicin. These compounds have been demonstrated to be efficient in mediating read-through leading to the re-synthesis of a functional full-length product in in vitro studies as well as in vivo mouse models, in Duchenne muscular dystrophy (DMD), cystic fibrosis (CF), hemophilia, β-thalassemia and other non hereditary diseases, such as colon cancer [1,2,8,9]. Clinical trials imply only a small number of patients with nonsense mutations are expected to benefit from gentamicin treatment whilst long-term treatment with aminoglycosides has been shown to produce severe nephrotoxic or ototoxic effects [10,11]. A rational design strategy to provide synthetic aminoglycoside derivatives with increased readthrough efficiency and decreased ototoxicity is required [12][13][14]. NB84 was recently demonstrated to be effective in moderating disease progression in a long-term suppression therapy [15].
The second class of read-through molecules is represented by nonaminoglycoside compounds, such as PTC124 (Ataluren), identified in a high-throughput screening (HTS) based on a luciferase reporter assay. PTC124 is an oxadiazole compound that has no antibiotic properties and was shown to be safe in pre-clinical trials. However its efficiency in mediating readthrough has been questioned because (a) it was found to interact directly with firefly luciferase [16,17] and (b) displayed with no read-through efficacy in a comparative assay with G418 that instead resulted active across multiple in vitro reporter assays [18]. PTC124 was demonstrated to be effective in mediating read-through in various studies [1] although it displays a selective activity for the UGA codon [19]. An alternate luciferase-independent HTS assay (PTT-ELISA) was developed that is based on an in vitro transcription and translation system driven by a plasmid containing a portion of the sequence of the ATM gene with a TGA C mutation [20]. By using this screening system a new series of small compounds were identified that induced read-through at all three types of nonsense mutation. Among these, RTC13 and RTC14 induced read-through at nonsense mutations in both the ATM and dystrophin genes [20]. In a second series of studies two other nonaminoglycosides compounds, GJ071 and GJ072, were identified and confirmed to be effective in mediating read-through in cells derived from ataxia telangiectasia (A-T) patients with three different types of nonsense mutation in the ATM gene [21]. Another nonaminoglycoside compound, Amlexanox, was recently identified as an inhibitor of NMD in a dedicated screening of a library of 1200 marketed drugs and found to be also able to induce read-through [22]. Most recently a novel class of natural compounds, analogues of (+)-negamycin, were discovered possessing a selective eukaryotic read-through ability that do not display antimicrobial activity [23]. Despite the fact some potential therapeutic molecules are already available, so far clinical data have been below expectations. Novel safe and efficient read-through molecules and repurposed drugs for therapeutic approaches to PTCassociated diseases are required.
Several reporter systems have been so far developed to facilitate the detection of readthrough activity based on high throughput screening (HTS) of small compounds. These include dual enzyme reporters such as β-galactosidase-luciferase [23,24] and dual luciferase [25] or the protein transcription-translation (PTT)-enzyme-linked immunosorbent assay (ELISA) [20,21]. Enzymatic read-through assay systems have also been developed in Saccharomyces cerevisiae and the dual luciferase reporter system has been adapted for the expression in yeast [26][27][28]. Although powerful, these reporter systems are expensive and require cell lysis, reagents and time-consuming manipulations.
Results
Construction of a dual fluorescence reporter system for monitoring readthrough in yeast Most read-through assays at PTCs are currently performed by using reporter systems based on luciferases. Particularly, in the dual luciferase system, Renilla and firefly encoding sequences are cloned in tandem and placed in phase such as to express a unique open reading frame or the two sequences are separated by a stop codon [25][26][27][28]. In the latter configuration the sequence of firefly downstream to the stop codon can be expressed only if read-through does occur thus providing a quantitative measure of read-through at the stop codon separating the two sequences. Taking advantage of this robust principle a rapid and efficient read-through reporter system based on dual fluorescence based on the yeast Saccharomyces cerevisiae, for which genetic tools are available was developed [26]. To construct a dual-fluorescence reporter system the sequences encoding the yEGFP (yeast-enhanced green fluorescent protein) and a variant of the mCherry red fluorescent protein (RFP) were used. The latter RFP variant sequence was codon-optimized for the expression in Saccharomyces cerevisiae as yeastenhanced mRFP (yEmRFP) and can combine fluorescence and a purple visible phenotype [29]. A powerful color phenotype read-through assay was successful developed in yeast [30,31]. The feasibility in using the two fluorescent proteins as read-through reporters, whose expression is easily detectable in vivo, was then explored. The YEpGAP expression plasmid bearing the yEmRFP constructed by N. Dean and colleagues was used [29]. This plasmid constitutively expresses yEmRFP under the control of the TDH3 promoter and confers a characteristic pink color to yeast transformants. The yEGFP upstream to yEmRFP was cloned and the intergenic sequences subsequently modified by site-directed mutagenesis to separate the two open reading frames (ORFs) by a stop codon UGA, UAG or UAA, or to connect them by a correspondent sense codon CGA, CAG or CAA as depicted ( Fig 1A) (see also Materials and Methods). These constructs were then transformed in yeast cells and observed using fluorescence microscopy for fluorescent proteins expression. Yeast cells harboring the YepGR-CGA displayed both the green and red fluorescence (Fig 1C, top), whereas those carrying the Yep-GR-UGA lack the red fluorescence ( Fig 1C, bottom). In order to evaluate the response of our reporter system read-through mediated by the aminoglycoside G418 (geneticin) was determined. Yeast transformants carrying the YepGR-CGA or YepGR-UGA or the vector only, were grown directly in a 96 well microplate in the absence or presence of increasing concentrations of G418 (8 and 16 μg/ml) proved to be efficient in inducing read-through in yeast [32]. After overnight incubation at 30°C the expression of green and red fluorescence were monitored using a dual-laser scanner. Cells harboring YepGR-UGA, that is expressing yEGFP only, clearly showed an increasing red fluorescence as a function of the increasing concentration of G418 thus rendering visible the phenomenon of read-through (Fig 2A). Fluorescence in each well was quantified by using IQTL software and read-through percentage was calculated as described (Fig 2) see also Materials and Methods and the Supporting Information (S1 Fig). The ability of the aminoglycoside G418 in mediating read-through in a dose dependent manner was clearly shown in two parallel read-through assays with independent clones (RT1 and RT2) ( Fig 2B). 
Following these results read-through was tested at the stop codons UAG and UAA in yeast transformants bearing YepGR-UAG, YepGR-CAG, YepGR-UAA or YepGR-CAA. Yeast strain CW04 was transformed with plasmids (YEpGR series) harboring the indicated read-through cassette UGA or the corresponding sense control CGA. Independent clones were inoculated in quadruplicates in 96 wells microplates to perform two read-through assays (RT1 and RT2). Geneticin (G418) was added at the concentrations indicated. Microplates OD was measured at 595 nm and fluorescence was acquired by using the dual-laser scanner Typhoon 8600 after 24h incubation at 30°C. (B) Read-through levels as a function of the presence of increasing concentrations of G418 were quantitate as described in the Materials and Methods. Quantitative data were obtained from two independent experiments and are expressed as mean values and indicated with standard deviation. Difficulty was found in measuring read-through at these stop codons. The UAG and UAA stop codons are known to be less susceptible to read-through than UGA and it is likely that the background of natural red fluorescence, at least in our yeast strain, constitute an important interference (results not shown). In order to circumvent this shortcoming a novel plasmid set was constructed into which the order of the fluorescent proteins was inverted as shown in Fig 1B. The novel set of plasmids (YepRG series) was transformed into yeast and transformants were checked for the correct expression of the dual red-green fluorescence. In Fig 1D yeast cells harboring either YepRG-CGA or YepRG-UGA expressing both red and green fluorescent proteins or only the red protein are shown. A similar expression pattern was observed from fluorescence microscopy for the other sense and nonsense codons (not shown). The potential visible color phenotype in our strain was correlated to fluorescence. Novel yeast transformants were streaked as a patch on a selective solid medium and grown overnight at 30°C. Imaging of the relevant plate revealed a pink color displayed by all transformants, more intense in the transformants bearing the YepRG-UGA, YepRG-UAG or YepRG-UAA plasmids expressing only yEmRFP and absent in the control vector (Fig 3). Only transformants bearing the sense codons expressed the green fluorescent protein and no green background was observed in cells expressing the plasmids bearing a stop codon between the two reporter sequences.
Dual fluorescence reporter system responds to both G418 and Gentamicin at all stop codons
In order to evaluate the dual fluorescence reporter system a comparative assay was conducted with compounds well characterized for their efficiency in inducing read-through at premature termination codons. Aminoglycosides G418 (geneticin) and gentamicin are validated readthrough correctors in different experimental cell systems and demonstrated to possess the ability to restore full-length production of the cystic fibrosis transmembrane conductance regulator (CFTR) in a bronchial cell line carrying a nonsense mutation in the CFTR gene [33], a functional full-length p53 protein in human cancer cell line containing a PTC [34] and the biological activity of the adenomatous polyposis coli PTC-gene in human cancer cells [35]. In a previous study G418 and gentamicin were shown to be efficient in inducing dose-dependent read-through at stop codons by using a dual luciferase reporter expressed in yeast [32]. Readthrough assay was performed using the dual fluorescence reporters, to monitor and quantify the effects of both aminoglycosides at all stop codons. Yeast transformants with YEpRG series Fig 1B) which are expressing the read-through cassettes indicated. Yeast cells transformed with a YEpRG expressing the yEmRFP and yEGFP ORFs with an in frame sense codon (CGA, CAG or CAA) display both the red and green fluorescence, whereas only red fluorescence could be expressed from those plasmids into which a nonsense codon (UGA, UAG or UAA) was placed between the yEmRFP and yEGFP ORFs. Yeast transformants were selectively grown on solid synthetic minimal medium in the absence of uracil. Imaging was performed by using the dual laser scanner and acquired after overnight incubation of the plate at 30°C. plasmid reporters each carrying UGA, UAG or UAA read-through cassette as well as the correspondent sense controls CGA, CAG or CAA transformants were incubated with G418 or with gentamicin at the same concentrations previously used, 8 and16 μg/ml, and 200 and 400 μg/ ml, respectively. In panels A of Figs 4 and 5 we report the results obtained using Microplates incubated overnight at 30°C and scanned for dual fluorescence after 19-24h by a Typhoon 9600 FLA (GE Healthcare). yEGFP was revealed at 478 excitation and 532 nm emission, whereas yEmRFP at 540 excitation and 635 emission. The quantitative data relative to the obtained fluorescence was acquired by the IQTL software (GE Healthcare) and exported as Excel files that are available in the Supporting Information (S2 Fig). Read-through percentage was calculated as described in the Material and Methods section and reported in panels B of Figs 4 and 5. Read-through was expressed as the ratio Green/RED (nonsense) divided by the ratio Green/RED (sense) x 100. The percent read-through is expressed as the mean ± the standard deviation. All reported values are obtained by at least three independent experiments. The results obtained clearly show that both G418 (Fig 4) and gentamicin (Fig 5) exhibited efficient read-through capability, after 19-24h of incubation, at all three stop codons. These results reiterate our previous findings about the read-through properties of aminoglycosides characterized by using the yeast based dual-luciferase assay [35], indicating that this novel dual fluorescence read-through reporter system is at least as powerful as the dual luciferase based assay. 
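For readers implementing the same quantification, the normalization described above reduces to a one-line computation: %RT = [(Green/RED)_nonsense / (Green/RED)_sense] × 100. The sketch below is our own illustration with made-up, background-subtracted well intensities; it is not the IQTL/Excel workflow used in the paper.

```python
def read_through_percent(green_nonsense, red_nonsense, green_sense, red_sense):
    """Ratio Green/RED (nonsense) divided by Green/RED (sense), times 100."""
    ratio_nonsense = green_nonsense / red_nonsense
    ratio_sense = green_sense / red_sense
    return ratio_nonsense / ratio_sense * 100.0

# Example with hypothetical microplate readings for one stop codon and one drug dose
print(read_through_percent(green_nonsense=1.8e4, red_nonsense=5.0e5,
                           green_sense=4.2e5, red_sense=5.2e5))   # ~4.5%
```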
Taken together, these results indicate that this novel dual-fluorescence reporter system based on yeast is sensitive to both widely characterized read-through molecules and is suitable for a general primary screening at all three premature stop codons. Yeast transformants with a plasmid harboring each of the nonsense or the correspondent sense codon, were prepared and cultivated in quadruplicates as described (Fig 4), in the absence or presence of aminoglycoside gentamicin, added at 200 μg/ml (lanes 2, 5 and 8) or 400 μg/ml (lanes 3, 6 and 9. A) Shown is a representative image of yEGFP acquired by a Typhoon 9600 FLA after 19h incubation at 30°C related to a gentamicin mediated read-through assay at the UGA stop codon (see also
Dual fluorescence reporter system responds to NMD
It is well established that the core system of NMD is conserved from yeast to human and its function depends on the UPF1, UPF2 and UPF3 genes [36]. In yeast deletion of each of these genes abolishes NMD with a consequent increase in the abundance of PTC-containing mRNAs and, in addition, results in a nonsense suppression phenotype (read-through) [27,37,38]. In order to verify that a stop codon inserted in frame between the yEmRFP and yEGFP sequences in our reporters is actually recognized as a PTC and triggers NMD, we have tested our dual fluorescence reporter system in strains into which NMD was either functional or abolished by deletion of the UPF1, UPF2 or UPF3 genes. To this aim, wild type (WT) and Δupf1, Δupf2 or Δupf3 deleted strains were transformed with the YepRG-UGA or YepRG-CGA reporter plasmids and a read through assay was performed. Results shown in Fig 6A display both the yEmRFP and yEGFP fluorescence acquired by the dual fluorescence scanning as previously described in the drugs mediated read though assay (Figs 4 and 5). It can be noted that the red fluorescence related to the samples expressing the yEmRFP-UGA-yEGFP reporter was significantly increased in the Δupf1, Δupf2 or Δupf3 strain with respect to WT. In addition, an increase in the green fluorescence can be appreciated in the upfs deleted strains compared to WT. Consistently, an increase of read through in the Δupf1, Δupf2 or Δupf3 strain (up to 2 folds) with respect to wild type was observed and quantified in Fig 6B. Thus, our read-though reporter system appears to be able to sense the NMD functional state and the nonsense suppression phenotype associated with lack of NMD. In order to verify mRNA stability and translation products in those genetic contexts, we next examined the abundance of both the yEmRFP-UGA-yEGFP and yEmRFP-CGA-yEGFP transcripts and translation products expressed by the corresponding YepRG-UGA and YepRG-CGA reporter plasmids. In the experiments depicted in Fig 7 RNA transcription (Fig 7A) and full-length yEmRFP-yEGFP protein production (Fig 7B) was detectable in all the YepRG-CGA clones (either WT, or Δupf1, Δupf2 and Δupf3). As expected, in all the YepRG-UGA clones, only the truncated yEmRF-P-UGA-yEGFP protein was detected (Fig 7B). Quantitative analysis on RT-qPCR results ( Fig 7A) showed that in all the YepRG-UGA clones the relative content of yEmRFP-UGA-yEGFP mRNA is higher when Δupf1, Δupf2 and Δupf3 are compared to the WT, supporting the hypothesis that, in the absence of NMD, mRNA with the UGA in frame is more stable. This is well in agreement with the Western blotting analysis, showing in these YepRG-UGA transformants a higher content of protein products in Δupf1, Δupf2 and Δupf3 than the WT (normalized yEmRFP densitometric values were as follows: WT 10%; Δupf2 14%; Δupf1 22%; Δupf3 12%). In these samples we were unable to detect the full-length protein yEmRFP-yEGFP that would result from read through at the UGA codon, probably because the experimental conditions used were below the detection limit. In addition, slight differences were observed likely depending on intrinsic variability due to the experimental manipulations performed to generate the Δupf1 samples (Fig 7). In this context, we sought to further check the system by using a different technique such as flow cytometry to measure yEmRFP fluorescence at the cellular level and estimate the mRNA abundance reference ratio UGA/CGA. 
As expected, the yEmRFP fluorescence increased in Δupf1, Δupf2 and Δupf3 bearing the YepRG-UGA reporter compared to WT and with respect to transformants harboring the YepRG-CGA reporter (Fig 8A). Comparison of the UGA/CGA ratio of yEmRFP fluorescence obtained by flow cytometry (Fig 8B) with that obtained from RT-qPCR analysis (Fig 8C) and microplate scanning (Fig 8D) showed very similar increase profiles.
Taken together, these results demonstrate that the dual fluorescence reporter system responds to NMD and to the nonsense suppression associated with the absence of NMD.
Discussion
Recent progress in the chemically induced read-through approach to overcoming PTCs has led to the identification of several compounds possessing read-through ability and of potential therapeutic application in several pathologies caused by nonsense mutations. This property has been screened and examined across several cell-culture models of different diseases using a number of different methods [2,9]. However, the read-through response to chemicals is highly variable, and many studies indicate that various factors play a role in read-through efficacy. Basal suppression of the stop codons UGA (opal), UAG (amber) and UAA (ochre) is known to display a UGA > UAG > UAA hierarchy in read-through efficiency and is strongly influenced by the identity of the nucleotide immediately downstream of the stop codon (position +4). The read-through profile can also be altered when induced by drugs [8,24]. Studies in the yeast Saccharomyces cerevisiae, in which eukaryotic translation termination is conserved, have highlighted the rules governing translation termination efficiency. This simple eukaryotic model was the first genetic context in which suppression of nonsense mutations mediated by paromomycin, an aminoglycoside, was shown [30,31]. In this study, a yeast-based in vivo enzyme-independent method that is rapid, inexpensive, sensitive, quantitative and suitable for the screening of low-molecular-weight compounds able to promote read-through at PTCs has been developed. This method is based on the use of a set of reporter plasmids harboring the coding sequences of two fluorescent proteins, yEmRFP and yEGFP, both adapted for expression in the yeast Saccharomyces cerevisiae and equipped with read-through cassettes separating the two open reading frames. GFP-reporter cell-based assays for translational read-through have recently been described, with a stop codon introduced either within the ORF of interest [39] or inside the GFP reporter ORF [40]. The dual fluorescence system takes advantage of the expression system of the robust dual luciferase reporter [25,27], in which yEmRFP and yEGFP expression is under the control of a single constitutive promoter (TDH3). The construct is transcribed as a single mRNA and translated from a single ATG start codon as a unique ORF (see Fig 1). Normalization of nonsense to sense reporter expression takes into account the effects of drugs on protein synthesis. A similar principle was successfully adopted for a programmed -1 ribosomal frameshifting assay in yeast [41]. As the stop codon separating the two ORFs serves as a PTC, the relevant mRNA is a substrate for NMD, and the ORF upstream of the stop codon, i.e. yEmRFP, provides a reference of mRNA abundance (see Figs 7 and 8). The reporters were evaluated for yEmRFP and yEGFP cloning order. The study showed that placing yEGFP downstream of the PTC, while exhibiting read-through efficiency equivalent to the inverted order at the UGA stop codon, is most effective for both the UAG and UAA stop codons, given the absence of natural green fluorescence in our yeast strain. Combining the multiwell format with dual-laser fluorescence scanning allows rapid data acquisition and export to Excel spreadsheets, with results obtained in less than 24 h. Normalizing the data to yeast cell OD595 provides monitoring of potential toxic effects of the tested compounds.
The method has been evaluated with G418 and gentamicin, the aminoglycosides most extensively studied for the read-through approach, with G418 being the more efficient in read-through [2]. G418 promotes read-through at a comparable level but at concentrations 25-fold lower than gentamicin (Figs 4 and 5). The read-through profile at each stop codon is in general agreement with those previously reported, as determined with dual luciferase reporters in yeast [32]. This novel read-through reporter system responds to aminoglycoside-mediated read-through, working equally well at all stop codons.
In the present study, a yeast-based in vivo dual fluorescence read-through system that proved to be extremely simple and robust in screening for the effect of drugs at each stop codon (UGA, UAG and UAA) in the same nucleotide context was tested. The results show that the read-through response to low-molecular-weight drugs in yeast is essentially consistent with recent findings in human cells, where enzyme reporters are mostly used, and that this simple eukaryote could constitute an invaluable tool in the study of the mechanism of different classes of read-through compounds.
This system should be validated in the near future with the several read-through molecules recently proposed, such as NB30, NB54 and NB84 [14,42,43], and possibly other non-aminoglycoside compounds, including Ataluren (PTC124) [19], RTC13 and RTC14 [20], GJ071 and GJ072 [21], and Amlexanox [22]. In addition, the results obtained with this Saccharomyces cerevisiae-based method should be comparatively assessed with in vitro experimental systems based on eukaryotic cell lines harboring target genes carrying premature termination codons to be suitably corrected with read-through-inducing molecules [9].
Fast high-throughput screening of drugs of possible interest for experimental therapy of diseases caused by nonsense mutations is of great interest in medicinal chemistry, given the high number of patients affected by these disorders. The screening of new molecules and the characterization of drugs already used in therapy or in clinical trials for other pathologies could provide a repurposing/repositioning strategy for drugs to treat diseases caused by nonsense mutations [44].
Materials and Methods
Construction of the YEpGR series reporters
The structure of the dual fluorescence reporter is based on the original plasmid yEpGAP-Cherry, a high-copy 2μ, URA3 vector carrying the yEmRFP sequence, whose expression is placed under the control of the constitutive promoter TDH3 [29]. YepGAP-Cherry was first used as a template in site-directed mutagenesis PCR to substitute the ATG start codon in yEmRFP with either the CGA sense codon or the TGA stop codon. A Bgl II restriction site was introduced upstream of the codon by two site-directed PCRs using forward primers 3 or 4, respectively, and reverse primer 5, which introduces an Xba I site (Table 1).
The yeast-enhanced green fluorescent protein (yEGFP) sequence was amplified by PCR, using the plasmid pUG35 as a template (U. Güldener and J. Hegemann, unpublished results), with primers 1 and 2 (Table 1) and cloned in the Bgl II and Xba I restriction sites to form the recombinant YEpGR-CGA and YEpGR-TGA plasmids. YEpGR-CGA was then used as a template in site-directed mutagenesis PCRs to convert the sense codon CGA into the nonsense UGA or UAA codon. YEpGR-TGA was used as a template to convert the nonsense codon TGA into the sense CGA or CAA codon.
Construction of the YEpRG series reporters
The plasmid YepGAP-Cherry [29] was used as a template to amplify the yEmRFP coding sequence with primers 10 and 11, and the sequence was cloned as an Eco RI/Sal I amplicon into the polylinker of the plasmid pUG35, upstream of yEGFP. The yEmRFP-yEGFP fusion obtained was amplified using primers 10 and 12 and cloned back into the EcoRI-XhoI restriction sites of the YEpGAP vector to obtain the recombinant YEpRG plasmid. This was used as a template to introduce the read-through cassettes in frame between the yEmRFP and yEGFP ORFs by six PCRs using each of the forward primers 13-18 with reverse primer 19.
Site-directed mutagenesis PCRs were performed using Phusion Hot Start II High-Fidelity DNA Polymerase (Thermo Scientific) or Q5 Hot Start High-Fidelity DNA Polymerase (New England Biolabs Inc.) according to the suppliers' instructions. All mutagenized constructs were verified by sequencing.
Read-through assay
Yeast transformants bearing a plasmid with a nonsense cassette, the corresponding sense control, or the vector alone were loaded onto 96-well microplates as follows. Yeast transformant cultures were grown overnight at 30°C in selective medium without uracil, diluted to OD595 = 0.00125 and incubated for 4 h before being dispensed at 150 μl per well in quadruplicate. Chemicals to be tested were added at the indicated concentrations. Sample controls in the absence of chemicals received an equal volume of water (the solvent for G418 and gentamicin). Microplates were incubated overnight at 30°C, then read for OD595 and scanned for dual fluorescence after 16-24 h with a Typhoon 8600 (Amersham) or Typhoon 9600 FLA (GE Healthcare). yEGFP was detected at 478 nm excitation and 532 nm emission, whereas yEmRFP was detected at 540 nm excitation and 635 nm emission. Fluorescence for quantitative data was acquired with IQTL software (GE Healthcare) and exported as Excel files. The read-through percentage was calculated as follows. For YEpGR transformants, fluorescence was first normalized to OD values and the background of vector-transformed samples was subtracted; read-through was then expressed as the ratio RED/Green (nonsense) divided by the ratio RED/Green (sense) × 100. For YEpRG transformants the same procedure was followed, except that read-through was expressed as the ratio Green/RED (nonsense) divided by the ratio Green/RED (sense) × 100.
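To make the normalization arithmetic above concrete, here is a minimal sketch in Python of the ratio-of-ratios computation for the YEpRG case; the function names, the dict-based input layout and the background handling are our illustrative assumptions, not the analysis scripts actually applied to the IQTL exports.

```python
def normalize(fluorescence, od, vector_fluorescence, vector_od):
    """OD595-normalize a fluorescence reading and subtract the
    OD595-normalized background of vector-only transformants."""
    return fluorescence / od - vector_fluorescence / vector_od

def percent_readthrough_yeprg(nonsense, sense, vector):
    """Percent read-through for YEpRG transformants (yEGFP downstream
    of the read-through cassette). Each argument is a dict with keys
    'green', 'red' and 'od' holding mean quadruplicate well values."""
    g_ns = normalize(nonsense['green'], nonsense['od'], vector['green'], vector['od'])
    r_ns = normalize(nonsense['red'], nonsense['od'], vector['red'], vector['od'])
    g_s = normalize(sense['green'], sense['od'], vector['green'], vector['od'])
    r_s = normalize(sense['red'], sense['od'], vector['red'], vector['od'])
    # Ratio of downstream (green) to upstream (red) reporter,
    # nonsense over sense, expressed as a percentage.
    return (g_ns / r_ns) / (g_s / r_s) * 100.0
```

For YEpGR transformants the same function applies with the red/green ratio inverted, since yEmRFP lies downstream of the cassette in that series.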
The percent read-through is expressed as the mean ± standard deviation. All reported values were obtained from at least three independent experiments. Independent clones (indicated, for instance, as RT1 and RT2 in Fig 2) were usually inoculated in quadruplicate in the 96-well microplates, in order to obtain data on intra-plate variation and the statistical significance of the averaged data. Where indicated, replicates of identical clones were inoculated as shown in Fig 6.
Protein extracts preparation
Yeast cell pellets were resuspended in an appropriate amount of Y-PER Yeast Protein Extraction Reagent (Thermo Scientific) supplemented with 100X Halt Protease Inhibitor Cocktail (Thermo Scientific). The amount of Y-PER Yeast Protein Extraction Reagent was calculated according to the weight of the pellet, as indicated in the manufacturer's instructions. The mixtures were agitated at room temperature for 20 minutes and then centrifuged at 14,000g for 10 minutes at 4°C. The protein-containing supernatant was recovered and stored at -80°C. Protein extracts were quantified using the Pierce BCA Protein Assay Kit (Thermo Scientific).
Western blotting
65 μg of cytoplasmic extract was denatured for 5 min at 98°C in 1x SDS sample buffer (62.5 mM Tris-HCl pH 6.8, 2% SDS, 50 mM dithiothreitol (DTT), 0.01% bromophenol blue, 10% glycerol) and loaded on an SDS-PAGE gel (10 cm × 8 cm) in Tris-glycine buffer (25 mM Tris, 192 mM glycine, 0.1% SDS). Precision Plus Protein WesternC Protein Standards (size range 10-250 kDa) (BioRad) were used to determine molecular weight. Electrotransfer to a 0.2 μm nitrocellulose membrane was performed using the Trans-Blot Turbo Transfer System (BioRad) with the Turbo program (2.5 A, 25 V, 7 minutes). The membranes were stained with Ponceau S Solution (Sigma Aldrich) to verify transfer, washed twice with 25 ml TBS (10 mM Tris-HCl pH 7.4, 150 mM NaCl) for 5 minutes at room temperature and incubated in 30 ml of blocking buffer for 1 hour at room temperature. The membranes were washed three times for 5 minutes each with 30 ml of TBS-T (TBS, 0.1% Tween-20) and incubated with an anti-RFP primary mouse monoclonal antibody (1:2000) (Cat. AKR-021, Cell Biolabs, San Diego, CA, USA) in 10 ml primary antibody dilution buffer with gentle agitation overnight at 4°C. The following day, the membranes were washed three times for 5 minutes each with 30 ml of TBS-T and incubated in 10 ml of blocking buffer with gentle agitation for 1 hour at room temperature with Stabilized Goat Anti-Mouse IgG (H+L) peroxidase-conjugated secondary antibody (1:2000) (Thermo Scientific) and Precision Protein Strep-Tactin HRP-conjugated antibody (1:10,000) (BioRad), the latter used to detect the protein marker. Finally, after three washes of 5 minutes each with 30 ml of TBS-T, the membranes were incubated with 5 ml LumiGLO (0.5 ml 20x LumiGLO, 0.5 ml 20x Peroxide and 9.0 ml Milli-Q water) (Cell Signaling) with gentle agitation for 5 minutes at room temperature and exposed to X-ray film (Pierce, Thermo Scientific). After a stripping procedure using Restore™ Western Blot Stripping Buffer (Pierce), the membranes were incubated again in blocking buffer and then reprobed with an anti-actin primary mouse monoclonal antibody (1:500) (Cat. 69100, ImmunO, MP Biomedicals, LLC, Solon, Ohio, USA) used as a normalization control. X-ray films of chemiluminescent blots were analyzed with a Gel Doc 2000 (Bio-Rad Laboratories, MI, Italy) using Quantity One software to process the intensity data of the target protein.
RNA extraction
RNA extraction was performed using the RiboPure-Yeast Extraction Kit (Ambion) according to the manufacturer's instructions. The yeast pellet was resuspended in 480 μl of Lysis Buffer, 48 μl of 10% SDS and 480 μl of Phenol:chloroform:IAA. The mixture was vortexed vigorously for 15 seconds and then transferred to a screw-cap tube containing 750 μl of Zirconia Beads. The tubes, positioned horizontally, were mixed at maximum speed for 10 minutes and then centrifuged at 16,000g for 5 minutes at room temperature. The aqueous phase, containing RNA, was transferred to a fresh tube, and 350 μl of Binding Buffer was added for each 100 μl of aqueous phase. 235 μl of 96% DNase/RNase-free ethanol was then added for each 100 μl of aqueous phase and the mixture was vortexed. Filter Cartridges were assembled into Collection Tubes and 700 μl of the mixture was added. The tubes were centrifuged at 8,000g for 60 seconds at room temperature and the flow-through was discarded. This step was repeated until all of the mixture had passed through the filter. The filters were washed first with 700 μl of Wash Solution 1, and then twice with 500 μl of Wash Solution 2/3. After each wash the filters were centrifuged at 13,000g for 1 minute at room temperature. Finally, the empty filters were centrifuged twice at 13,000g for 60 seconds at room temperature. The filters were transferred to fresh tubes and RNA was eluted with 50 μl of Elution Buffer pre-heated to 95°C. Filters loaded with Elution Buffer were incubated at room temperature for 5 minutes and then centrifuged at 16,000g at room temperature. To remove DNA contamination, RNA was treated with DNase I: 5 μl of 10X DNase I Buffer and 4 μl of DNase I were added to 50 μl of extracted RNA, and the mixture was incubated for 30 minutes at 37°C. To stop the reaction, 5.9 μl of DNase Inactivation Reagent was added to each sample; the solution was vortexed and incubated at room temperature for 5 minutes. The mixture was centrifuged at 16,000g for 3 minutes at room temperature and the supernatant, containing RNA, was transferred to a new tube. The RNA obtained was stored at -80°C.
Reverse Transcription and quantitative Real-time PCR (RT-qPCR)
For gene expression analysis, 300 ng of DNase I-treated RNA was reverse transcribed to cDNA using the SuperScript VILO cDNA Synthesis Kit in a 20-μl reaction. To 300 ng of RNA were added 4 μl of 5X VILO Reaction Mix, 2 μl of 10X SuperScript Enzyme Mix and RNase/DNase-free water to a final volume of 20 μl. The mixture was incubated for 10 minutes at 25°C, 60 minutes at 42°C and 5 minutes at 85°C. The cDNA obtained was stored at -20°C. Real-time qPCR experiments were carried out using iTaq Universal SYBR Green Supermix (BioRad); the primers used in the analysis (purchased from IDT and Sigma Aldrich) are indicated in the table. cDNA (0.5 μl) was amplified for 45 PCR cycles using a CFX96 Touch Real-Time PCR Detection System (BioRad) in a 20 μl final-volume reaction mix. The amplification program included a first denaturation step at 95°C for 3 minutes, followed by 45 cycles consisting of a denaturation phase (96°C for 10 seconds), a primer-annealing phase (63°C for 60 seconds) and, finally, an extension phase (72°C for 20 seconds). Relative expression was calculated using the comparative cycle threshold method, with the endogenous control, yeast ACTIN, used as the normalizer gene. Duplicate negative controls (no template cDNA) were also run on every experimental plate to assess specificity and to rule out contamination. The real-time PCR reactions were performed in duplicate for both the target and the normalizer gene.
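For reference, the comparative cycle threshold method invoked above is conventionally written as follows (a standard textbook formulation in our notation; the paper itself does not spell out the equations):

```latex
% Standard comparative cycle threshold (2^{-Delta Delta Ct}) formulation.
\Delta C_t = C_t^{\mathrm{target}} - C_t^{\mathrm{ACTIN}}, \qquad
\Delta\Delta C_t = \Delta C_t^{\mathrm{sample}} - \Delta C_t^{\mathrm{calibrator}}, \qquad
\text{relative expression} = 2^{-\Delta\Delta C_t}
```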
Flow cytometry
Yeast cells were grown to OD = 0.5-1. The cell sample was diluted 8-fold with SC buffer and analyzed using a BD LSRFortessa X-20 (Becton, Dickinson and Company, Franklin Lakes, NJ, USA) cell analyser. The quartz cuvette flow cell is coupled to the fluorescence objective lens (1.2 NA) with refractive-index-matching optical gel for optimal collection efficiency. Emitted light from the gel-coupled cuvette is delivered by fiber optics to the detector arrays. The BD LSRFortessa X-20 uses configurable polygon-shaped optical pathways that exploit signal reflection to maximize signal detection. The flow rate was 12 μL/min and the red emission was recorded in the wavelength range 660-680 nm, in order to avoid any interference from yEGFP. The number of yeast cells observed in a single analysis was 10,000. | 2018-04-03T03:09:54.622Z | 2016-04-27T00:00:00.000 | {
"year": 2016,
"sha1": "303377de79d9cb4f95f90463814720bc0edff375",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0154260&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "303377de79d9cb4f95f90463814720bc0edff375",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
169075602 | pes2o/s2orc | v3-fos-license | Research on Internal Accounting Control in Colleges and Universities
With the rapid development of discipline construction and the improvement of teaching service facilities in colleges and universities, the economic scope of their activities is constantly expanding. In addition to state funding and tuition fees, most universities in the country can raise and use funds independently. However, amid this rapid development, colleges and universities have not yet established sound internal accounting control systems. Moreover, university management has not paid enough attention to internal accounting control, resulting in frequent problems in the construction of teaching infrastructure, the management of fixed assets, recruitment and employment practices, and the use of scientific research funds. Therefore, on the basis of a comprehensive review of the connotation, development and current situation of internal accounting control theory, and in view of the current situation in colleges and universities, this paper aims to construct a scientific, reasonable and practical internal accounting control system for colleges and universities.
Introduction
In the context of the country's deepening reform and opening up, colleges and universities, as the highland of talent training and scientific research for the nation and society, have been constantly seeking reform and innovation. Judging from the results of these reforms, the role and functions of colleges and universities have gradually changed: they have shifted from traditional public-welfare organizations to institutions with independent legal personality. Most notably, the financial sources of higher education have undergone a significant change, from special state allocations for education to diversified financing channels. As colleges and universities actively participate in economic activities, the complexity of their internal and external environments has increased; this has won them full autonomy while posing greater challenges for internal accounting control.
From the perspective of capital absorption and utilization, colleges and universities draw on a wide range of financing channels for their construction work; higher education institutions, including their subordinate second-level colleges, have abundant resource endowments; and these institutions are not yet fully connected with the market economy, leaving much room for improvement in management concepts and methods. Weak links in internal accounting control, together with relatively weak and backward management awareness, create loopholes in management, so that economic problems in colleges and universities lead to economic losses and place a heavy burden on the management of scientific research in colleges and universities.
In 2013-2014, due to the lack of a sound internal accounting control system and weak implementation of existing internal accounting controls, economic problems arose in some colleges and universities in areas such as the construction of basic teaching facilities, admission and employment, and the management of state-owned assets. These cases offer a glimpse of the whole picture: because of weak awareness of internal accounting control, imperfect internal accounting control systems, and the lack of an internal supervision mechanism, internal accounting control in colleges and universities cannot serve as a tool for supervising and managing economic crime in higher institutions, so it is urgent to strengthen internal accounting control.
Theoretical Summary of Internal Accounting Control in Colleges and Universities
In the process of enterprise development, internal accounting control is mentioned more and more often. Internal accounting control retains the characteristics of a control function, emphasizing targeted and planned monitoring of enterprise development. There is a substantial internal relationship between internal accounting control and internal control: internal control is the theoretical basis, focusing on top-down internal control processes and improving the operational efficiency of enterprises through the accounting system.
Combining the definition of internal accounting control, the basic functions of accounting and other scholars' definitions of internal accounting control, this paper defines the internal accounting control of colleges and universities as the management system implemented by higher education institutions to ensure the quality of accounting and the security of property and to promote the healthy development of colleges and universities. This management system consists of mutually interconnected and mutually constraining methods and institutional measures.
The development of internal accounting control in enterprises began earlier than in universities, and companies have by now established relatively mature internal accounting control systems. These mature systems provide a reference for the internal accounting control of colleges and universities. However, as non-profit organizations, colleges and universities differ considerably from enterprises in their internal accounting control.
The development of internal accounting control in our country is mainly concentrated after the reform and opening up. In 1985, the introduction of the first "Accounting Law" marked the beginning of China's internal accounting control, and its subsequent development was accompanied by the revision and improvement of relevant laws. Since 2000, the development of the internal accounting control system has produced a framework for the internal accounting control of enterprises in China covering basic operational norms, risk management and risk assessment, and the business management environment.
The Elements of Internal Accounting Control in Colleges and Universities
A review of the previous literature shows that internal accounting control is closely related to internal control; the research basis of internal accounting control is internal control theory, and the internal accounting control of colleges and universities is likewise grounded in it. In view of the significant differences between universities and enterprises, the internal accounting control of colleges and universities is mainly composed of the following aspects:
Internal accounting control environment
The internal accounting control of colleges and universities is based on the environment of higher education institutions. Higher education institutions assume broader social responsibilities and bear the responsibility for public higher education; this is obviously different from for-profit companies. Therefore, the internal accounting control of colleges and universities should be grounded in the university environment, and the internal accounting control environment is also the basis of the other elements of internal accounting control. According to the characteristics of colleges and universities, the content of the internal accounting control environment can be further divided into the competence of human resources, university culture, the basic financial policies of colleges and universities, and financial management methods.
Objectives and risk assessment of internal accounting control
Internal accounting control activities
Internal accounting control supervision
Internal accounting control information sharing
Analysis of the Status Quo and Problems of Internal Accounting Control in Chinese Universities
In recent years, China's education expenditure has shown a clear growth trend, and China's higher education has ushered in a good opportunity for vigorous development. Against the background of the national drive to revitalize education, universities have closely followed these development opportunities and formed educational development alliances, such as "NATO", "Huayo" and the "excellent alliance", and some universities actively participate in national higher education development plans, such as the "111 Plan". Major universities across the country have seized this period of strategic opportunity, expanded enrolment, developed their campus areas, improved campus infrastructure and built branch campuses, showing a good development trend. Although colleges and enterprises have the same needs in internal accounting control, colleges and universities differ from enterprises, and their internal accounting control systems are rather backward, which restricts their development to a certain extent. Therefore, analyzing the status quo and the problems existing in the internal accounting control of colleges and universities, and remedying those problems, is of great significance for the construction of internal accounting control systems in colleges and universities.
Many studies exist on internal accounting control in enterprises and in colleges and universities. Analysis of this literature shows that, for different entities such as enterprises and universities, the measurement of internal accounting control mainly uses the internal accounting control framework proposed by COSO, which includes five dimensions: the internal accounting control environment, internal accounting control risk assessment, internal accounting control activities and processes, internal accounting control information sharing, and internal accounting control supervision. Prof. Gao Yibin of the Institute of Fiscal Science of the Ministry of Finance and his team put forward an initial framework for internal accounting control in their research, which, after extensive citation, was recognized by other researchers; they constructed an index system for internal accounting control consisting of 33 items in 9 dimensions. Professor Zhu Rongen and his team proposed 29 items in 5 dimensions in their study of internal accounting control and investigated its effects.
Internal accounting control environment analysis
Unlike the development of higher education in Western countries, Chinese universities have lacked the support of modern management theory during their development. The management of colleges and universities in China is carried out under the leadership of the party committee, with the principal bearing responsibility, and the career path of university leaders is also constrained: leaders typically establish themselves in an academic field before moving into administrative positions. The leaders are professors, highly regarded scholars and experts in their respective fields, and very few colleges and universities are equipped with a full-time executive vice president.
University leaders thus lack professional theory and knowledge of internal accounting control management, and it is difficult for them to apply internal accounting control in specific work practices. However, internal accounting control is very important for the development of colleges and universities: it needs to run through every key link in their development, and all faculty and staff at each key link must comply with its implementation. At present, the leaders of colleges and universities do not pay enough attention to internal accounting control, and their professional literacy in internal accounting control needs to be further improved.
Weak supervision
At present, colleges and universities in China are generally organized as public institutions. They rely mainly on the supervision of the education authorities in matters of finance and auditing, while many still enjoy a degree of financial autonomy. The various government supervisory and management departments cannot effectively coordinate their supervision and management of colleges and universities, and great differences in standards and systems remain between departments. The external inspection and supervision departments of colleges and universities struggle to cope with their daily workload. In addition, the budgeting and auditing of colleges and universities involve certain professional barriers, which also create difficulties for external supervision and management.
The powers and responsibilities are not clear enough
Given the distinctive character of colleges and universities, in their current organizational structure posts are set up independently, the professional span is large, and the powers and responsibilities of functional departments are limited and insufficiently clear, while leadership positions in administrative departments are mostly held by experts and professors. The lack of supervision and control mechanisms between departments makes it difficult to implement transparent, open and supervised management of colleges and universities effectively. The responsibilities of the functional department managers involved in internal accounting control are not well understood; authority and responsibilities need to be further clarified to promote the implementation of internal accounting control in universities.
The professional level of the internal accounting staff of the university is not high
An investigation and analysis of the professional backgrounds of university faculty and staff engaged in internal accounting control over the past 10 years found that few of the staff in internal accounting control positions at the investigated universities have backgrounds in accounting or economics.
Insufficient construction of campus culture
The construction of campus culture in colleges and universities is rooted in the education and life of students, and cultural construction is the soul of a university's development. It can be said that many colleges and universities now attach great importance to the education and training of students, for example through a series of campus cultural activities.
The Internal Accounting Control System Framework and Operation Design of Colleges and Universities
The internal accounting control of colleges and universities is an important method and link in their internal control management. Of course, it should be combined with the characteristics of higher education institutions, that is, it should be in line with the teaching, research and related administrative management of colleges and universities. The design of the internal accounting control system framework of colleges and universities should abide by or achieve the following objectives. First, strictly abide by national laws and regulations: internal accounting control should operate within the legal framework. Second, basically meet the objectives of risk control, especially decision-making risks related to major issues. Third, ensure the safety of the financial assets of colleges and universities and prevent the loss and waste of assets. Fourth, ensure the authenticity of the financial information of colleges and universities so that it can stand the test of scrutiny. Fifth, optimize the configuration of funds and other related resources of colleges and universities to improve the efficiency of resource utilization.
Building a governance structure with organizational functions integration
Under the conditions of a market economy, colleges and universities, as institutions of public education, share common attributes. In the external governance structure of colleges and universities, the relationship between higher education institutions and the competent government departments should be repositioned, and the subjective status of the teaching, research and administrative management of colleges and universities should be clarified. Internally, the interaction between universities and their different stakeholders should be brought into full play.
Improve internal accounting control with institutional system
The internal accounting control of the school must implement the methods and principles of internal accounting control, risk identification, risk assessment and other relevant management elements in the form of institutional documents. The relevant institutional documents of internal accounting control are the action plans and principles for internal accounting control by managers at all levels of the university, and they make internal accounting control well documented.
Promote information communication improvement by efficiency sharing
The internal accounting control of colleges and universities should be a transparent information system. The establishment of the information system should be based on the principle of efficiency sharing, and promote the information sharing among the various departments of the university. At the same time, the new system should meet the requirements of all stakeholders of the school.
Improve the rationality of internal accounting control with budget control system
At present, the budget management of colleges and universities involves two budgets. The first is the departmental budget, the comprehensive budget of the school arranged by the financial department; it must be reviewed and approved by the people's congress and includes the school's financial income and expenditure, the income and expenditure of affiliated units, special expenditures for scientific research in universities and other related income. The second is the comprehensive budget of the university's internal departments; it must be reported to the college committee for review, and its basic content serves as a reference for internal decision-making in colleges and universities.
"year": 2018,
"sha1": "c790d7cf26e455f5445398bdbd663d0e6c42084e",
"oa_license": "CCBYNC",
"oa_url": "https://download.atlantis-press.com/article/25906342.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a28656791adf92c1e8e356ea9c82f32dd71a05b4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Business"
]
} |
119196837 | pes2o/s2orc | v3-fos-license | Contributions of Al and Ni segregation to the interfacial cohesion of Cu-rich precipitates in ferritic steels
We characterise the influence of the segregation behaviours of two typical alloying elements, aluminium and nickel, on the interfacial cohesive properties of copper-rich precipitates in ferritic steels, with a view towards understanding steel embrittlement. The first-principles method is used to compute the energetic and bonding properties of aluminium and nickel at the interfaces of the precipitates and the corresponding fracture surfaces. Our results show that the segregation of aluminium and nickel at the interfaces of precipitates is energetically favourable in both cases. We find that the interfacial cohesion of copper precipitates is enhanced by aluminium segregation but reduced by nickel segregation. These opposite roles can be attributed to the different symmetry features of the valence states of aluminium and nickel. The nickel-induced interfacial embrittlement of copper-rich precipitates increases the ductile-brittle transition temperature (DBTT) of ferritic steels and provides an explanation for many experimental phenomena, such as the fact that the shifts of the DBTT of reactor pressure vessel steels depend on the copper and nickel content.
I. INTRODUCTION
It has long been known that the copper content in steels leads to precipitation hardening.
Copper is an element commonly occurring in steels either as an intentionally added alloying species or as an impurity. Nanoscale copper-rich precipitates are utilised to provide substantial precipitation hardening for high-strength low-alloy steels, which possess excellent impact toughness, corrosion resistance, and welding properties [1][2][3]. In contrast, copper-rich precipitates induce hardening and embrittlement effects in reactor pressure vessel (RPV) steels after neutron irradiation [4][5][6], thereby limiting the operational life of nuclear power plants. Therefore, understanding the properties of copper-rich precipitates is desirable.
Many investigations have provided insight into the hardening mechanism that results from copper-rich precipitation in ferritic steels. Molecular dynamics simulations suggest that the major source of precipitation hardening is the dislocation core-precipitate interaction [7][8][9]. Dislocation core-precipitate interactions tend to induce the loss of screw dislocation slip systems and the transformation of the copper phase for larger body-centred cubic (BCC) copper-rich precipitates (d > 3.3 nm), while inducing polarised-to-nonpolarised transitions of screw dislocation core structures in precipitates for very small BCC copper-rich precipitates (1.5 nm < d < 3.3 nm) [10,11]. In addition to the diameter of the precipitates, the temperature and dislocation line characteristics are important to the dislocation core-precipitate interaction [12]. Experimental evidence for the dislocation core-precipitate interaction has been obtained using transmission electron microscopy (TEM). In situ TEM shows dislocations pinned and curved with obtuse bow-out angles by BCC copper-rich precipitates with d ∼ 2 nm [13,14]. TEM observations also demonstrate that the transformation of the copper phase and the appearance of dislocation loops are induced by precipitates with d ∼ 4 nm [15]. The bow-out angle of the dislocation can further be used to estimate macroscopic hardening [15].
Many experiments have specifically studied the copper-rich precipitation embrittlement effect on RPV steels and model alloys. Positron annihilation experiments strongly suggest that copper-rich precipitates are responsible for irradiation-induced embrittlement [16]. The influence of nickel content on the embrittlement of RPV steels has also been found to be important [17][18][19][20][21][22], and the nickel and copper contents in steels have a synergistic effect on the embrittlement tendency [23][24][25][26]. It has been reported that RPV steels with high nickel content show a higher ductile-brittle transition temperature (DBTT) shift than low-nickel steels with the same copper content irradiated with the same neutron fluence [23]. However, the shifts in the DBTT are small for irradiated low-copper, high-nickel RPV steels that do not contain copper-rich precipitates [24]. Furthermore, a parametric study of model alloys after neutron irradiation (at a neutron fluence of 72 × 10 18 m −2 ) showed that DBTT shifts increase with nickel content and that the shift is visible from a threshold copper content of approximately 0.08 at.% [25,26]. These results demonstrate clear evidence of a relation between embrittlement and the copper and nickel contents. The influence of nickel on copper-rich precipitates has therefore attracted significant attention.
Microstructure experiments show that nickel can occur in the copper precipitates. Atom probe experiments on thermally aged model alloys demonstrate that nickel is located in the core region of the BCC copper-rich precipitates during the initial growth stage and is rejected from the core to the interfacial region during growth and coarsening [27]. Observations on neutron-irradiated RPV steels demonstrate a higher nickel concentration in the copper precipitates than in thermally aged model alloys [27,28]. Recently, a series of atom probe experiments has revealed the growth and coarsening behaviour of copper-rich precipitates in concentrated multicomponent alloys with high strength [29][30][31][32][33][34][35]. Nickel and other alloying elements have been observed to segregate at the interface of the precipitates.
Moreover, phase-field [36] and Langer-Schwartz [37] simulations of copper precipitate nucleation have also indicated nickel segregation at the interface of copper precipitates during growth.
First-principles calculation is very efficient in predicting the embrittlement potential and aids in understanding the mechanism of the impurity effect based on electronic structure [38][39][40][41][42][43]. These findings motivated us to perform a careful, first-principles investigation of the contribution of segregation, especially of nickel, on the interfacial cohesive property of precipitates to better understand the effects of these precipitates on embrittlement.
In this paper, we study aluminium and nickel as typical alloying elements to examine the effect of segregation at the interface of copper-rich precipitates on steel embrittlement.
After calculating the segregation energies, we characterise the effect of aluminium and nickel on the interfacial cohesion of copper-rich precipitates and attempt to explain their bonding properties in terms of their electronic structures. Finally, we discuss the roles of aluminium and nickel in the embrittlement of ferritic steels. Our results suggest that the nickel-induced interfacial embrittlement process increases the DBTT of ferritic steels.
II. METHODS AND MODELS
Our first-principles calculations are based on density functional theory (DFT) [44,45] and performed using the Vienna Ab-initio Simulation Package (VASP) [46] with a plane-wave basis set [47,48]. The electron exchange and correlation are described within the generalised gradient approximation (GGA) [49,50], and the interaction between ions and electrons is described using the projector augmented wave (PAW) method [51]. The PAW potentials chosen treat Fe 3d4s, Cu 3d4s, Al 3s3p and Ni 3d4s as valence states. All calculations include spin polarisation. The structural relaxations of the ions are calculated with the conjugate-gradient (CG) algorithm.
We model the coherent (001) interfaces between copper-rich precipitates and the ferritic matrix by designing (2 × 2 × 10) multilayered supercells composed of BCC copper and BCC iron (Fig. 1). The lattice constants of the supercells are set to the theoretical value for BCC iron. Our BCC iron value, 2.83 Å, is reasonably consistent with the experimental value, 2.86 Å [52]. These multilayer structures have a distance of 10 atomic layers between interfaces, determined by balancing the avoidance of interface interactions against computational expense. The three structures illustrated in Figs. 1a-c correspond to the interfaces of precipitates with copper concentrations of 100, 75, and 50 at.%, respectively. The k-point samples for these supercells are (6 × 6 × 1) Monkhorst-Pack grids.
We create initial configurations of supercells modelling the alloying element (M = Al, Ni) at different sites by substituting M for iron or copper atoms in the (2 × 2 × 10) supercells. We substitute M for the iron atom at site 1 to simulate M in the bulk of the ferritic matrix. The distance from site 1 to the interface is five atomic layers, sufficient to avoid interaction between M and the interface. Sites 2 and 3 are used to simulate M segregation at the interface toward the matrix and toward the precipitated phase, respectively. Site 4 is used to simulate M within the precipitated phase.
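As an illustration of how such a (2 × 2 × 10) Fe/Cu multilayer with a substitutional solute could be assembled programmatically, a minimal sketch using the ASE library follows; the use of ASE, the choice of which interfacial atom to replace, and the float tolerance are our assumptions, not the authors' actual VASP input preparation.

```python
from ase.build import bulk

A_FE = 2.83  # theoretical BCC iron lattice constant (Angstrom), as in the text

# (2 x 2 x 10) supercell: 10 conventional BCC cells stacked along z,
# each contributing two (001) atomic layers (20 layers in total).
atoms = bulk('Fe', 'bcc', a=A_FE, cubic=True).repeat((2, 2, 10))

# Turn the upper half of the cell into copper; under periodic boundary
# conditions this yields two Fe/Cu (001) interfaces separated by 10 atomic
# layers (the pure-Cu, 100 at.% precipitate case).
z_mid = atoms.cell[2, 2] / 2.0
symbols = atoms.get_chemical_symbols()
for i, z in enumerate(atoms.positions[:, 2]):
    if z >= z_mid:
        symbols[i] = 'Cu'
atoms.set_chemical_symbols(symbols)

# Substitute one Fe atom in the interfacial matrix layer with the solute M
# (site 2 in the text); taking the first matching atom is arbitrary here.
interface_layer = [i for i, z in enumerate(atoms.positions[:, 2])
                   if abs(z - (z_mid - A_FE / 2.0)) < 1e-6 and symbols[i] == 'Fe']
atoms[interface_layer[0]].symbol = 'Ni'
```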
The formation energy is one of the quantities typically used to describe the segregation behaviour. The formation energy (E^f_M) of M in the crystal is defined as

E^f_M = E_{M+CR} - E_{CR} - E_M + E_A,

where E_{M+CR} and E_{CR} are the total energies of the supercell with a substitution defect M and of the defect-free supercell, while E_M and E_A are the energies per atom of the equilibrium pure-element reference states. The formation energy is dependent on the reference states, which induces arbitrariness in the result [53]. It is usually difficult to choose the correct form of the reference states when one calculates the formation energy to predict the segregation behaviour.
Here, a more efficient quantity, the segregation energy, is used to predict the segregation behaviour. The segregation energy (△E^M_X) of M at site X can be written as

△E^M_X = E^tot_{M-X} - E^tot_{M-matrix},

where E^tot_{M-X} and E^tot_{M-matrix} are the total energies of the supercells for M at site X in the precipitated phase and for M in the matrix, respectively. A negative segregation energy indicates that M can transfer from the matrix to site X, whereas a positive segregation energy indicates that M prefers to dissolve within the matrix. The segregation energy reflects the competitive capacity of trapping M between the matrix and the precipitated phase. This strategy has previously been used to predict partition behaviours between cementite and ferrite [53], and the method has been verified to be appropriate.
The embrittlement property of the interface can be obtained from the Griffith work of separating the interface. Based on the Rice-Wang model [54], the Griffith work is a linear function of the difference between the segregation energy of the alloying element M at the interface (△E^M_I) and that at the fracture free surface (△E^M_F), i.e. △E^M_I - △E^M_F. An alloying element M with positive △E^M_I - △E^M_F will reduce the cohesion of the interface and induce an embrittlement potency, and vice versa. To evaluate the embrittlement property, the segregation energies (△E^M_F) at the fracture free surface are needed. We construct an isolated fractured free (001) ferritic matrix by subtracting the copper-rich phase from the (2 × 2 × 10) supercells representing the interface to calculate △E^M_F (Fig. 2). In addition to energetic properties, we also analyse electronic structures to provide insight into the bonding properties. We focus on the charge density differences within the regions of the interface and the fracture free surface, as shown in Fig. 2. The charge density differences (Fig. 3) are obtained by subtracting the superimposed charge density from the self-consistent charge density of the relaxed structure. Fig. 3 shows the charge accumulation and depletion, which indicate the interatomic interactions. The structural relaxation energies, △E_sc, are also used when estimating the lattice distortion.
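In the Rice-Wang picture, this linear relation can be summarized as follows (our hedged restatement of the standard result, with W_sep denoting the Griffith work of separation; the notation is ours, not the authors'):

```latex
% Hedged restatement of the Rice--Wang linear relation used above.
W_{\mathrm{sep}}(M) \;\approx\; W_{\mathrm{sep}}(\mathrm{clean}) \;-\; \left(\Delta E^{M}_{I} - \Delta E^{M}_{F}\right)
```

Thus a positive △E^M_I - △E^M_F lowers the work of separation, which is the embrittlement potency referred to above.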
Aluminium segregation
The segregation energies (see △E^Al_X in Table 1) of aluminium at the interfaces of the precipitated phases are predicted to be negative, indicating that the presence of aluminium atoms at the interfaces is energetically favourable compared to that in the ferritic matrix.
Moreover, the segregation energies of aluminium at the interfaces are lower than those in the core regions of the precipitates, indicating that the presence of aluminium atoms at the interface is more favourable than that in the core region. These results indicate that the interface between the matrix and the precipitated phase can trap aluminium atoms, consistent with three-dimensional atom probe (3DAP) experiments [29][30][31]33].
The interfacial segregation energy at the interface for precipitates with 100 at.% copper is larger than that for the precipitated phase with 50 at.% copper by 1.56 eV. This result indicates that interfacial segregation is strongly dependent on the composition of the precipitated phase and increases with its copper concentration. The strong dependence of interfacial segregation on the composition of precipitates plays an important role in experimental phenomena in which the concentration of aluminium at the interface of precipitates increases in ferritic steels during thermal treatment processes [29,30,33].
Nickel segregation
We find that the nickel segregation behaviour is similar to that of aluminium. The segregation energies (see △E^Ni_X in Table 2) of nickel at the interfaces are negative, indicating that the presence of nickel atoms at the interfaces is energetically favourable compared to that in the ferritic matrix. However, the most favourable sites depend on the composition of the precipitated phase: they lie in the core region for the precipitated phase with 50 at.% copper, but at the interface for the precipitated phases with 100 and 75 at.% copper. These results indicate that nickel atoms can partition into precipitates with 50 at.% copper and segregate to the interfaces of precipitated phases with 100 and 75 at.% copper in a ferritic matrix. Therefore, nickel will segregate into the core region of the precipitates at the initial formation stage and be pushed from the core towards the interface during the subsequent growth stage. This is consistent with phase-field [36] and Langer-Schwartz [37] simulations. These phenomena have also been observed for copper precipitates in RPV steels using 3DAP [27,28].
The segregation energy of nickel at the interface also depends on the composition of the precipitated phase. The segregation energy at the interface of the precipitated phase with 100 at.% copper is larger than that with 50 at.% copper by 0.22 eV, a difference much smaller than that for aluminium. This result indicates that the nickel concentration at the interface of copper-rich precipitates increases more slowly than that of aluminium with increasing copper concentration during the growth of precipitates. This trend has previously been observed in 3DAP experiments [55].
Chemical and mechanical contributions
As shown in Tables 1 and 2, the values of △E^chem_X approach those of △E_X. It is necessary to discuss the factors affecting the relaxation energy of lattice distortion.
The relaxation energy is mainly determined by the mechanical stability and the mixing effects of the BCC FeCu metastable alloy. The mechanical stability and the mixing effect both decrease with copper concentration (at copper concentrations > 50 at.%) [56,57]. Higher mechanical stability reduces the relaxation energy, whereas larger mixing effects enhance it. Therefore, the relaxation energy of the precipitated phase with 75 at.% copper becomes the largest among the three structures studied, owing to the compromise between the effects of mechanical stability and mixing.
B. The Griffith work is influenced by segregation
We take the interface between pure copper and the ferritic matrix as a typical model. The alteration of the chemical bonding by nickel is totally different from that by aluminium.
The spatial distribution of nickel at the fracture free surface is similar to that at the interface owing to the d electrons. The charge accumulations in the interval regions of Fe1-Ni, Fe2-Ni, and Fe3-Ni at the free surface are all greater than those at the interface, indicating stronger chemical bonding between these atoms at the free surface. The enhanced chemical bonding arises from the contraction of the bond lengths at the free surface. Because the chemical bonding at the free surface is stronger than at the interface, the segregation energy at the free surface is larger than at the interface (by 0.07 eV). △E^Ni_I - △E^Ni_F for the segregation of nickel is consequently positive. BCC copper-rich precipitates play important roles in dislocation pinning and misfit growth in ferritic steels. Dislocation pinning strengthens the ferritic matrix, whereas misfit growth improves ductility. The influence of the misfits produced by copper precipitates on the ductile-brittle transformation has been shown to be considerable [58,59]. The interfacial cohesive property and the amount of precipitates present contribute importantly to the ductile-brittle transformation. The ductile-brittle transition depends on the competition between the fracture stress and the flow stress. The flow stress increases with decreasing temperature.
When the flow stress is larger than the fracture stress at lower temperatures, the ferritic matrix is brittle; when the flow stress is smaller than the fracture stress at higher temperatures, the ferritic matrix is ductile. Therefore, the DBTT can be altered by the fracture stress, which is determined by the interfacial cohesion of the precipitates. We can now predict that the segregation of aluminium can lower the DBTT by enhancing the fracture stress of copper precipitates, whereas the segregation of nickel can raise the DBTT by reducing the fracture stress of copper precipitates.
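This competition can be stated compactly (a schematic formalization in our notation: σ_flow(T) is the temperature-dependent flow stress and σ_F the fracture stress set by interfacial cohesion):

```latex
% Schematic ductile--brittle criterion: the DBTT is where the two stresses cross.
\sigma_{\mathrm{flow}}(T_{\mathrm{DBTT}}) = \sigma_{F}; \qquad
\sigma_{\mathrm{flow}}(T) > \sigma_{F} \Rightarrow \text{brittle}, \qquad
\sigma_{\mathrm{flow}}(T) < \sigma_{F} \Rightarrow \text{ductile}
```

Since σ_flow rises as the temperature falls, raising σ_F (aluminium segregation) shifts the crossing point to lower temperature, while lowering σ_F (nickel segregation) shifts it higher.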
The nickel-induced interfacial embrittlement of copper-rich precipitates explains the observation that the DBTT of low-carbon, copper-precipitation-strengthened steels increases with nickel and copper content [58]. This effect also accounts for the observation that the shifts of the DBTT in RPV steels after neutron irradiation are enhanced by the copper and nickel content [23,24]. Furthermore, it can account for the observations that the influence of copper content on the DBTT decreases progressively as the nickel content decreases, and that the influence of nickel content on the DBTT disappears for model alloys after neutron irradiation (at a neutron fluence of 72 × 10 18 m −2 ) with copper contents below 0.08 at.% (0.1 wt.%) [25,26].
IV. CONCLUSION
We | 2011-04-01T15:49:25.000Z | 2011-02-21T00:00:00.000 | {
"year": 2011,
"sha1": "8cca01bd6dbed2ec36c75ebe1772e335aa54fd5b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8cca01bd6dbed2ec36c75ebe1772e335aa54fd5b",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
193100 | pes2o/s2orc | v3-fos-license | The phn Island: A New Genomic Island Encoding Catabolism of Polynuclear Aromatic Hydrocarbons
Bacteria are key in the biodegradation of polycyclic aromatic hydrocarbons (PAH), which are widespread environmental pollutants. At least six genotypes of PAH degraders are distinguishable via phylogenies of the ring-hydroxylating dioxygenase (RHD) that initiates bacterial PAH metabolism. A given RHD genotype can be possessed by a variety of bacterial genera, suggesting horizontal gene transfer (HGT) is an important process for dissemination of PAH-degrading genes. But, mechanisms of HGT for most RHD genotypes are unknown. Here, we report in silico and functional analyses of the phenanthrene-degrading bacterium Delftia sp. Cs1-4, a representative of the phnAFK2 RHD group. The phnAFK2 genotype predominates PAH degrader communities in some soils and sediments, but, until now, their genomic biology has not been explored. In the present study, genes for the entire phenanthrene catabolic pathway were discovered on a novel ca. 232 kb genomic island (GEI), now termed the phn island. This GEI had characteristics of an integrative and conjugative element with a mobilization/stabilization system similar to that of SXT/R391-type GEI. But, it could not be grouped with any known GEI, and was the first member of a new GEI class. The island also carried genes predicted to encode: synthesis of quorum sensing signal molecules, fatty acid/polyhydroxyalkanoate biosynthesis, a type IV secretory system, a PRTRC system, DNA mobilization functions and >50 hypothetical proteins. The 50% G + C content of the phn gene cluster differed significantly from the 66.7% G + C level of the island as a whole and the strain Cs1-4 chromosome, indicating a divergent phylogenetic origin for the phn genes. Collectively, these studies added new insights into the genetic elements affecting the PAH biodegradation capacity of microbial communities specifically, and the potential vehicles of HGT in general.
INTRODUCTION
Bacteria are key agents affecting the fate and behavior of polycyclic aromatic hydrocarbons (PAH), which are widespread environmental pollutants. At least six genotypes of PAH degraders have been identified from phylogenies of the ring-hydroxylating dioxygenase (RHD) that initiates PAH metabolism (Moser and Stahl, 2001). The RHD genotypes also show conservation in the organization of other PAH degradation enzymes that are associated with the RHD (Moser and Stahl, 2001;Stolz, 2009). A given RHD genotype can be possessed by a variety of bacterial genera, suggesting horizontal gene transfer (HGT) is an important process for dissemination of PAH-degrading genes. Moreover, the RHD genotypes have characteristic patterns of distribution, and typically occur within genera of a single bacterial family (Moser and Stahl, 2001), possibly reflecting different pathways of gene flow that are operative within differing phylogenetic groups. Information on such pathways is limited, but their elucidation is essential to understand processes shaping the function of microbial communities.
Abbreviations: AHL, N-acylhomoserine lactone; CoA, coenzyme-A; GEI, genomic island; HGT, horizontal gene transfer; ICE, integrating conjugative elements; OAA, oxaloacetic acid; oph, ortho-phthalate degradation genes; PAH, polycyclic aromatic hydrocarbon; PCA, protocatechuate; PHA, polyhydroxyalkanoate; phn, phenanthrene degradation genes; pmd, protocatechuate meta-dioxygenase and other protocatechuate degradation genes; RHD, ring-hydroxylating dioxygenase; ROS, reactive oxygen species; T4SS, type IV secretion system.
Information on mechanisms of HGT in proteobacterial PAH degraders is mostly derived from analyses of sphingomonads (Romine et al., 1999;Demaneche et al., 2004;Stolz, 2009), and species of Pseudomonas (Dennis and Zylstra, 2004;Li et al., 2004;Basu and Phale, 2008;Heinaru et al., 2009). For both, studies have focused on plasmids. In Pseudomonas, PAH-degrading genes are characteristically of the nah phylotype, which are invariably located on IncP-9 plasmids. Complete sequencing of one such element, pDTG1, revealed all nah genes were contained within a Tn3-like transposon (Dennis and Zylstra, 2004), potentially providing a means of intracellular gene movement as well. But, although IncP-9 plasmids have a host range that spans the alpha-, beta-, and gamma-proteobacteria (Suzuki et al., 2010), these elements have not yet been identified to carry PAH-degrading genes beyond the genus Pseudomonas. Plasmids of the sphingomonads show a similar pattern, and are limited to that group. Thus, mechanisms of HGT for other RHD genotypes are largely unknown.
Many vehicles of HGT exist as elements integrated into bacterial chromosomes rather than autonomously replicating plasmids, and are termed integrating conjugative elements (ICE) or genomic islands (GEI). High throughput sequencing has revealed that GEI are widespread in bacterial genomes, and can encode a variety of adaptive traits (van der Meer and Sentchilo, 2003;Dobrindt et al., 2004;Mario et al., 2009). However, while there has been extensive study of GEI associated with pathogenicity-related functions, comparatively little is known about GEI that confer other types of phenotypes, including biodegradation activities. To date, the clc element is the only biodegradation-related GEI examined in some detail, and it is a pKLC102/PAGI-type GEI (Toleman and Walsh, 2011) that encodes degradation of chlorocatechols/aminophenols (Gaillard et al., 2006). Involvement of GEI in PAH biodegradation has not been examined, although these are now being revealed by genome sequencing projects. For example, in Alteromonas sp. SN2 (Jin et al., 2011), genes for an nah-type RHD are carried on a 100 kb Tn3-like transposon, a structure similar to that of the nah genes on pDTG1. Also, in Polaromonas naphthalenivorans CJ2, nag-type RHD genes are associated with an unclassified GEI (Yagi et al., 2009).
Recently, genomes were determined for two bacteria of the phn AFK2 genotype: Delftia sp. Cs1-4 (closed genome sequence) and Burkholderia sp. Ch1-1 (shotgun genome sequence). The phn AFK2 genotype is interesting as it has appeared as a predominant class of PAH degraders in surveys of some soils and marine sediments (Lozada et al., 2008;Ding et al., 2010), but relatively little is known about this group. To date, the most extensive genetic information about the phn AFK2 genotype has been limited to a ca. 24 kb GenBank entry (AB024945) from Alcaligenes faecalis AFK2 that included genes for the RHD and nine downstream enzymes. This GenBank entry, however, was not associated with any empirical evidence for the predicted functionality, which as yet remains unestablished. Furthermore, while HGT of the phn AFK2 genotype is indicated by its structural conservation across multiple genera, the vehicle(s) mediating transfer are unknown.
Here, we present the first genomic and functional analyses of the phn AFK2 genotype. The studies focused on the closed sequence of Delftia sp. Cs1-4 and, where relevant, additional analyses were done with Burkholderia sp. Ch1-1. Our findings included: elucidation of a new ca. 232 kb GEI that encoded the entire pathway for PAH (phenanthrene) catabolism, discovery of four new genes associated with PAH degradation and elucidation of a novel operon structure for a key pathway enzyme (o-phthalate dioxygenase). Collectively, these studies provided new insights into the evolutionary and metabolic processes associated with PAH degradation and HGT in microbial communities.
BACTERIAL STRAINS, PLASMIDS, AND CULTURE CONDITIONS
Bacterial strains and plasmids used in this study are listed in Table S1 in Supplementary Material. Burkholderia sp. strain Ch1-1 and Delftia sp. strain Cs1-4 were isolated from PAH-contaminated soil collected in Chippewa Falls, WI, USA, based on their abilities to grow on phenanthrene as sole carbon source (Vacca et al., 2005). Cells of strain Ch1-1 and Cs1-4 were routinely grown on mineral salt medium (MSM; Hickey and Focht, 1990) supplemented with phenanthrene as a sole carbon source (1 g/l) and incubated at 25˚C with shaking at 150 rpm. When pyruvate was used as a substrate, it was added to MSM from a filter-sterilized stock to a final concentration of 50 mM. E. coli strains were grown at 28 or 37˚C on Luria-Bertani (LB) medium supplemented as appropriate with antibiotics. The PAH were obtained from Aldrich Chemicals (Milwaukee, WI, USA), Packard (La Grange, IL, USA), ACROS Organics, and PFALTZ and Bauer (Waterbury, CT, USA) at the highest purity available.
PROTEOMICS
Phenanthrene-grown cells (200 ml, OD600 = 0.6) were filtered through glass wool and then recovered by centrifugation at 3,800 × g for 5 min. The cells were washed twice with phosphate buffer and resuspended in phosphate buffered saline. Pyruvate-grown cells were prepared similarly, except for omission of the filtration step. "In Liquid" digestion and mass spectrometric analysis was done at the University of Wisconsin-Madison, Biotechnology Center, Mass Spectrometry Facility. Cells were incubated on ice (15 min) in 20% (w/v) trichloroacetic acid/acetone (10-fold excess) and precipitated proteins were collected by centrifugation (10 min, 16,000 × g). The protein pellets were washed sequentially with ice-cold acetone (2×), ice-cold methanol (1×), and then solubilized in 4 μl of 8 M urea in 100 mM NH4HCO3 (ABC). After a 10-min incubation, this denatured protein solution was diluted to 25 μl by addition of 1.25 μl 25 mM dithiothreitol, 2 μl acetonitrile (ACN), 15.12 μl 25 mM ABC, 0.12 μl 1 M Tris-HCl (pH 7.5), and 2.5 μl trypsin [100 ng/μl Trypsin Gold (Promega Corp.) in 25 mM ABC]. Following a 12-h trypsin digestion (37˚C), another 2 μl of trypsin was added and a second digestion was done for 2 h at 42˚C. Digestion was terminated by addition of trifluoroacetic acid to a final concentration of 0.3% (w/v).
Peptides were analyzed by nanoLC-MS/MS using an Agilent 1100 nanoflow system (Agilent Technologies, Palo Alto, CA, USA) connected to a hybrid linear ion trap-orbitrap mass spectrometer (LTQ-Orbitrap XL, Thermo Fisher Scientific, San Jose, CA, USA) equipped with a nanoelectrospray ion source. Capillary HPLC was done using an in-house fabricated column with an integrated electrospray emitter essentially as described by Martin et al. (2000), except that 360 μm × 75 μm fused silica tubing was used. The column was packed with Jupiter 4 μm C12 particles (Phenomenex Inc., Torrance, CA, USA) to ca. 12 cm. Sample loading (8 μl) and desalting were achieved using a trapping column in line with the autosampler (Zorbax 300SB-C18, 5 μm, 5 mm × 0.3 mm, Agilent Technologies). During sample loading and desalting, an isocratic mobile phase of 1% (v/v) ACN, 0.1% (w/v) formic acid was used and run at 10 μl/min. Peptides were then eluted by a gradient (200 nl/min) of 0.1% (w/v) formic acid in water (Buffer A) and 95% (v/v) ACN, 0.1% (w/v) formic acid in water (Buffer B) with increasing Buffer A from 0 to 40% (75 min), 40 to 60% (20 min), and 60 to 100% (5 min). The LTQ-Orbitrap acquired MS/MS spectra in data-dependent mode with MS survey scans (m/z 300-2,000) collected in centroid mode at a resolving power of 100,000. Spectra were collected on the five most-abundant signals in each survey scan. Dynamic exclusion was used to increase dynamic range and maximize peptide identifications, and excluded precursors between 0.55 m/z below and 1.05 m/z above previously selected precursors; precursors remained on the exclusion list for 15 s. Singly charged ions and ions for which the charge state could not be assigned were rejected from consideration for MS/MS. Raw MS/MS data were searched against the non-redundant Delftia sp. Cs1-4 amino acid sequence database (5,867 protein entries; GenBank entry NC_015563) using an in-house Mascot search engine, allowing variable modifications (methionine oxidation; glutamine and asparagine deamidation), a peptide mass tolerance of 15 ppm, and a fragment mass tolerance of 0.6 Da.
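As a simple illustration of the precursor mass tolerance used in the database search, the following sketch (with hypothetical masses, not values from this study) checks whether an observed peptide mass falls within 15 ppm of a theoretical mass.

```python
def within_ppm(observed_mass, theoretical_mass, tol_ppm=15.0):
    """True if the observed mass is within tol_ppm of the theoretical mass."""
    ppm_error = (observed_mass - theoretical_mass) / theoretical_mass * 1e6
    return abs(ppm_error) <= tol_ppm

# Hypothetical example: a 1500 Da peptide may deviate by at most ~0.0225 Da at 15 ppm.
print(within_ppm(1500.0110, 1500.0000))   # True  (~7.3 ppm error)
print(within_ppm(1500.0400, 1500.0000))   # False (~26.7 ppm error)
```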
Protein annotation and significance of protein identifications were done with Scaffold (version 3.00.07, Proteome Software Inc., Portland, OR, USA). Peptide identifications were accepted if identities of at least two unique, matching peptides were established at ≥90.0% probability via the Peptide Prophet algorithm (Keller et al., 2002). Protein identification probabilities were assigned by the Protein Prophet algorithm (Nesvizhskii et al., 2003). Proteins that contained similar peptides and could not be differentiated based on MS/MS analysis alone were grouped to satisfy the principles of parsimony.
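A minimal sketch of the acceptance rule described above (at least two unique peptides, each at ≥90% Peptide Prophet probability); the data layout and the example peptide sequences are hypothetical and do not reproduce Scaffold's internal logic.

```python
def accept_protein(peptide_hits, min_unique=2, min_probability=0.90):
    """peptide_hits: list of (peptide_sequence, probability) tuples for one protein."""
    confident = {seq for seq, prob in peptide_hits if prob >= min_probability}
    return len(confident) >= min_unique

# Hypothetical peptide-spectrum matches for two proteins
protein_a_hits = [("LSTEGVAR", 0.99), ("TQDLIPFK", 0.97), ("LSTEGVAR", 0.95)]
protein_b_hits = [("AGGDLLTR", 0.99), ("VVEQPLNK", 0.55)]
print(accept_protein(protein_a_hits))   # True  (two unique peptides above 0.90)
print(accept_protein(protein_b_hits))   # False (only one confident peptide)
```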
DNA MANIPULATIONS, CLONING, AND SEQUENCE ANALYSIS
Standard procedures were used for DNA manipulations (Sambrook et al., 1989). Restriction enzymes were obtained from Promega (Madison, WI, USA) or New England Biolabs (Beverly, MA, USA). Genomic DNA was extracted using the Wizard Genomic DNA Purification kit (Promega, Madison, WI, USA). Primers were synthesized by the UW-Madison Biotechnology Center. Ex Taq DNA polymerase (Takara, Madison, WI, USA) was used in PCR amplification reactions. DNA fragments were purified from agarose gels with QIAquick spin columns (QIAGEN, Valencia, CA, USA) and sequenced at the Univ. Wisconsin Biotechnology Center. GenBank files of all sequences used in this study were imported into Geneious (v. 5.4.5; Biomatters Ltd., Auckland 1010, New Zealand) for manual curation, and alignment by Clustal, MUSCLE, or MAUVE, as appropriate. Whole genome sequences were uploaded to IslandViewer for identification of GEIs. Alignment figures were generated with Geneious v. 5.4.5 (http://www.geneious.com).
GENERATION OF ΔphnAcAd MUTANTS AND MUTANT COMPLEMENTATION
Genes putatively encoding the phenanthrene dioxygenase large and small subunits (phnAcAd), and the upstream (Cs1-4 genome positions) and downstream (strain Cs1-4 genome positions 1962038-1962817) fragments, were amplified with primers phnAcbglII/phnACKpnI and phnACsacII/phnAcsacI, respectively (Table S2 in Supplementary Material). The amplicons were gel purified and cloned into pGEM-T Easy (pSCH448 and pSCH449). These fragments were sequentially assembled on the same sites on pJK100 (pSCH453). Next, pSCH453 was introduced into E. coli BW19851 (λ pir), then conjugated into strain Cs1-4, and Tc-sensitive, Km-resistant transconjugants were selected (strain SCH455, ΔphnAcAd:Km). Replacement of phnAcAd with the kanamycin resistance gene was confirmed by PCR using primers phndectF and phndetcR.
For complementation, a 2,064-bp fragment containing phnAcAd (strain Cs1-4 genome positions 1960120-1962184) was amplified with primers PhnAcNdeF1 and PhnAcSacIIR1 (Table S2 in Supplementary Material) using strain Cs1-4 genomic DNA as a template. The amplicon was then cloned into pGEM-T Easy (pSCH434). The insertion was released by SacII and cloned into the SacII site on pSCH442, under the control of a strong promoter (PnpdA) identified in strain Cs1-4 (Chen and Hickey, 2011). A construct with the correct insertion (pSCH462, PnpdA + phnAcAd) was identified by sequencing multiple random colonies, and was then introduced into the ΔphnAcAd:Km mutant, yielding strain SCH471. The phnAcAd expression vector pSCH453 was used in a similar manner to test for complementation of a Burkholderia sp. Ch1-1 ΔphnAcAd:Km mutant.
HETEROLOGOUS EXPRESSION
The pET5a expression system (Promega, Madison, WI, USA) was used for gene expression in E. coli. Primers phnF and phnR (Table S2 in Supplementary Material) were used for cloning of the 5.3-kb phn RHD (phnAb-Ad) cluster from genomic DNA of Burkholderia sp. Ch1-1. The purified fragment was cloned into pGEM-T Easy, yielding pTPHE2, and propagated in E. coli DH5α. The fragment inserted into pTPHE2 was released by digestion with NdeI, and then purified and ligated into NdeI-digested pET5a to give pPHE2. The orientation and sequence of phnAb-Ad in pPHE2 was confirmed by sequencing. Plasmids were introduced by electroporation into E. coli strain BL21 AI for heterologous expression. Single colonies of E. coli BL21 AI (pPHE2) and the negative control E. coli BL21 AI (pET5a) were inoculated into separate tubes of LB (3 ml) and incubated overnight at 37˚C. Aliquots of each culture (2 ml) were then inoculated into fresh LB (100 ml) and incubated at 37˚C. Upon reaching OD600 = 0.6, arabinose was added to a final concentration of 0.2% (w/v) for induction. Following overnight incubation at 25˚C, cells were collected by centrifugation (3,800 × g, 5 min) then washed with, and resuspended in, M9 medium (43). For substrate specificity tests, PAH were added from hexane stocks to a final concentration of 100 μM. The cells were incubated at 25˚C, and the reaction was stopped by addition of an equal volume of ethyl acetate. Analysis of phenanthrene and phenanthrene metabolites was done in a manner similar to that described by Kang et al. (2003).
STRUCTURE OF THE phn ISLAND
The Delftia sp. Cs1-4 genome consisted of a single, 6.8 Mb chromosome. MAUVE alignment of the strain Cs1-4 chromosome with that of its closest relative with a complete genome sequence, Delftia acidovorans SPH1, indicated that genes encoding the entire pathway for phenanthrene catabolism were located within a 232,325-bp region absent in D. acidovorans SPH1 (Figure 1A). Analysis by GEI detection algorithms also identified this 232 kb region as a GEI (Figure 1B), now termed the phn island. The 3′ terminus of the island was located in a non-coding region, upstream of a gene putatively encoding an S fimbrial protein. This terminus was delimited by a 57-bp sequence, which also occurred nearby in the chromosomal region as an inverted repeat (Figures 2A,B).
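As a brief illustration of how such a repeated motif can be located, the sketch below scans a sequence for both direct and inverted (reverse-complement) copies of a query motif; the 12-bp motif and the surrounding sequence are placeholders, not the actual 57-bp strain Cs1-4 sequence.

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def find_repeats(chromosome, motif):
    """Return start positions of direct and inverted copies of motif in chromosome."""
    direct = [i for i in range(len(chromosome) - len(motif) + 1)
              if chromosome[i:i + len(motif)] == motif]
    inverted_motif = revcomp(motif)
    inverted = [i for i in range(len(chromosome) - len(motif) + 1)
                if chromosome[i:i + len(motif)] == inverted_motif]
    return direct, inverted

# Placeholder 12-bp "motif" embedded once directly and once as an inverted repeat
motif = "ATGCCGTTAGCA"
chrom = "GGGG" + motif + "TTTTTT" + revcomp(motif) + "CCCC"
print(find_repeats(chrom, motif))   # ([4], [22])
```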
The island contained 10 clusters of functionally related genes encoding: (1) phenanthrene catabolism to o-phthalate, (2) o-phthalate degradation to protocatechuate, (3) protocatechuate catabolism to central intermediates, (4) fatty acid/polyhydroxyalkanoate biosynthesis, (5) a type IV secretory system, (6) a PRTRC system, (7) element integration, and (8-10) hypothetical proteins. The island also contained genes encoding quorum sensing signal molecules (Figure 2A). Genes encoding phage-like mobilization functions and prophage stabilization were distributed across the element, and a 32-bp palindromic sequence was present at two locations (Figures 2A,B). While the G + C content of the phn island as a whole was similar to that of the strain Cs1-4 chromosome (66.7%), that of the region encoding the upper pathway for phenanthrene degradation (phn cluster; Figure 2A) showed a marked divergence at ca. 50% G + C.
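A divergence in base composition of this kind can be visualised with a simple sliding-window G+C calculation, as sketched below; the window size, step, and threshold are arbitrary illustrative choices, and this is not the method used to delineate the phn cluster.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

def gc_windows(seq, window=5000, step=1000):
    """Yield (start position, G+C fraction) for sliding windows along seq."""
    for start in range(0, max(len(seq) - window, 0) + 1, step):
        yield start, gc_content(seq[start:start + window])

def flag_divergent(seq, genome_gc=0.667, delta=0.10, window=5000, step=1000):
    """Windows whose G+C content differs from the genome average by more than delta."""
    return [(start, gc) for start, gc in gc_windows(seq, window, step)
            if abs(gc - genome_gc) > delta]

# Usage (placeholder sequence): a ~50% G+C region embedded in a ~66.7% G+C background
# would appear as a run of flagged windows, analogous to the phn cluster on the island.
```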
Further downstream, there were two regulatory genes: tetR was present in strains Cs1-4 and Ch1-1, but fragmented in A. faecalis AFK2, while a marR regulator was conserved between the clusters. The gst gene was also conserved, as were phnH-G. But, the clusters diverged at phnI, which was interrupted by a transposase in strain Cs1-4. In strain Cs1-4, phnI was the last gene in the cluster (Figure 4).
Four new genes (termed phnJKLM) were identified (Figure 4), which proteomics indicated were linked to phenanthrene degradation (see below). The first of these, phnJ, encoded a protein containing a signal peptide and a conserved domain of unknown function (DUF) 1302. The product of the following gene, phnK, contained a signal peptide and DUF 1329. The phnL product was a Ycf48-like protein, which, in phototrophs, functions in the assembly of Photosystem II (Komenda et al., 2008;Rengstl et al., 2011). Lastly, phnM encoded an RND-type hydrophobe/amphiphile efflux protein. In the other phn AFK2 phylotypes, phnJ was either truncated (A. faecalis AFK2) or fragmented (Burkholderia sp. Ch1-1). Further comparison of the phnJKLM cluster could be done only with strain AFK2, which differed from strain Cs1-4 in that phnLM were fragmented and only phnK was intact. GenBank searches revealed the phnJKLM cluster was widely distributed across diverse alpha-, beta-, and gammaproteobacteria, the latter with representatives from terrestrial and marine environments (Table 1). A common feature linking these diverse organisms was the location of phnJKLM adjacent to genes for oxidoreductases (Table 1).
In strain Cs1-4, the lower pathway of phenanthrene catabolism entails transformation of o-phthalate to protocatechuate (PCA), and PCA degradation to oxaloacetate and pyruvate via the meta pathway (Figure 3). The genes for o-phthalate and PCA degradation (oph and pmd, respectively) occurred in separate clusters (Figures 2A, 5, and 6). The organization of the oph cluster ( Figure 5) was generally similar to that of Comamonas testosteroni KF-1. However, a significant difference between the oph clusters of strain Cs1-4 and that of C. testosteroni KF-1 was the presence in the strain Cs1-4 cluster of two non-identical copies of ophA2 (Figure 5), which encodes the oxygenase component of phthalate-4,5-dioxygenase (Batie et al., 1987).
The pmd cluster occurred in two loci, one located on the phn island (Cluster 1; Figures 2A and 6) and the other outside of it (Cluster 2; Figure 6). There was also a third locus in strain Cs1-4 where additional copies of pmdAB [protocatechuate 4,5 (meta)dioxygenase] were located, but were not associated with any other PCA degradation genes (DelCs14_3008, 3009). The structure of pmd Cluster 1 was similar to that of Ramlibacter tataouinensis TTB310 while Cluster 2 organization was like that of other Comamonadaceae as exemplified by Comamonas sp. DJ-12 (Figure 6). The structures of the pmd clusters diverged in two ways (Figure 6).
First, Cluster 1 lacked pmdK/pmcT (putatively encoding an aromatic acid transporter) and instead had a gene for a predicted LTTR-domain protein located in that position. Second, Cluster 1 encompassed a gene predicted to encode an NAD-binding dehydrogenase, the function of which in the meta pathway is unknown. The LTTR-domain gene was conserved in the pmd clusters of strains Cs1-4 and TTB310, whereas the NAD-binding dehydrogenase was present only in strain Cs1-4 (Figure 6).
A cluster of genes near the 5′ end of the island encoded enzymes predicted to link biosynthesis of fatty acids with that of the carbon storage polymers, polyhydroxyalkanoates (PHA; Figure 2A). Key intermediates of fatty acid synthesis used in production of medium chain length PHA are enoyl-CoA, 3-ketoacyl-CoA, (S)-3-hydroxyacyl-CoA and 3-hydroxyacyl-acyl-carrier-protein (Witholt and Kessler, 1999). Enzymes linking these to PHA synthesis are enoyl-CoA hydratase (Fiedler et al., 2002;Tsuge et al., 2003), 3-ketoacyl-(acyl-carrier-protein) reductase (Ren et al., 2000), and epimerase (Madison and Huisman, 1999;Witholt and Kessler, 1999). Within the putative PHA cluster, three genes (DelCs14_1623, 1625, 1627) were predicted to encode MaoC-like proteins, which have been demonstrated as important in PHA biosynthesis (Park and Lee, 2003). These MaoC-like proteins were orthologs of the (R)-specific enoyl-CoA hydratase, PhaJ1 (Park and Lee, 2003). The product of DelCs14_1636 was a FabG ortholog, which is a 3-ketoacyl-(acyl-carrier-protein) reductase. Epimerase activity could be provided by two genes predicted to encode MmgE/PrpD family proteins (DelCs14_1624, DelCs14_1635) for which this function has recently been established (Lohkamp et al., 2006). Other genes with potential links to fatty acid biosynthesis were (locus): enoyl-CoA reductase (a.k.a. butyryl-CoA dehydrogenase; DelCs14_1626), acetyl-CoA synthetase (DelCs14_1645), crotonase (DelCs14_1643), and NAD(P)(H)-dependent oxidoreductases (DelCs14_1637) for the inter-conversion of aldehydes/ketones and alcohols. Another function of the PHA cluster may be to synthesize pantothenate. This compound is essential for the synthesis of coenzyme-A (CoA), which in turn is required for fatty acid metabolism, and thus PHA biosynthesis as well. β-Alanine is required for pantothenate production, and is produced primarily by aspartate decarboxylation (Williamson and Brown, 1979); the enzyme catalyzing this reaction, L-aspartate decarboxylase, was the predicted product of DelCs14_1648. Aspartate formation in turn proceeds from oxaloacetic acid (OAA), a product of the phenanthrene lower pathway. Levels of OAA (and acetyl-CoA) could also be modulated by formyl-CoA transferase (DelCs14_1628), which catalyzes the reversible transfer of CoA between acetyl-CoA and OAA, and citryl-CoA lyase (DelCs14_1639, _1629), which transforms citryl-CoA into acetyl-CoA and OAA. The PHA cluster contained five proteins of a complete ABC transport system with similarity to that for branched chain amino acids. The function of the transporter is unknown, but the presence of a periplasmic binding protein (DelCs14_1630) suggested substrate import.
The phn island contained 25 genes predicted to be involved in some aspect of DNA mobilization (Table 2). Like other GEI, it possessed a bacteriophage P4-like integrase located near the site of insertion, which had the greatest similarity to an integrase from Burkholderia phage BcepC6B (Table 3). While additional close orthologs (≥60% homology; Bi et al., 2012) of Int Cs14 were not identified, a number of low similarity hits were found in betaproteobacterial genera of the Burkholderiales, including the pathogens Burkholderia pseudomallei and Burkholderia thailandensis (Table 3). A feature common to most of these was integrase localization adjacent to genes predicted to encode RngG/CafA orthologs (RNase G; Table 3). The type IV secretion system (T4SS) of the phn island was similar to that of the F-plasmid in overall arrangement of the tra functions, and possession of five tra genes specific to F-plasmid-like T4SS (Lawley et al., 2003; Table 4). But, the phn island T4SS lacked a number of the F-plasmid T4SS genes, and contained seven hypothetical genes the F-plasmid lacks. One gene notably absent from the phn island was traJ that, on the F-plasmid, controls T4SS gene transcription (Frost et al., 1994). The phn island TraI protein belonged to the PFL_4751 family of relaxases, which are required for transfer of SXT/R391 elements, as well as ICE of Pseudomonas fluorescens Pf-5 (Flannery et al., 2011).
Genes encoding other types of DNA integration/excision functions were dispersed throughout the island (Figure 2A; Table 2). There were 10 transposases, and those of the IS4- and IS66-type were closely associated with the phn cluster. Seven genes had predicted functions that could have roles in GEI stabilization or regulation of GEI excision. Potential regulators were a LexA-like protein and an FlhC ortholog (flagellar transcriptional activator). The LexA family includes bacterial prophage repressor proteins involved in the SOS response (Beaber et al., 2004), while FlhC orthologs can function in global regulation of cellular processes. Both LexA and FlhC are part of a regulatory loop that is activated as part of the SOS response, which controls excision and transfer of SXT-type GEI (Beaber et al., 2002;Burrus and Waldor, 2003). Proteins that encoded potential toxin/antitoxin systems that could function in GEI stabilization included (Table 2): a protein with a PIN (PilT N terminus) domain, two Fic/Doc (Filamentation induced by cAMP/Death on curing)-like proteins and the OLD protein/UvrD pair (Overcome lysogeny defect/UV repair). The latter combination is commonly associated with putative GEI, and has been hypothesized to be either a toxin-antitoxin system (RNase toxin linked with an antitoxin), or a recombinase possibly involved in GEI integration (Khan et al., 2010). The phn island also contained all seven constituents of a "PRTRC" system, which is cataloged in TFAM as "a genetic system associated with mobile elements". Lastly, the lasI/luxR pair (Figure 2A) could direct the formation of N-acylhomoserine lactone (AHL) cell signaling molecules. The role of AHL as global regulators of many cellular functions is well established, and lasI/luxR orthologs have been identified on GEI in Burkholderia cepacia (Baldwin et al., 2004). However, in strain Cs1-4, deletion of lasI had no detectable effect on its growth or degradation of phenanthrene (Chen and Hickey, 2011); thus, the function of a LasI-encoded AHL is unknown.
COMPARATIVE PROTEOMIC ANALYSIS
More than 600 proteins were identified in each of the proteomes of pyruvate- and phenanthrene-grown Delftia sp. Cs1-4, including 12 proteins of the phenanthrene degradation upper pathway and 13 proteins of the lower pathway (Table 5). The relative abundance of PhnF was the greatest of all phenanthrene degradation pathway proteins detected, and ranked third overall in relative abundance in the phenanthrene proteome (Table S3 in Supplementary Material). In the pyruvate proteome, the same array of upper pathway proteins were identified, but at lower abundances. In contrast, no proteins of the lower pathway were detected in the pyruvate-grown cells. Based on comparative abundance of peptide scans in pyruvate- vs. phenanthrene-grown cells, all phenanthrene degradation proteins detected exhibited induction during growth on phenanthrene, except perhaps PhnG (Table 5). Peptides were detected for all four of the new phenanthrene degradation genes described above (phnJKLM), and of these PhnJ ranked just behind PhnF in terms of overall abundance (Table S3 in Supplementary Material). Furthermore, all four proteins showed apparent induction by growth on phenanthrene. In the lower pathway, peptides matching both copies of OphA2 were identified, with those from locus DelCs14_1655 more abundant. Likewise, peptides matching gene products from both pmd clusters were detected, including both copies of pmdB, C, J, and K. However, peptides matching the third copy of pmdAB (DelCs14_3008-09) were not detected.
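A minimal sketch of the kind of spectral-count comparison underlying these induction calls is shown below; the counts are invented placeholders, not values from Table 5 or Table S3.

```python
def fold_induction(phen_counts, pyr_counts, pseudocount=1):
    """Ratio of spectral counts in phenanthrene- vs. pyruvate-grown cells."""
    return (phen_counts + pseudocount) / (pyr_counts + pseudocount)

# Hypothetical spectral counts: (phenanthrene-grown, pyruvate-grown)
proteins = {"PhnF": (120, 8), "PhnJ": (95, 6), "PhnG": (10, 9), "PmdA": (40, 0)}
for name, (phe, pyr) in proteins.items():
    ratio = fold_induction(phe, pyr)
    status = "induced" if ratio >= 2 else "not clearly induced"
    print(f"{name}: {ratio:.1f}-fold, {status}")
```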
GENETIC ANALYSES OF phnAa-d
The genes encoding the RHD large and small subunits, phnAcAd, were deleted by allelic exchange in strain Cs1-4, and the resultant ΔphnAcAd mutant was unable to grow on phenanthrene (Figure 7A). Complementation in trans with phnAcAd restored growth of the ΔphnAcAd mutant to levels similar to that of the wild type (Figure 7A). The growth impairment imposed upon the ΔphnAcAd mutant was specific to use of phenanthrene as a sole carbon source, as the mutant's growth on pyruvate was indistinguishable from that of the wild type or complemented mutant (Figure 7B). The phnAa-d cluster was cloned from Burkholderia sp. Ch1-1 and expressed in E. coli to determine whether interruption by a transposon (see above) affected the ability of the cluster to produce a functional enzyme. Heterologous expression was successful, and E. coli harboring phnAa-d transformed all PAH tested (Figure 7C). No transformation was detected by the control E. coli BL21 AI (pET5a). Naphthalene and fluorene were most rapidly transformed, and the parent compounds were not detectable by HPLC after 1 h of incubation. For the remaining PAH tested, the apparent preference of the RHD decreased in the order of phenanthrene > anthracene > pyrene (Figure 7C). Notably, the RHD transformed ca. 50% of pyrene to pyrene dihydrodiols.
DISCUSSION
The phn island represented a type of GEI that, to the best of our knowledge, has not been previously reported. Classes of GEI are defined by having an integrase homology of ≥60%, significant structural synteny and a common site of chromosomal integration (Bi et al., 2012). For the phn island, the only protein reaching the 60% homology threshold with Int Cs14 was the integrase of the podophage BcepC6B, an element with no additional similarities to the phn island. Thus, other GEI of the phn island class have not yet been revealed by genome sequencing projects. The preferred site(s) of integration for phn island-like elements cannot be conclusively identified without orthologs of Int Cs14 for comparison. But, one of the most common targets for GEI, tRNA genes, can probably be ruled out as the nearest such gene was ca. 60 kb downstream of the phn island. Integration of GEI at sites besides tRNA loci is most extensively studied with the SXT/R391-type elements, which target prfC (Peptide Chain Release Factor 3; Burrus and Waldor, 2003). Because tRNA genes and prfC encode essential functions, they are widely conserved and sequences embedded in these can serve as effective targets (attB sites) for homologous regions (attP site) in an ICE (Toleman and Walsh, 2011). It is unclear whether the fimbrial protein gene present in the chromosome at the 3′ end of the phn island had the broad conservation typical of ICE integration loci. However, the 57-bp sequence located in the 3′ terminus of the phn island and repeated in the adjacent chromosomal region strongly resembled an attP-attB pair, but was far larger than any such sequence recognition site yet reported. Further study is needed to elucidate the potential function of that 57-bp motif.
Processes governing GEI mobilization are key components of their biology and, while these are currently undetermined for the phn island, that element possessed genes known to mediate mobilization of SXT/R391-type GEI. Conjugal transfer of the SXT element occurs via a T4SS, and TraD (T4SS conjugal coupling factor) of the phn island was a protein of the SXT/TOL subfamily. Also, both the phn island and the SXT element possessed orthologs of FlhC and the lambda CI repressor (SetC and SetR, respectively in the SXT element), which are key (along with FlhD) in regulating SXT element's excision and circularization (Burrus and Waldor, 2003). The phn island and SXT element differed in that the latter has an flhD ortholog (setD) adjoining setC, whereas the phn island did not. But, there was a second locus in strain Cs1-4 (DelCs14_4452, _4453) that could supply FlhD. With the SXT element, these regulatory proteins are part of a system that triggers its excision in response to DNA damage, which can result from exposure to UV light or reactive oxygen species (ROS). A similar system could operate with the phn island and ROS exposure might be particularly important, as ROS would be generated via activity of oxygenases the island encodes for phenanthrene catabolism, and/or catecholic metabolites that are produced during that process (Elstner, 1982;Dalton et al., 1999;Schweigert et al., 2001).
Genomic islands are emerging as key elements shaping the biodegradative capacity of bacterial communities. Other GEI that carry biodegradation functions include (with the substrates degraded in parentheses): the 100 kb clc element (chlorocatechols, aminophenols; Gaillard et al., 2006), the 90 kb bph-sal element (biphenyl; Nishi et al., 2000), the 55 kb biphenyl catabolic transposon Tn4371 (biphenyl; Toussaint et al., 2003), the 100 kb Tn3-like Alteromonas sp. SN2 transposon (naphthalene; Jin et al., 2011) and an unclassified GEI in P. naphthalenivorans CJ2 (naphthalene; Yagi et al., 2009). In terms of catabolic activity, all of these GEI (except for the clc element) were similar to the phn island in encoding both the "upper" and "lower" pathways for catabolism of a polyaromatic compound. However, the ca. 232 kb phn island was distinct from all of the above in encoding additional functions such as a potential link of catabolism to PHA/fatty acid biosynthesis, production of AHL signaling molecules and >50 hypothetical proteins. Thus, phn island-encoded functions likely extended to cellular processes beyond biodegradation. The phn island was also distinguished from other biodegradation GEI as it was the only element of this group with apparent similarity in its mobilization biology to that of SXT/R391-type GEI.
The present study provided the first insight into a potential mechanism of HGT for the phn AFK2 genotype. "Curing" experiments with A. faecalis AFK2 indicated possible association of the phenanthrene degradation phenotype with a ca. 43 kb plasmid (Kiyohara et al., 1990). But, which, if any, of the phn, oph, or pmd genes were carried by that plasmid was not reported. The divergence in G + C content of the phn cluster (ca. 50%) indicated a phylogenetic origin distinct from any host of the phn genes that is currently known. As noted above, the G + C content for Delftia sp. Cs1-4 was 66.7%, while that of A. faecalis AFK2 was 68% (Kiyohara et al., 1982) and 62.5% for Burkholderia sp. Ch1-1 (see text footnote 3). The G + C content for Acidovorax sp. NA3 has not been reported, but in other Acidovorax genomes, it ranged from 64.9 to 66.8% (see text footnote 3). Thus, the phn cluster constituted a module mobilized independently of the phn island. Mechanisms for HGT of the phn cluster are unknown, but the bracketing of the cluster by IS4- and IS66-type transposons may implicate those elements in its transfer. Finally, although the phylogenetic origin of the phn cluster cannot be unequivocally determined, a betaproteobacterial source would be consistent with that of other phn island components (e.g., T4SS cluster, PRTRC cluster, etc.). Also, while the 50% G + C content of the phn cluster was divergent from the genomic G + C content of its known hosts, which were all of the order Burkholderiales, it would be in line with the median genomic G + C content for the betaproteobacteria as a whole (Lightfield et al., 2011).
Six other GEI were identified by BLAST searches that had a T4SS cluster with high homology to that of the phn island; three of these also contained a PRTRC region like that of the phn island. All of these GEI were located in betaproteobacteria of the Comamonadaceae (Acidovorax sp. JS42) or Rhodocyclaceae (Alicycliphilus denitrificans K601, Dechloromonas aromatica RCB), although none could be classed with the phn island because of low integrase homology with Int Cs14 (≤23%). But, the GEI from strains JS42 and K601 were of the same class based on high integrase homology (96%), a common site of insertion (tRNA-Ala), and overall synteny. Two significant findings regarding the phn cluster in strain Cs1-4 were the relocation of phnI, and the identification of four new genes, phnJKLM. The latter were not annotated in the GenBank record for strain AFK2, but were readily identified by ORF analysis of that sequence and its alignment with the phn region of strain Cs1-4. The degeneration of phnJLM in strain AFK2 suggested these genes were part of an ancestral phn module common to strains AFK2 and Cs1-4, but not utilized in the former. In contrast, while specific functions of phnJKLM in strain Cs1-4 were unknown, all exhibited apparent induction by growth on phenanthrene, suggesting some role in the utilization of that compound. Furthermore, since phnJKLM orthologs were associated with oxidoreductases for metabolism of a variety of hydrocarbons, their functions may be broadly applicable in hydrocarbon degradation, rather than specific to degradation of phenanthrene, or even PAH in general.
Segregation of phnI from the main body of the phn cluster could potentially have significant regulatory and metabolic implications. Singleton et al. (2009) demonstrated that in Acidovorax sp. NA3, phnAc, phnB, and phnC were co-transcribed and, since phnI was upstream of phnC, it too would be included in that transcript. But, in strain Cs1-4, such co-transcription of phnI is uncertain as ca. 8 kb and five genes separate it from the body of the phn cluster. In this case, phnI expression may be under separate transcriptional control, which could potentially affect the catabolic pathway. PhnI catalyzes the last step of the upper pathway (2-carboxybenzaldehyde conversion to o-phthalate), and a notable difference between strains Cs1-4 and Ch1-1 (in which phnI was included within the phn cluster) was that the former accumulated o-phthalate during growth on phenanthrene while the latter did not (Shetty, 2011). In strain Cs1-4, o-phthalate accumulation could reflect desynchronization of PhnI activity with that of the upstream enzymes in the phn catabolic pathway.
Growth on phenanthrene is known to induce phn expression (Singleton et al., 2009), but regulatory mechanisms controlling transcription of the phn genes are unknown. Conservation of the marR-like element within the phn cluster may have indicated it served a regulatory function in phenanthrene degradation that is common to all of the phn genotype bacteria. The marR elements are generally responsive to aromatic compounds in general (Alekshun and Levy, 1999), and in P. naphthalenivorans CJ2, regulates expression of naphthalene degradation genes (Jeon et al., 2006). Also, marR genes can be up-regulated by exposure to naphthenic acids (Zhang et al., 2011), chemicals that have some resemblance to metabolites produced during phenanthrene metabolism. Thus, a possible role for marR is regulation of the phn genes via interaction with one or more phenanthrene metabolites.
The oph and pmd clusters each presented interesting types of apparent redundancy. The multiple pmd clusters were similar to those in other bacteria, such as the multiple phenol degradation clusters (mhp genes) in Dechloromonas aromatica (Salinero et al., 2009). The physiological significance of such redundancy is as yet unknown. But, for Delftia sp. Cs1-4, proteomics data presented here proved both pmd paralogs were expressed, and apparently induced by growth on phenanthrene. A search of all currently available genomes possessing genes annotated as "phthalate 4,5-dioxygenase" yielded no other examples of an oph operon with two copies of ophA2. Two nearly adjacent copies of phthalate 4,5-dioxygenase oxygenase were located in the actinobacterium Pseudonocardia dioxanivorans CB1190 (Psed_3921, Psed_3923; NC_015312.1), but these were not associated with any other oph genes. Thus, the structure of the oph cluster in strain Cs1-4 is likely the first example of its kind. Phthalate dioxygenase is composed of a reductase, OphA1, and a single alpha subunit, OphA2 (Batie et al., 1987;Kweon et al., 2008), and thus the role of the two non-identical copies of OphA2 is unknown. Proteomics data established expression of both copies, but the within-cluster copy (DelCs14_1655) appeared more abundant from spectral scans. We speculate that the second copy of OphA2 may be optimized for a substrate other than o-phthalate.
Heterologous expression and targeted mutagenesis established phnAa-d as encoding the RHD responsible for initiating metabolism of phenanthrene. Until now, evidence for this activity had been indirect, such as monitoring transcript production (Singleton et al., 2009). Heterologous expression demonstrated that, like other PAH RHD, the phnAa-d enzyme transformed a number of PAH including a four-ring compound, pyrene. However, a notable difference between the phn RHD and the nah-like RHD was that organisms possessing the latter typically grow on many of the PAH that are substrates for the nah RHD. But, phenanthrene is the only PAH that supports growth of all currently cultured bacteria with the phn AFK2 genotype (Acidovorax sp. NA3, Delftia sp. Cs1-4, and Burkholderia sp. Ch1-1).
In conclusion, the present analyses of the phn island have added new dimensions to our knowledge of PAH biodegradation, mechanisms of HGT that shape microbial communities and the nature of GEI in general. This study has provided starting points for investigations into new biodegradative functions, such as the roles of PhnJKLM or the two copies of OphA2, as well as identification of molecular mechanisms mediating phn island mobilization. The acquisition of complete genome sequences for additional bacteria of the phn AFK2 genotype and/or possessing close orthologs of Int Cs14 would greatly facilitate future studies on the structure and function of phn island-type GEI.
ACKNOWLEDGMENTS
These studies were supported by funding (to William J. Hickey) from the Univ. of Wisconsin-Madison, College of Agricultural and Life Sciences (Hatch-McIntire-Stennis), National Science Foundation (MCB0920664), and the O.N. Allen Professorship in Soil Microbiology. Sequencing, assembly, and computational annotation of the Delftia sp. Cs1-4 and Burkholderia sp. Ch1-1 genomes was done by the U.S. Department of Energy, Joint Genome Institute, through the Community Sequencing Project (CSP795673 to William J. Hickey). The work conducted by the U.S. Department of Energy, Joint Genome Institute was supported by the U.S. Department of Energy, Office of Science under contract No. DE-AC02-05CH11231. | 2016-06-17T04:37:17.827Z | 2012-04-04T00:00:00.000 | {
"year": 2012,
"sha1": "189181575385226cf0069f1d9f022b020ba52b4a",
"oa_license": "CCBYNC",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2012.00125/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "189181575385226cf0069f1d9f022b020ba52b4a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
207393489 | pes2o/s2orc | v3-fos-license | Burnout syndrome as an occupational disease in the European Union: an exploratory study
The risk of psychological disorders influencing the health of workers increases in accordance with growing requirements on employees across various professions. This study aimed to compare approaches to burnout syndrome in European countries. A questionnaire focusing on stress-related occupational diseases was distributed to national experts of 28 European Union countries. A total of 23 countries responded. In 9 countries (Denmark, Estonia, France, Hungary, Latvia, Netherlands, Portugal, Slovakia and Sweden) burnout syndrome may be acknowledged as an occupational disease. Latvia has burnout syndrome explicitly included on the List of ODs. Compensation for burnout syndrome has been awarded in Denmark, France, Latvia, Portugal and Sweden. A possibility to acknowledge burnout syndrome as an occupational disease exists in only 39% of the countries, with most compensated cases occurring only in recent years. New systems to collect data on suspected cases have been developed, reflecting the growing recognition of the impact of the psychosocial work environment. In agreement with the EU legislation, all EU countries in the study have an action plan to prevent stress at the workplace.
In parallel with the growing demands on employees across all occupations, the negative influence on workers' mental health is continuously rising, and chronic stress-related occupational diseases, especially burnout syndrome, are becoming an important issue 1,2) .
The ILO/WHO List of occupational diseases (ODs), revised in 2010, which serves as an example for all countries in the world, includes in Chapter 2.4, among the "Mental and behavioural disorders", the item "Post-traumatic stress disorder and additional mental or behavioural disorders not mentioned". On the other hand, the European Commission Recommendation (EC 2003) concerning the European Schedule of ODs does not include any stress-induced disorder. However, in several countries, there is a possibility to update the List of ODs and the criteria of the diseases 3,4) .
The term "burnout" was first used in 1974 by Freudenberger in his study called "Staff burnout". Shortly after that, in 1976, burnout syndrome was defined by Maslach and Jackson as a three dimensional syndrome characterized by exhaustion, cynicism and inefficacy, i.e. the opposite to engagement, described as energy, involvement and efficacy 5) . Burnout syndrome prevention has been discussed worldwide due to the economic burden of the absenteeism and other negative consequences related to job satisfaction, work performance and patient care. According to Kissling et al., the German yearly average number of sick days attributed to burnout rose from 0.67 in 2004 to 9.1 d in 2011, i.e. 14 times 6) . In the Netherlands, about 15% of sickness absence of the working population was caused by burnout and the annual cost of this disorder reached 1.7 billion Euros in 2005 7) .
Nowadays, there is no doubt that burnout syndrome can be caused and/or aggravated by work, and it is very important to diagnose it early and introduce preventive measures. About 8% of the German working population believe they suffer from burnout syndrome 6) . In a study among 7,400 Czech physicians, 34% felt that they already showed symptoms of burnout and 83% perceived themselves to be at risk of it 8) . Differences among various specialists were seen, the highest risk being found among traumatologists, hemodialysis workers, infectionists, internists, gynaecologists, radiologists and surgeons. Interestingly, in this study, occupational physicians stood at the other end of the scale, with the smallest proportion of physicians with burnout syndrome.
The most exposed occupations are the helping professions, such as health care workers, social workers, police officers, and teachers, and high-touch jobs such as customer services; some studies also concern lawyers, managers, etc., who, however, are less affected 9) . Based on data collected by The Health and Occupation Research (THOR) Network in the UK, a national occupational health surveillance scheme utilising voluntary data reported by medical experts (including psychiatrists and occupational physicians), workload and communication difficulties with other workers seem to represent the most significant risk factors for developing work-related mental health problems 10) .
One reason why it is difficult to prevent stress in the workplace is the problem of its evaluation. The level of stress varies with the type of job, and its perception is considerably subjective. Karasek's Job Strain model uses structured questionnaires and deals with two aspects, "demands" and "decision latitude". Subjects with high demands and low decision latitude are the most strained and at the highest risk of developing stress-related disorders 11) . According to Siegrist's effort-reward imbalance model, the most stressful condition occurs when the reward does not match the effort made 12) . A recent review of job stress models found the highest value for predicting burnout syndrome in the following models: the Job Strain Model and the Effort/Reward Imbalance Model (which were the most used and, when used in combination, the most effective), the Mediation Model of Maslach and Leiter, and other newer models, such as the Job Demands-Resources Model and the Demand-Induced Strain Compensation model.
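As an illustration of how these two models are typically operationalised, the sketch below classifies a respondent into Karasek's job-strain quadrant and computes a Siegrist effort-reward imbalance ratio; the scale ranges, item counts and cut-offs are illustrative assumptions, not values prescribed by the original instruments.

```python
def karasek_quadrant(demands, decision_latitude, cutoff=2.5):
    """Classify scores on an assumed 1-4 scale; 'high strain' = high demands, low latitude."""
    high_demands = demands >= cutoff
    high_latitude = decision_latitude >= cutoff
    if high_demands and not high_latitude:
        return "high strain"
    if high_demands and high_latitude:
        return "active"
    if not high_demands and high_latitude:
        return "low strain"
    return "passive"

def effort_reward_ratio(effort, reward, n_effort_items=6, n_reward_items=11):
    """Siegrist-style ERI ratio of mean item scores; values > 1 indicate effort exceeding reward."""
    return (effort / n_effort_items) / (reward / n_reward_items)

print(karasek_quadrant(demands=3.4, decision_latitude=1.8))     # 'high strain'
print(round(effort_reward_ratio(effort=20, reward=25), 2))      # 1.47 -> imbalance
```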
The latest revision of the International Classification of Diseases (ICD-10) does not list burnout syndrome as an individual diagnosis. Burnout syndrome is listed as an additional diagnosis under Chapter XXI -Problems related to life-management difficulty, coded as Z 73.0 (State of vital exhaustion). Similarly, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), i.e. the 2013 update by the American Psychiatric Association, does not include burnout syndrome. According to some specialists, burnout syndrome is simply a state of emotional and physical exhaustion which to a great extent overlaps with depression. However, most researchers believe that burnout should better be considered a work-related disorder 13) .
This study aims to map the evaluation systems for burnout syndrome in the countries of the European Union (EU), possible compensation of this disorder, and the preventive measures used. The study was approved by the Charles University Review Board and supported by institutional grants.
Occupational health specialists from all 28 EU countries were contacted electronically and asked to complete a questionnaire. These contacts were based on the list of participants invited by the Directorate-General for Employment, Social Affairs and Inclusion of the EU Commission in 2015 and 2016 in Luxembourg as national experts in diagnostic criteria and occupational diseases statistics, to unify the data collection on occupational diseases. If needed, members of the Monitoring Occupational Diseases and tracing New and Emerging Risks in a NETwork (MODERNET) were approached. Questions focused on the presence of an official national List of ODs, the possibility to acknowledge burnout syndrome and compensate patients, diagnostic criteria, and prevention strategies. In the second step, the participants from the countries where the compensation was declared were contacted to provide data concerning the compensation and number of cases.
Participants from 23 EU countries completed the questionnaire and were included in the study. As can be seen in Table 1, a List of ODs is used in 21 countries. In 9 countries (Denmark, Estonia, France, Hungary, Latvia, Netherlands, Portugal, Slovakia and Sweden) burnout syndrome can be acknowledged as an occupational disease. Solely in Latvia is burnout syndrome explicitly listed on the List of ODs. A further 5 countries (Denmark, Estonia, Hungary, Slovakia and Portugal) accept chronic stress-related occupational diseases as occupational through the "open item" in the List of ODs. In the Netherlands and Sweden, countries not using a List of ODs, any disease or injury may be acknowledged as occupational, provided sufficient proof of the causality has been given. France may use an "Additional occupational disease recognition system". On the other hand, in Italy, burnout syndrome can be reported only as a disease of probably other than occupational origin, without awarding any benefits. The German List of Occupational Diseases was enlarged in 2017 and now includes 80 diseases; however, burnout syndrome is not listed. The diagnostic criteria also differ among the countries. In most countries, the evaluation is strictly individual and the decision is made by regional/national committees for recognition of occupational diseases. Cases with concurrent non-occupational factors should be excluded.
Compensation, once the occupational origin has been recognized, may be awarded in a total of eight countries. To date, benefits through social insurance have been provided to patients with burnout syndrome in only five countries (Denmark, France, Latvia, Portugal, and Sweden).
Additional specific criteria are used in several countries to acknowledge burnout syndrome as an occupational disease. In Denmark, which has the highest number of compensated cases, burnout syndrome can be acknowledged only if certain psychiatric diagnoses are present, as shown in Table 1. Evaluation is based on exposure data such as working in a stressful environment, occurrence of many short deadlines, lack of support from the management, etc. Cases must be presented to the ODs Committee, composed of representatives of the Labour Market (Danish employers and the Danish workers' organization). About a third of compensated cases came from the healthcare and educational environment, another third from the social-work area, and the rest from other branches. In Latvia, 17% of cases were directors, 14% firefighters and engine drivers, 14% tax inspectors, 12% healthcare workers, 12% judges, 10% teachers, 7% bookkeepers, and 14% other occupations. The compensated occupations in Portugal included bookkeeper, commercial clerk, businessman, pharmaceutical assistant, toolmaker (all 14%), and two unspecified jobs (30%).
Most of the compensated cases in Sweden with known jobs were healthcare professionals (36%, i.e. doctors, nurses, assistant nurses), followed by jobs requiring shorter university education (24%), directors of small or medium-sized companies (20%), and other jobs (20%).
In France, chronic stress-related occupational disorders (i.e. depression, anxiety disorders and burn-out syndrome) can be recognized and compensated as occupational diseases using an additional occupational disease recognition system (Système complémentaire): if the disease can be directly attributed to the victim's usual work activity and has led to his or her death or permanent disability. In such cases of "off-list" recognition, presumption of origin is also forfeited. The portfolio must be submitted to the Regional Committee for the Recognition of Occupational Diseases to evaluate whether there is a direct and essential link between the usual work activity and the disease.
The Netherlands' Center for Occupational Diseases (NCOD) Guidelines for occupational physicians, general practitioners and psychologists are followed in the Netherlands, where burnout syndrome is included in "stress-related disorders" 7) . The 13 psychological criteria for inclusion are: malaise-apathy, sense of being over-burned, anhedonia, sense of powerlessness, demoralization, depression, emotional instability, concentration problems, tension, ruminating, irritability, demotivation, and inability to think clearly. In addition, physical problems (fatigue, sleeping problems, headache, abdominal pain, muscle pain, etc.) may be counted. The three exclusion criteria are: acute stress disorder, psychiatric pathology, and somatic disease. Excessive exposure to factors associated with the effort-reward imbalance model of Siegrist and the Job Strain Model of Karasek is considered sufficient to cause occupational burnout. Some of these factors are: high psychological job demands, little job autonomy, little social support from colleagues and/or managers, procedural injustice in the organization, relational injustice in the organization, and high emotional demands. The final decision on work-relatedness is based on the judgement of the occupational specialists and on the number of stressing factors and their level, as opposed to the intensity of exposure to non-work-related psychosocial stressors. The most frequent professions among the acknowledged cases were: service staff and sellers (21%); intellectual, scientific and artistic occupations (20%); directors (19%); administrative staff (17%); and other (23%). The highest number of acknowledged cases may be explained by the fact that in the Netherlands there is no compensation given.
As can be seen in Table 1, the numbers of compensations in the countries differ considerably, as several of them did not acknowledge any occupational burnout syndrome in spite of the possibility to do so.
Burnout syndrome has not yet been officially accepted as an occupational disease in most EU countries, even though, according to the Eurobarometer survey in 2014, 57% of the responders reported that exposure to stress is the main health and safety risk they face in their workplace today 14) . One of the reasons might be that only 53% of the surveyed establishments in the EU-28 reported having sufficient information on how to include psychosocial risks in risk assessments. Also, the absence of a clear individual diagnosis for burnout syndrome and the use of many different "F" diagnoses, such as anxiety, depression, or adjustment disorder, can lead to confusion in how it is reported.
However, most countries have developed systems to collect data on suspected cases of burnout, either by using the open item in their List of ODs or, in the absence of a list of ODs, by allowing burnout to be acknowledged and compensated. In the countries where compensation is possible, compensated cases have appeared especially in recent years (Denmark, Latvia, Sweden). In addition, a high number of cases were acknowledged according to NCOD criteria in the Netherlands in 2015, as can be seen in Table 1. In some countries, for example France and the United Kingdom, there are also additional surveillance systems (not linked to compensation) which enable the reporting of "new" occupational diseases, including burnout. In France, the RNV3P (Réseau National de Vigilance et de Prévention des Pathologies Professionnelles, i.e. National Network for Monitoring and Prevention of Occupational Diseases), which started in 2001, had collected 508 cases of burnout syndrome by 2015 14) . Similarly, in the UK system called THOR, psychiatrists reported 24 cases of burnout (1999 to 2009), occupational physicians 34 cases (1996 to 2015) and general practitioners 6 cases (2006 to 2015) 10) .
Prevention of stress-related diseases in the workplace is carried out in EU countries as required by law, in accordance with EU legislation. Risk assessment includes prevention by avoiding or reducing workloads, threats of violence, harassment and lone working, or recommends obligatory work breaks, etc. 14) . An action plan to prevent stress at the workplace has been set up in 40% of establishments with more than 19 employees in the 28 EU countries, and the percentage rises with the size of the enterprise. Sixty percent of establishments in the EU introduced direct participation of employees to address psychosocial risks, and on average 16% of establishments use a psychologist 15) . The situation in EU countries is shown in Table 1.
In Sweden, social and organizational factors are the second most common cause of reported occupational illnesses after musculoskeletal factors. They account for about a third of reported occupational illnesses, with an increase of 70% since 2010.
The emphasis on prevention from state authorities, and thus from workers, shows that burnout syndrome is an economically burdensome problem which, in the context of effectiveness and workers' health, has to be dealt with. From this perspective, the European Agency for Safety and Health at Work (EU-OSHA) supported prevention of occupational stress-related disorders with the campaign "Healthy workplaces manage stress", which provides tools and guidance for stress management. It has been shown that a psychologically safe environment contributes to good patient care and at the same time acts as a protective factor against burnout of the staff.
There are several limitations in our study. One of them is that we were not able to obtain answers from all European countries despite repeated attempts (five countries are missing). Another limitation is that there are significant differences in legislation among European countries, and even the acknowledgement of an occupational disease does not mean that similar benefits are given to patients in individual countries. The countries do not register the occupations of compensated patients in a similar way, and mostly only the economic sector branches are available. Importantly, due to the design of the study, data collection by national experts could have introduced a selection bias concerning participants. Also, their specialisation was closer to compensation than to preventive measures. Therefore, to complete the data, we used data from the ESENER EU-OSHA study 15) . There is no European Society for Occupational Health which, in our opinion, could improve communication and the unification of both preventive measures and compensation criteria.
The recognition of burnout syndrome is problematic. In national nosological classifications such as the Dutch and Swedish classifications, burnout is defined quite differently from the way it is defined in research. In the Dutch classification, burnout is equated with neurasthenia, a condition that is nothing new since it was isolated about 150 years ago. In the Swedish classification, the term exhaustion disorder is employed, but the involvement of work at an etiological level is not required for exhaustion disorder to be diagnosed. However, this problem probably results from the international disagreement on the aetiology and diagnostic criteria of burnout 16) . For this reason, the lack of an official diagnosis of burnout limits access to treatment, disability coverage, and workplace accommodations. Therefore, it would be advantageous if the WHO, in the 11th version of the ICD, and the DSM defined specific diagnostic criteria for burnout syndrome as a distinct nosographic entity, to guide policymakers in recognizing burnout syndrome as an occupational disease 13) .
Moreover, adding new items to the List of ODs can bring about a preventive effect by increasing both employer attention and employee caution, leading to the implementation of more preventive measures. This is the main reason why the ILO/WHO List of ODs (revised 2010) includes Mental and behavioural disorders. Nevertheless, more work is needed to create clear, strict and objective diagnostic and evaluation criteria, especially for the evaluation of working conditions, so that misuse is prevented. | 2018-04-03T03:16:02.452Z | 2017-11-03T00:00:00.000 | {
"year": 2017,
"sha1": "5936eef1c1ca7d663fbeb22ac8ac1a66eb894f79",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/indhealth/56/2/56_2017-0132/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5936eef1c1ca7d663fbeb22ac8ac1a66eb894f79",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
126316763 | pes2o/s2orc | v3-fos-license | EFFECTIVENESS OF GSECE LEARNING MODEL (GUIDING, SEARCHING, EXPERIENCING, COMMUNICATING, EVALUATING) FOR PHYSICS LEARNING AT VOCATIONAL HIGH SCHOOL.
Andik Kurniawan 1, Indrawati 1, Sudarti 1, Sutarto 1, I Ketut Mahardika 1, and Iis Nur Asyiah 1. Department of Post Graduate Science Education, Jember University, East Java, Indonesia.
As a solution to the above problems, a workable approach is to improve the learning process. One learning model that can be proposed is the GSECE learning model. This learning model is grounded in constructivist learning, which emphasizes that knowledge is built in the mind of the learner, and it stresses the meaningfulness of learning. Constructivist learning is based on the view that skills and knowledge are not passively accepted and memorized but involve the active participation of learners through thought and direct activity, so that knowledge is developed [2]. Truly contextual learning occurs when students (learners) are able to process new information or knowledge within their own frame of reference (memory, experience, and response) [3]. In addition, contextual learning tends to look for meaning, for sensible relationships, and for usefulness between the concepts learned and real-world situations.
One of the learning models that can be proposed is the GSECE learning model, which has five important stages. In the first stage, Guiding, the teacher gives instructions and directs students to determine the material or topics in accordance with the material to be studied. This aims to enable students to gain a higher level of understanding. Ulya states that when teachers give guidance to students in the learning process, it can bring students to a higher understanding [4]. This is in line with Vygotsky's view that a child can move from the actual developmental level to a higher level of potential development when receiving guidance or assistance from someone more mature or more competent [5]. Similarly, Nwagbo states that when the teacher provides guidance in the form of illustrations until students are able to generalize and draw conclusions, students become able to apply the learning materials widely [6].
In the second stage, Searching, the teacher gives the task of looking for information related to the material or topic, which can be obtained from reading, images, or video. This aims to provide opportunities for students to explore ideas independently and at the same time to optimize their potential to master physics concepts well. According to Bruner, learning activities work well and creatively when students reach their own conclusions about the topics studied, so that the concepts are understood more deeply and retained longer in memory [5], [7]-[12]. Further, Bajongga says that learning that optimizes students' potential fosters creativity, so that students are able to find and develop facts and concepts on their own and to solve problems [13].
In the third stage, Experiencing, the teacher provides an opportunity for students to find phenomena or information related to the topic in everyday situations experienced by the students. Learning that connects the material or learning topics with the knowledge students already have makes learning more meaningful [5], [14], [15]. This is supported by Khaerul, who states that learning that exposes students to problems in everyday life can be more meaningful for them [16].
In the fourth stage, Communicating, the teacher gives students the opportunity to communicate to peers about the material or topic from their search results and from their study based on their experience in everyday life. This aims to train students to communicate and to reveal the extent to which students master the material in learning activities. According to Piaget, the process of exchanging ideas through communication and interaction with peers and adults plays an important role in the intellectual development of children and the formation of their knowledge [5]. In addition, communication shows the students' ability to understand the material, and it leads to interaction and discussion within the group, which makes students active in learning [17].
In the fifth stage, Evaluating, the teacher evaluates the students' learning outcomes after the lesson. This aims to obtain information and data that can be used as a basis for determining the level of progress, development, and achievement of student learning, as well as the effectiveness of the teacher's teaching. To obtain accurate information and data about the level of achievement of learning objectives by students, it is necessary to evaluate learning activities, which can take the form of a test [18].
In relation to the issues discussed and the proposed solution, an experimental study was conducted to compare the effectiveness, in terms of learning outcomes, of a control class using conventional learning and an experimental class using the GSECE learning model, and to examine the student responses and activities of both classes.
Methodology:-
This study is a quasi-experiment, considering that not all variables can be strictly controlled. The research design used was a pretest-posttest control group design. The population in this study were the students of class X TKJ at SMK Negeri 7 Jember in the 2017/2018 academic year. Before determining the control and experimental classes, Levene's test was applied to the daily physics score data on the previous material to check the homogeneity of all candidate samples. Based on Levene's test, class X TKJ 1 (35 students) was selected as the control group and class X TKJ 2 (35 students) as the experimental group. The dependent variables in this study consist of learning outcomes, student responses, and student learning activities. The independent variables consist of the GSECE learning model for the experimental group and the conventional learning model for the control group. Data collected in this study include learning outcomes measured using tests, student responses measured by questionnaire, and student activity measured by observation sheet.
Data were analyzed with an independent-samples t-test to determine the effect of the independent variable (learning model) on the dependent variable (learning outcomes). Descriptive analysis techniques were used to describe student responses and student learning activities.
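The statistical procedure above (Levene's test for homogeneity followed by an independent-samples t-test) could be carried out as in the following minimal sketch; the score arrays, the Python/SciPy tooling, and the significance cutoff are illustrative assumptions, not the study's actual data or software.

```python
# Illustrative sketch: Levene's test for homogeneity of variance, then an
# independent-samples t-test comparing the two group means. Dummy values only.
from scipy import stats

experimental = [78, 82, 75, 80, 77]   # post-test scores, GSECE class (placeholder data)
control = [70, 73, 69, 72, 71]        # post-test scores, conventional class (placeholder data)

# Levene's test: a p-value above 0.05 suggests the variances are homogeneous.
levene_stat, levene_p = stats.levene(experimental, control)

# Independent-samples t-test; pooled variance is used only if Levene's test passes.
t_stat, t_p = stats.ttest_ind(experimental, control, equal_var=levene_p > 0.05)

print(f"Levene p = {levene_p:.3f}, t = {t_stat:.2f}, p = {t_p:.3f}")
```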
Result:-
The results described are the learning outcomes achieved by the students in each group after following the GSECE learning model (experimental group) and the conventional model (control group). The results are presented in Table 2 (results of the post-test analysis with the independent-samples t-test). Table 2 shows that the mean post-test score of the experimental group students was 78.29 (SD = 6.34), and that of the control group students was 71.14 (SD = 6.23). This shows that the learning outcomes of students who followed the GSECE model are higher than those of students taught with conventional methods. Based on Table 2, the obtained t = 4.75 indicates that t count > t table (4.75 > 1.99) and the p-value (0.000 < 0.05), so Ho is rejected, meaning that there is a difference between the mean exam score of the experimental group and that of the control group.
Student responses in the experimental group using the GSECE learning model and in the control group using the conventional learning model are presented in Table 3.
Table 3:- Description of student response values. GSECE learning model: percentage average 78.46 (category: good); conventional learning model: percentage average 60.45 (category: quite good).
Based on Table 3, the response of students in the experimental group using the GSECE learning model reached an average percentage of 78.46, which falls in the good category, while the response of students in the control group using the conventional learning model reached an average percentage of 60.45, which falls in the quite good category. From these data, the student response to the GSECE learning model is better than to the conventional learning model.
Student activity in the experimental group using the GSECE learning model and in the control group using the conventional learning model is presented in Table 4. Based on Table 4, student activity in the experimental group using the GSECE learning model reached an average percentage of 69.86, which falls in the active category, while student activity in the control group using the conventional learning model reached an average percentage of 62.92, also in the active category. From these data, student activity in the experimental group using the GSECE learning model was better than in the control group using the conventional learning model.
Discussion:-
This study aims to determine the effectiveness of the GSECE learning model in terms of learning outcomes, student responses, and student learning activities. The results of this study indicate a significant difference between the average learning outcomes of students who learned with the GSECE model (78.29) and those of students who learned with the conventional learning model (71.14).
The overall hypothesis testing described above shows that the GSECE learning model, through its five important stages, is more effective in terms of learning outcomes, student responses, and student activities than the conventional learning model.
The main factor that makes the GSECE learning model superior lies in its syntax, which emphasizes that students use all their potential to seek and find concepts on their own and then connect them with experience in everyday life. A learning model in which students search for, find, and connect concepts with experiences in daily life makes learning more meaningful [19], sharpens intelligence, and enables students to respond effectively to learning materials [20]. Another relevant study by Mulyasa states that learning that optimizes students' potential to seek, find, and experience directly has been shown to increase student motivation, so that students become more active in teaching and learning activities [21].
Conclusion:-
Based on the research results, several conclusions can be drawn. First, the physics learning outcomes of the group of students taught with the GSECE model were better than those of the group taught with the conventional learning model. Second, the responses of the students taught with the GSECE model were better than those of the students taught with the conventional model. Third, the activities of the students taught with the GSECE model were better than those of the students taught with the conventional learning model. | 2019-04-22T13:09:28.679Z | 2018-01-31T00:00:00.000 | {
"year": 2018,
"sha1": "fecd7547f34d78d1cf04bbb1809d0bf06ab7c817",
"oa_license": "CCBY",
"oa_url": "http://www.journalijar.com/uploads/666_IJAR-21707.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "4f96fe055f31c9d4be586320e363ee7e51cd2bf3",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
21311472 | pes2o/s2orc | v3-fos-license | Apathy is associated with incident dementia in community-dwelling older people
Objective To assess whether apathy and depressive symptoms are independently associated with incident dementia during 6-year follow-up in a prospective observational population-based cohort study. Methods Participants were community-dwelling older people in the Prevention of Dementia by Intensive Vascular Care trial, aged 70–78 years, without dementia at baseline. Apathy and depressive symptoms were measured using the 15-item Geriatric Depression Scale (GDS-15). Dementia during follow-up was established by clinical diagnosis confirmed by an independent outcome adjudication committee. Hazard ratios (HRs) were calculated using Cox regression analyses. Given its potentially strong relation with incipient dementia, the GDS item referring to memory complaints was assessed separately. Results Dementia occurred in 232/3,427 (6.8%) participants. Apathy symptoms were associated with dementia (HR 1.28, 95% confidence interval [CI] 1.12–1.45; p < 0.001), also after adjustment for age, sex, Mini-Mental State Examination score, disability, and history of stroke or cardiovascular disease (HR 1.21, 95% CI 1.06–1.40; p = 0.007), and in participants without depressive symptoms (HR 1.26, 95% CI 1.06–1.49; p = 0.01). Depressive symptoms were associated with dementia (HR 1.12, 95% CI 1.05–1.19), also without apathy symptoms (HR 1.16, 95% CI 1.03–1.31; p = 0.015), but not after full adjustment or after removing the GDS item on memory complaints. Conclusions Apathy and depressive symptoms are independently associated with incident dementia in community-dwelling older people. Subjective memory complaints may play an important role in the association between depressive symptoms and dementia. Our findings suggest apathy symptoms may be prodromal to dementia and might be used in general practice to identify individuals without cognitive impairment at increased risk of dementia.
Little is known about the relation between apathy and dementia in community-dwelling older people. In a recent study, a cluster labeled "cognitive/motivational symptoms" that included symptoms of apathy was linked to incident dementia over 3 years in elderly without depression. 6 Others reported that in otherwise healthy patients with symptomatic small vessel disease, apathy, not depression, was associated with cerebral white matter damage. 7 Similarly, in mild cognitive impairment, apathy has been associated with progression to dementia, 8 in some studies contrary to late-life depression. 4,5 Finally, from a pathophysiologic perspective, apathy is associated with incident vascular disease, which is a risk factor for dementia. [8][9][10] We hypothesize that apathy symptoms in community-dwelling older people without dementia are associated with future cognitive decline and dementia independently of depressive symptoms. Furthermore, since memory complaints can occur both as a symptom of depression and as a sign of incipient dementia, we explore how subjective memory complaints influence the relation between depressive symptoms and incident dementia.
Participants
Participants were derived from the Prevention of Dementia by Intensive Vascular Care (PreDIVA) trial. 11 This trial tested the efficacy of a nurse-led, multicomponent cardiovascular intervention on the prevention of dementia in community-dwelling older persons. Community-dwelling older people (aged 70-78 years) registered with a participating general practice were invited to participate in the trial (figure e-1, http://links.lww.com/WNL/A11). Exclusion criteria were dementia and disorders likely to hinder successful long-term follow-up (e.g., terminal illness, alcoholism). Recruitment was from June 7, 2006, until March 12, 2009. Follow-up time ranged from 6 years for participants randomized in 2009 up to 8 years for those randomized in 2006-2007. Participants were assessed at baseline and during 2-yearly follow-up assessments. At these visits, data were collected on medical history, medication use, cardiovascular risk factors, cognitive status (Mini-Mental State Examination [MMSE]), depression (15-item Geriatric Depression Scale [GDS-15]), and disability (AMC Linear Disability Scale). The final assessment was conducted on March 4, 2015. The study was approved by the medical ethics committee of the Academic Medical Center, Amsterdam, Netherlands. Participants gave written informed consent before their baseline visit.
Symptoms of apathy and depression
Symptoms of apathy and depression at baseline were defined corresponding to previous studies. 9,10,12,13 Symptoms were measured using the GDS-15. Apathy symptoms were operationalized as the 3 apathy items on the GDS-15 (GDS-3A), validated previously. 14 These items are (1) "Have you dropped many of your activities and interests?" (2) "Do you prefer to stay at home, rather than going out and doing new things?" (3) "Do you feel full of energy?" (reverse-coded). Depressive symptoms were operationalized as the remaining 12 items on the GDS-15 (GDS-12D). Isolated apathy symptoms were operationalized as the GDS-3A score in participants with a score ≤1 on the GDS-12D and isolated depressive symptoms as the GDS-12D score in participants with a score ≤1 on the GDS-3A. Participants with >1 missing item on the GDS-3A or >2 items missing on the GDS-12D were excluded from all analyses.
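As a rough illustration of these scoring rules, the following sketch computes the GDS-3A and GDS-12D subscores and applies the missing-item exclusion rule; the item numbers assigned to the apathy subscale, the data layout, and the function name are assumptions made for illustration, not the study's analysis code.

```python
# Hypothetical sketch of the GDS-15 subscoring described above.
# The apathy item numbers and reverse coding are assumptions for illustration.
APATHY_ITEMS = {2, 9, 13}   # assumed GDS-15 item numbers forming the GDS-3A
REVERSED_ITEMS = {13}       # "Do you feel full of energy?" is reverse-coded

def score_gds(responses):
    """responses: dict mapping item number (1-15) to 0, 1, or None (missing)."""
    gds3a = gds12d = 0
    missing_3a = missing_12d = 0
    for item in range(1, 16):
        value = responses.get(item)
        if value is None:
            if item in APATHY_ITEMS:
                missing_3a += 1
            else:
                missing_12d += 1
            continue
        if item in REVERSED_ITEMS:
            value = 1 - value
        if item in APATHY_ITEMS:
            gds3a += value
        else:
            gds12d += value
    # Exclusion rule from the text: >1 missing GDS-3A item or >2 missing GDS-12D items.
    if missing_3a > 1 or missing_12d > 2:
        return None
    return gds3a, gds12d
```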
Dementia and cognitive decline
Dementia was defined as a clinical diagnosis according to DSM-IV criteria, confirmed by 2 members of an independent, blinded outcome adjudication committee based on all available clinical information. Diagnoses of dementia were re-evaluated after 1 year of additional follow-up to avoid false-positive diagnoses. For participants who dropped out of the study, the dementia status was retrieved by a research nurse from electronic health records or through contact with the general practitioner at the end of the study, and presented to the blinded outcome adjudication committee. The dementia diagnostic process is described in more detail elsewhere. 11 As secondary endpoint, cognitive decline was operationalized as the change in MMSE score during the study compared to baseline.
Statistical analyses
Apathy symptoms, isolated apathy symptoms, depressive symptoms, and isolated depressive symptoms at baseline were assessed separately as continuous predictors. Participants with missing data were left out of the analyses. Only baseline symptoms were considered for the main analyses to ensure incident dementia was studied in individuals in whom dementia was excluded, and to avoid bias caused by selective drop-out of participants with apathy/depression. Since no relation was expected between baseline symptoms and subsequent random treatment allocation, the intervention and control group were analyzed as a single cohort. A sensitivity analysis adjusting for the intervention was performed to evaluate confounding by treatment allocation. Hazard ratios (HRs) for dementia were calculated using Cox proportional hazards regression. Model 1 adjusted for baseline age and sex; model 2 additionally adjusted for baseline history of cardiovascular disease, history of stroke/TIA, disability, and MMSE score. The rationale for model 2 was to evaluate whether apathy and depressive symptoms add value to other determinants commonly evaluated in clinical practice for the prediction of dementia. As secondary outcome, we assessed change in MMSE score during the study compared to baseline, using a repeated measurements mixed model adjusted for time to measurement and baseline MMSE score. We explored all covariates for possible interactions with apathy/depressive symptoms in their relation to dementia. To assess the effect of the competing risk of death, we repeated the Cox proportional hazards analyses with mortality and the combined endpoint of dementia or mortality as outcomes. To assess whether there was a dose-effect relationship for the GDS-3A, we assessed the HR per step increase on the GDS-3A.
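A hedged sketch of how such a Cox proportional hazards model could be fitted with the Python lifelines package is shown below; the file name, column names, and covariate set are assumptions about how the data might be organized and do not reproduce the study's actual analysis.

```python
# Illustrative Cox proportional hazards fit with lifelines.
# All column names and the input file are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("prediva_followup.csv")   # hypothetical analysis dataset

covariates_model2 = ["gds3a", "age", "sex", "mmse",
                     "disability", "cvd_history", "stroke_history"]
data = df[["followup_years", "dementia"] + covariates_model2].dropna()

cph = CoxPHFitter()
cph.fit(data, duration_col="followup_years", event_col="dementia")
cph.print_summary()   # hazard ratios (exp(coef)) with 95% confidence intervals
```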
We performed several exploratory analyses: (1) to assess whether apathy/depressive symptoms are associated with dementia hazard in participants with normal to high MMSE scores, we performed analyses in subgroups based on the median baseline MMSE score ≥28 and <28; (2) to assess whether apathy/depressive symptoms are associated with short-term rather than long-term development of dementia, as reported for depressive symptoms, [15][16][17] we evaluated HRs across tertiles of time to dementia and visually examined whether the dementia survival curves for participants with and without symptoms diverged consistently over time; (3) to analyze whether symptom stability influenced their relation with dementia, we divided participants into 5 categories based on the change in number of symptoms between visits 1 and 2: increase (>1 higher), decrease (>1 lower), and if not in one of those: stable low (mean <1), stable moderate (mean 1 through <2), and stable high (mean ≥2); (4) since the relation between the GDS-12D and incident dementia may be strongly influenced by a single question (GDS-15 number 10) directly referring to subjective memory complaints-"Do you feel you have more problems with your memory than most people?"-and according to a meta-analysis of the factor structure of the GDS-15 this question may also be considered part of the apathy construct, 18 we performed a sensitivity analysis with this question removed from the GDS-12D. In these analyses, we defined isolated apathy as apathy in participants with ≤1 symptom on the remaining 11 items on the GDS-12D.
Results of the main analyses are listed in table 2. Adjusted for age and sex, both apathy symptoms (HR per symptom 1.23, 95% CI 1.08-1.40; p = 0.002) and depressive symptoms (HR 1.11, 95% CI 1.04-1.19; p = 0.002) were associated with an increased risk of dementia. Associations were similar for isolated apathy (HR 1.20, 95% CI 1.01-1.43; p = 0.036) and depressive symptoms (HR 1.15, 95% CI 1.02-1.30; p = 0.018). After additional adjustment for disability, MMSE score, history of cardiovascular disease (CVD), and history of stroke, only apathy (HR 1.21, 95% CI 1.06-1.40; p = 0.007) and isolated apathy symptoms (HR 1.20, 95% CI 1.00-1.45; p = 0.046) were associated with an increased risk of dementia. There was an interaction between isolated apathy symptoms and a history of stroke (HR 1.58, 95% CI 1.01-2.49; p = 0.046), suggesting a stronger association with dementia in this participant group, but this was not significant after Bonferroni correction for the number of tested interactions. Adjusted for age and sex, apathy, isolated apathy, depressive, and isolated depressive symptoms were all associated with decline in MMSE score (β range −0.11 to −0.15, all p values <0.001) (table 3). Additional adjustment for baseline disability, history of CVD, and history of stroke slightly attenuated these results. There was no significant time by predictor interaction in any of the models.
Sensitivity analyses with additional adjustment for treatment allocation did not meaningfully alter results. In analyses regarding the competing risk of death (tables e-1 and e-2, http://links.lww.com/WNL/A12), all 4 predictors were associated with a higher risk of death in all models (HR range 1.10 to 1.34, p value range p ≤ 0.001 to p = 0.002). Tables e-3 through e-6 list results according to the number of apathy symptoms. There was no clear dose-response relationship between the number of apathy symptoms and dementia risk. Decline in MMSE score during the study did increase with more apathy symptoms. There was also a cumulative relation between the number of apathy symptoms and mortality (figure 1). Results were generally similar for isolated apathy.
Abbreviations: CI = confidence interval; HR = hazard ratio. Model 1: crude; model 2: adjusted for age and sex; model 3: additionally adjusted for disability (AMC linear disability scale), MMSE score, history of cardiovascular disease, and history of stroke. Symptoms measured using the 15-item Geriatric Depression Scale (GDS-15). Apathy symptoms: score on the 3 apathy items on the GDS-15 (GDS-3A); depressive symptoms: score on the remaining 12 items on the GDS-15 (GDS-12D); isolated apathy: GDS-3A score in participants with a score ≤1 on the GDS-12D; isolated depressive symptoms: GDS-12D score in participants with a score ≤1 on the GDS-3A.
Results in groups with baseline MMSE score ≥28 or <28 are listed in tables e-7 and e-8 (http://links.lww.com/WNL/A12). Compared to our main results, associations with dementia were less strong in the group with an MMSE score <28, and not significant. In participants with an MMSE score ≥28, adjusted for age and sex, apathy symptoms (HR 1.26, 95% CI 1.04-1.51; p = 0.016) and isolated apathy symptoms (HR 1.27, 95% CI 1.01-1.61; p = 0.041) were associated with incident dementia. These associations were slightly attenuated in model 2, leaving none significant. Results for the
analyses divided according to tertiles of time to dementia are listed in table e-9 (http://links.lww.com/WNL/A12). The associations seem to be strongest for all predictors for incident dementia <3.89 years, compared to dementia between 3.89 and 5.77 years and >5.77 years after baseline. However, the divergence between dementia survival curves for participants with and without apathy symptoms at baseline was constant over time (figure 2). Similarly, the survival curves for depressive symptoms did not show any clear changes in the association between depressive symptoms and dementia over time. Longitudinal results regarding the change in apathy/depressive symptoms from visits 1 to 2 are listed in tables e-10 and e-12. Participants in the decreasing number of symptoms category had lower HRs compared to those in the increasing, stable moderate, and stable high categories, but group sizes were small. Results for the sensitivity analysis removing GDS item 10-"Do you feel you have more problems with your memory than most people?"-are listed in
Discussion
Our results suggest symptoms of apathy are associated with incident dementia in community-dwelling older people without cognitive impairment. This association is independent of depressive symptoms, age, sex, MMSE score, disability, and history of CVD or stroke. Depressive symptoms were also associated with dementia independent of age and sex but no longer when the symptom of subjective memory complaints was left out. Both apathy and depressive symptoms are independently associated with cognitive decline according to the MMSE, independent of age, sex, baseline MMSE score, disability, and history of CVD or stroke. The associations may be stronger for short-term (<4 years) compared to long-term dementia incidence and attenuated in patients with remitting symptoms.
Table 3: Association between predictors at baseline and adjusted mean difference in Mini-Mental State Examination (MMSE) score during the study compared to baseline. Mixed model with repeated measurements of MMSE score during the study, predicted by baseline apathy/depression status. Model 1: adjusted for baseline MMSE score; model 2: additionally adjusted for age and sex; model 3: additionally adjusted for disability (AMC linear disability scale), history of cardiovascular disease, and history of stroke; M/n: MMSE measurements, number of individuals; β: adjusted mean difference. Symptoms measured using the 15-item Geriatric Depression Scale (GDS-15). Apathy symptoms: score on the 3 apathy items on the GDS-15 (GDS-3A); depressive symptoms: score on the remaining 12 items on the GDS-15 (GDS-12D); isolated apathy: GDS-3A score in participants with a score ≤1 on the GDS-12D; isolated depressive symptoms: GDS-12D score in participants with a score ≤1 on the GDS-3A.
The association of depressive symptoms in older people with cognitive decline and dementia has been reported previously. [19][20][21][22] However, these studies did not differentiate among apathy, subjective memory complaints, and other depressive symptoms. The association of apathy with cognitive decline has not been described previously in unselected community-dwelling older people but corresponds to findings in mild cognitive impairment and other neurologic disease. 4,5,23,24 Furthermore, a recent study reported that a symptom cluster labeled "cognitive/motivational symptoms," which included symptoms of apathy, predicted incident dementia over 3 years in elderly without depression. 6 Findings in patients with mild cognitive impairment that apathy is associated with future cognitive decline while depression is not 4,5 were not corroborated by our study, although the association with incident dementia was stronger for apathy symptoms, and for depressive symptoms seemed largely dependent on the symptom of memory complaints. This last finding was unexpected and could suggest that dysphoric symptoms of depression are not related to future cognitive decline while symptoms of apathy are. However, given the exploratory nature of this finding and the limited scope of the GDS-15, our study does not provide enough evidence to warrant this conclusion, and it requires affirmation by future studies. The absence of a dose-response relationship between apathy symptoms and dementia risk is contrary to reports regarding incident cardiovascular disease. 12 This absence may be related to the strong dose-response relation between the number of apathy symptoms and the competing risk of death. In participants with high apathy scores, mortality may occur before dementia can develop. Our findings suggesting that apathy and depressive symptoms are more related to development of dementia in the short term rather than the long term agree with previous reports on depressive symptoms and dementia. 15-17 They could indicate that apathy and depressive symptoms are an early marker of dementia, rather than a true risk factor. However, we found no clear decline of HRs across tertiles of time to dementia, and the divergence between the dementia survival curves seemed relatively constant. Our results therefore remain inconclusive. The finding that patients with decreasing symptom scores have a lower risk of dementia is concordant with previous results in depression. 21
These findings may, however, be affected by small group numbers and require more in-depth analyses in larger studies to allow for any conclusions. Due to the risk of selective drop-out, the effect sizes in these analyses cannot be compared directly to those of the main analyses.
This study has some limitations. Individuals deemed unlikely to be able to complete 6 years of follow-up due to a medical condition were excluded from participating. This may have affected the distribution of apathy and depressive symptoms in our cohort and influenced their associations with dementia incidence. Results regarding the dose-response relation between apathy symptoms and dementia should be interpreted with caution due to the relatively small number of dementia cases and participants endorsing all apathy symptoms, especially for isolated apathy. The narrow age window at baseline (70-78 years) may have left us unable to detect any interactions between apathy/depressive symptoms and age in their relation with dementia. Selective drop-out may have influenced the reliability of our analyses regarding decline in MMSE score, since MMSE scores were unavailable from the time participants discontinued the study. However, our main analysis with all-cause dementia as outcome was not hampered by selective drop-out, with 98% complete follow-up for this outcome. Finally, it is important to stress that the depressive symptoms referred to in this article do not include apathy symptoms and therefore do not represent the overall construct of depression as it is currently diagnosed, which includes symptoms of apathy.
In our study, community-dwelling older people without manifest cognitive impairment endorsing all GDS-3A apathy symptoms had approximately double the hazard of dementia compared to those endorsing none, independent of age, disability, and MMSE score. Subgroup analyses imply the hazard is higher in patients with normal to high MMSE scores (>27 points). The more than twofold increase in mortality risk further illustrates the clinical relevance of apathy symptoms in these persons. Our population-based sample, its similarity to national (cohort) data, 25 and the completeness of follow-up on all-cause dementia (98.0%) and mortality (99.8%) suggest our results are generalizable. The factor structure of the GDS-15 and the likelihood of individuals endorsing these symptoms may vary across cultures and translations. 18 Our findings require validation and further exploration, preferably in datasets with more dementia cases, longer follow-up, detailed apathy and depression assessment, and a broader age range. The mechanism relating apathy to dementia is unknown. Conceivably, apathy is a behavioral manifestation of subclinical cerebral atherosclerosis, cerebrovascular brain damage (e.g., small vessel disease), or other pathology associated with cognitive decline and dementia. 7,26 This could be assessed in large-scale longitudinal studies involving brain MRI.
This study contributes to the accumulating evidence stressing the relevance of recognizing apathy in the general older population, regardless of concurrent depressive symptoms. 7,10,12,26,27 Given their independent association with incident dementia and mortality, clinicians should be watchful of apathy symptoms, although their possibly limited specificity must be considered when evaluating future risk of dementia. Distinguishing apathy from depression is important. Although depression is a relatively well-known risk factor for health deterioration in old age, persons with apathy symptoms without dysphoric symptoms may easily be overlooked, especially since they may tend to withdraw from care. Since depressive symptoms other than memory complaints did not seem to be related to dementia risk, apathy symptoms may be better suited to alert general practitioners to vulnerable persons who could benefit from a more active caregiving approach. If replicated, the differential associations for apathy and depressive symptoms suggest that studies regarding the relation between depression and dementia must distinguish the 2 to allow comprehensive interpretation. In research, our findings suggest apathy symptoms may be prodromal to dementia and may be useful to consider when trying to identify persons at increased risk. | 2017-12-29T21:32:37.132Z | 2018-01-02T00:00:00.000 | {
"year": 2018,
"sha1": "decf6939ae2e977485fe9ec7166467351e94dc55",
"oa_license": "CCBYNCND",
"oa_url": "https://n.neurology.org/content/neurology/90/1/e82.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "decf6939ae2e977485fe9ec7166467351e94dc55",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271439155 | pes2o/s2orc | v3-fos-license | Optimal Weighted Voting-Based Collaborated Malware Detection for Zero-Day Malware: A Case Study on VirusTotal and MalwareBazaar
We propose a detection system incorporating a weighted voting mechanism that reflects the reliability of each vote based on the accuracy of each detector's examination, which overcomes a key problem of cooperative detection. Collaborative malware detection is an effective strategy against zero-day attacks compared to using only a single detector because the strategy might pick up attacks that a single detector overlooked. However, cooperative detection is still ineffective if most anti-virus engines lack sufficient intelligence to detect zero-day malware. Most collaborative methods rely on majority voting, which prioritizes the quantity of votes rather than the quality of those votes. Therefore, our study investigated the zero-day malware detection accuracy of a collaborative system that optimally rates the weight of each vote based on the malware categories of expertise of each anti-virus engine. We implemented the prototype system with the VirusTotal API and evaluated the system using real malware registered in MalwareBazaar. To evaluate the effectiveness of zero-day malware detection, we measured recall using the inspection results on the same day the malware was registered in the MalwareBazaar repository. Through experiments, we confirmed that the proposed system can suppress the false negatives of uniformly weighted voting and improve detection accuracy against new types of malware.
Introduction
As the Internet becomes a critical piece of infrastructure for our lives, malware threats are increasing yearly. Malware steals confidential information, destroys important data, and sometimes uses the data as leverage for blackmail, money laundering, and other highly malicious crimes. In recent years, a large-scale distributed denial-of-service attack was carried out by exploiting IoT devices infected with a malware program called Mirai [1]. In 2017, a large number of computers around the world were infected with ransomware called WannaCry [2]. Furthermore, in 2022, targeted attacks using a malware application called Emotet were carried out via attachments in phishing emails [3].
Although accurate and quick malware detection is essential, a single detection method can overlook attacks. New malware attacks are constantly observed each year according to the report [4]. This report indicates that malware detection methods should collect intelligence about new malware in real time to respond to its activity. Many malware detection systems have been widely studied, but they can still overlook malware depending on its type. Even today, as the accuracy of machine learning technology continues to improve, this approach still has problems in terms of usability and safety.
While collaborative malware detection has been widely studied to tackle the issue of hard-to-detect attacks [5][6][7][8], existing approaches are still insufficient to detect new malware. These methods conduct voting based on the detection results obtained from multiple anti-virus vendors testing the same sample and use the majority result to determine the sample's maliciousness. While studies [5][6][7][8] showed that this approach can detect malware more accurately than a single anti-virus inspection, the majority decision can negatively impact the accuracy of the judgment. For example, even if reliable detection engines with high accuracy determine that a suspicious file is malware, the malware will be overlooked when most classifiers determine that the file is benign. Based on this concern, these majority-vote-based methods might miss new types of malware because most anti-virus engines lack sufficient information to detect the new malware at that time.
To consider the reliability of the anti-virus engines in the final decision, we weight the voting value of each anti-virus engine's malware detection according to its accuracy history. This method can be expected to improve detection accuracy by reducing false negatives, that is, overlooked malware. This study mainly examined the accuracy of collaborative malware detection when the weighting is set to the optimal setting based on VirusTotal results. To verify the actual detection accuracy against unknown malware, we used a malware repository service called MalwareBazaar. By inspecting malware registered in MalwareBazaar on the same day it was registered, we measured, in a simulated manner, the detection accuracy against unknown malware for which information is still insufficient.
The remainder of this paper is structured as follows. Section 2 describes the background for the research domain of malware detection. Section 3 provides the key concept, architecture, and detailed components of the proposed system. Section 4 reports the evaluation experiment for the proposed system, and Section 5 gives and discusses its results. Sections 6 and 7 discuss the differences and originality of our work and the limitations on interpreting the detection results. In Section 8, we summarize this paper.
The main contributions of this paper are as follows:
• An investigation of the actual detection accuracy of each anti-virus engine on VirusTotal [9] using actual malware samples. Combining multiple anti-virus engines could improve detection accuracy based on the observation that each engine has different strengths when it comes to the various malware categories.
• A demonstration of the potential of an optimal weighted-voting-based malware detection system to improve detection accuracy against unknown malware. We evaluated the overall recall when the weights for each anti-virus engine were assigned according to the above findings. We inspected the malware the same day it was first registered in the MalwareBazaar repository. This overall recall shows that false negatives for weighted voting decrease compared to the results of uniform weighting, even if the type of malware is new.
• A verification of whether the procedures for the malware detection experiments conducted in this study enable the evaluation of detection accuracy against unknown malware. We examined the same malware files under the same conditions after more than 20 days to confirm that detection accuracy improved. As a result, we clarified that the evaluation of detection accuracy for unknown malware can be simulated according to the above procedures.
Background
The signature-based detection method detects malicious programs by registering hashes or binaries of malware files as signatures in advance and matching them with samples at the time of inspection [10]. Therefore, the signature detection method can detect already known malware but fails to detect unknown malware. Because a delay occurs between the time when an anti-virus vendor discovers malware and the time when the vendor registers the signature, computers are at risk of being infected with the newly discovered malware during this delay.
Heuristic detection is a method to determine whether a file is malware by extracting and registering behavioral features, such as API call sequences and machine language opcodes, from known malware in advance and comparing the program code from the file analysis to see if anything behaves similarly to the registered features. Data mining and machine learning techniques are used to register the behavioral features of malware [11,12]. This detection method can deal with variants and unknown malware that are difficult to detect by the signature-based method. However, it cannot detect new types of malware that behave in an uncommon manner. Although the accuracy of detection is improving with the development of machine learning technology, it is still not realistic to detect malware using heuristic detection methods alone, and most heuristic detection methods are used to compensate for the shortcomings of signature-based detection methods.
The behavior detection method, also known as dynamic heuristic detection, involves executing the sample code in an isolated environment, such as a virtual sandbox, and monitoring its behavior and changes in the computer's internal environment to detect malicious entities [13,14]. However, malware developers attempt to circumvent behavior detection by incorporating features into their programs that hide the malware's malicious behavior. In addition, false positives and negatives are expected to occur due to the ambiguity of the decision criteria, and the method is also time-consuming and burdensome for the computers that deploy it. As with the heuristic detection method, this method is mostly used to compensate for the shortcomings of the signature detection method.
Collaborative security [5][6][7][8][15][16][17] is an approach in which multiple security systems and organizations share information and make security-related decisions. The advantages of this approach include rapid response to threats and optimization of each organization's resources by sharing information among multiple organizations and systems. In ref. [5], the author proposed a blockchain-based collaborative malware detection system that enables instantaneous and tamper-resistant detection by multiple anti-virus vendors. The anti-virus vendors join a consortium blockchain network and provide the detection results obtained by their own anti-virus engines. These results are used to calculate a maliciousness score to determine whether a sample file is malware based on majority-voting-style malware detection. Throughout the experiment, it was shown that majority voting using detection results from multiple anti-virus engines was effective in reducing the number of false positives. However, depending on the voting results at the time of malware identification, detection results output from some anti-virus engines may be ignored even though they are accurate, resulting in unacceptable detection omissions. George et al. [7] tackled the risk of malicious Android Package Kit files causing malicious behavior on mobile devices that use the Android platform. As a solution, the authors proposed a system that classifies the functions or behaviors of mobile malware and performs malware detection by prediction and majority voting using a machine learning model on a blockchain node. Experimental results showed that the proposed system has better detection accuracy than a single malware detector.
In recent research, ensemble-based attack detection methods that combine multiple machine- and deep-learning-based detectors have been proposed [18][19][20][21]. These studies use multiple detectors to overcome the weaknesses of machine and deep learning, such as a limited amount of training data, an imbalance between normal and abnormal samples, and the selection of optimal hyperparameter settings. In the simplest strategies, the results are ensembled based on majority voting [18] or veto voting [18,19]. Veto voting determines a file to be malware regardless of the number of votes, as long as even one vote claims the file is malware. The literature [20,21] makes the final malware detection based on the predicted class probabilities from multiple detectors. To ensemble the results from multiple detectors, Islam et al. [20] attempted weighted voting predictions based on the prediction accuracy of each detector. Xue and Zhu [21] used a soft voting strategy to ensemble the multiple detection results. The soft voting strategy is one of the majority voting strategies that averages the class probabilities from multiple detectors and obtains the answer according to the maximum average class probability. This strategy can be useful in an environment where each detector can give the class probabilities of all categories. Although these papers mentioned weighted majority voting, they do not clarify how the weights are calculated.
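To make the distinction between these strategies concrete, the following illustrative sketch contrasts majority voting, veto voting, and soft voting; the hard votes and class probabilities are hypothetical values, not results from the cited studies.

```python
# Illustrative comparison of the ensemble strategies mentioned above.
# hard_votes are 0/1 labels; probs are per-detector probabilities for (benign, malware).
def majority_vote(hard_votes):
    return int(sum(hard_votes) > len(hard_votes) / 2)

def veto_vote(hard_votes):
    # A single "malware" vote is enough to flag the file as malware.
    return int(any(v == 1 for v in hard_votes))

def soft_vote(probs):
    # Average the class probabilities and pick the class with the highest mean.
    mean_benign = sum(p[0] for p in probs) / len(probs)
    mean_malware = sum(p[1] for p in probs) / len(probs)
    return int(mean_malware > mean_benign)

hard_votes = [1, 0, 0]                        # one detector says malware, two say benign
probs = [(0.2, 0.8), (0.7, 0.3), (0.8, 0.2)]  # hypothetical class probabilities
print(majority_vote(hard_votes), veto_vote(hard_votes), soft_vote(probs))  # 0 1 0
```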
Aim of This Research
Our research focuses on how to reflect the reliability of each anti-virus engine in the value of its vote in voting-based collaborative malware detection. Most voting-based malware detection systems [5][6][7][8] are limited by not considering each anti-virus engine's detection performance, and they use votes with uniform weights to determine malware. The potential drawback is the possibility of overlooking malware when the system determines a malware sample as benign based on the votes of multiple classifiers with low detection accuracy, even if the results of detection engines with high detection accuracy judge it as malicious. According to previous studies, there will be a bias in the categories that each engine is more likely to detect, depending on which signatures the signature-based method has or which features the behavior-based method uses for analysis.
Architecture
As shown in Figure 1, the proposed method comprises one or more control servers and multiple anti-virus engines. The control server inspects the malware acquired from users through multiple anti-virus engines and makes the final decision based on those results. The anti-virus engines are assumed to be provided as Web APIs, or as applications bundled together if the vendor allows it. The system workflow follows the five steps illustrated in Figure 1: (1) When a user uploads a file to the proposed service, the control server receives it and (2) asks the anti-virus engines to inspect it via an API. (3) Each detection result from each anti-virus vendor is aggregated onto the control server, and (4) the server then makes the final decision according to the algorithm detailed in Section 3.3.3. Finally, (5) the system returns the output to the users.
Malware Detection Algorithm
The proposed system detects malware according to the following steps:
1. Express the detection results from multiple anti-virus engines (detailed in Section 3.3.1).
2. Assign an appropriate weight for the vote from each anti-virus engine (detailed in Section 3.3.2).
3. Calculate the maliciousness to make a final decision on malware detection (detailed in Section 3.3.3).
Table 1 shows the notation used in Sections 3.3.1-3.3.3.
Expression of Detection Results
The results of each detection should be expressed numerically and uniformly to enable the system to handle different result representations from multiple anti-virus engines. Each engine might output different anomaly scores: signature-based methods might present a discrete value of 0 or 1, while machine learning methods might express the maliciousness as a probability value between 0 and 100. Some anti-virus engines might also present additional information, such as malware categories.
In this study, the output of every engine was normalized to a value between 0 and 1 to obtain a voting value. The proposed system expresses benign results as "0" and malicious results as "1". If the value output by the engine does not fall within this range, it is normalized using the maximum value the engine can output.
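A minimal sketch of this normalization step is given below; the response formats and the helper function are illustrative assumptions about what heterogeneous engines might return, not part of the published prototype.

```python
# Sketch of normalizing heterogeneous engine outputs to a voting value in [0, 1].
def normalize_result(raw_value, max_value=1.0):
    """Map an engine's verdict to a voting value (0 = benign, 1 = malicious)."""
    if isinstance(raw_value, bool):
        return 1.0 if raw_value else 0.0
    value = float(raw_value)
    if value < 0:
        return 0.0
    if value > 1.0:
        value = value / float(max_value)   # e.g., a probability reported on a 0-100 scale
    return min(value, 1.0)

print(normalize_result(True))        # signature hit             -> 1.0
print(normalize_result(0))           # signature miss            -> 0.0
print(normalize_result(87, 100))     # ML score on a 0-100 scale -> 0.87
```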
Assignment of Appropriate Weights
In this study, we introduce a method of weighting the detection results output from anti-virus engines by applying an appropriate weight value according to the analysis history. The proposed system sets different priorities at multiple levels depending on the contents of the detection results output by each anti-virus engine.
To reflect the reliability of detection results, the proposed system uses a weight matrix W ∈ R^(C×K), where C is the total number of malware categories handled and K is the total number of active anti-virus engines. The malicious voting weight W_ck for category c of the k-th anti-virus engine takes one of three weighting levels w_high > w_mid > w_low. If an anti-virus engine provides a binary-category classification rather than a multi-category classification, the system assigns the same weight to all categories.
The three weighting levels are assigned by Equation (1) based on the detection rate R_ck. R_ck is a value that indicates the performance of an anti-virus engine, calculated in advance by attempting the detection process against a set of binary files whose true categories are already known.
The metrics that can be used as R_ck are not limited to any specific one, provided they relate to detection performance. The system administrator might adopt the recall to reduce false negatives, the F-value to balance precision and recall, or some other metric.
Unfortunately, this rule alone does not allow the weight to be calculated when the reported malware category is not included in the set of categories maintained by the system. In this case, the system takes the following measures to make the results as useful as possible. Some anti-virus engines using machine learning techniques might calculate an anomaly score s. If the anomaly score is included in the anti-virus engine's response, W_ck is assigned a weight value by Equation (2), where ŝ is s normalized to the range [0, 1]. If there is no anomaly score in the response, W_ck is assigned w_low.
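Equations (1) and (2) are not reproduced in the extracted text, so the sketch below is only an assumed reading of them: the detection rate R_ck is compared against the criteria D_high/D_low, and the normalised anomaly score against S_high/S_low, to pick one of the three weighting levels. Only the criteria names, the ordering w_high > w_mid > w_low and the fallback to w_low are taken from the text and Table 1.

def assign_weight(detection_rate=None, anomaly_score=None,
                  w=(3.0, 2.0, 1.0), d=(0.85, 0.65), s=(0.85, 0.65)):
    """Pick one of three voting weights (w_high, w_mid, w_low).

    detection_rate: R_ck measured in advance for this engine/category.
    anomaly_score:  normalised score s_hat, used when the reported category
                    is not in the system's category set.
    The >= thresholding below is our assumption about Equations (1)-(2).
    """
    w_high, w_mid, w_low = w
    if detection_rate is not None:
        d_high, d_low = d
        if detection_rate >= d_high:
            return w_high
        if detection_rate >= d_low:
            return w_mid
        return w_low
    if anomaly_score is not None:
        s_high, s_low = s
        if anomaly_score >= s_high:
            return w_high
        if anomaly_score >= s_low:
            return w_mid
        return w_low
    return w_low  # no usable detection history or anomaly score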
Calculation of the Anomaly Score for the Final Decision
The calculation of the anomaly score is inspired by the methods used in [5]. A sample file is determined to be malware based on whether the calculated anomaly score M (0 ≤ M ≤ 1) is greater than or equal to a predefined threshold T. The anomaly score is calculated regardless of the malware category to avoid missing attacks even when category classification accuracy is unstable. Although this score is the same as that in Fuji et al. [5], the difference is that weights are used when summing the votes V_m, as shown in Algorithm 1, which includes all three steps.
Algorithm 1 Proposed malware detection algorithm
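The body of Algorithm 1 is not reproduced in the extracted text, so the following Python sketch should be read only as an assumed outline of the decision step: votes in [0, 1] are weighted by each engine's area-of-expertise weight, aggregated into a score M in [0, 1] (here as a weighted average, which is our assumption), and compared with the threshold T.

def detect_malware(votes, weights, threshold=0.5):
    """Assumed outline of the weighted-voting decision in Algorithm 1.

    votes:     per-engine voting values in [0, 1] (see Section 3.3.1)
    weights:   per-engine weights W_ck for the reported category (Section 3.3.2)
    threshold: the predefined threshold T
    Returns True if the aggregated maliciousness M >= threshold.
    """
    total_weight = sum(weights)
    if total_weight == 0:
        return False
    m = sum(w * v for v, w in zip(votes, weights)) / total_weight  # M in [0, 1]
    return m >= threshold

# Example: two reliable engines flag the file, one weak engine does not.
print(detect_malware([1.0, 1.0, 0.0], [3.0, 3.0, 1.0]))  # -> True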
Experiment
Experiment 1: Area-of-Expertise of Each Anti-Virus Engine
To rank anti-virus engines in terms of detection results, we first investigated whether malware categories exist for which particular anti-virus engines have a detection advantage. It is necessary to determine the priority of weighting, which is indispensable for selecting the appropriate size of weights according to the record of an anti-virus engine's detection results. We collected malware files and their hashes, for which malware categories have already been identified, using the online malware repository service MalwareBazaar [22]. MalwareBazaar provides malware hashes and files for a target malware category by specifying the name as the search tag (Figure 2). We used this search tag as the correct label for malware categories.
In this evaluation experiment, we used the malware analysis web service VirusTotal [9] as multiple anti-virus engines to assess the same sample files and to acquire the results. VirusTotal provides online malware detection using more than 70 anti-virus engines. Users can inspect files and websites and search for inspection results using URLs and file hashes. We obtained the analysis results in JSON format using the VirusTotal API. The prototypes were created as Python scripts and used in the experiments.
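For readers unfamiliar with this workflow, a minimal Python sketch of such a hash-based lookup is shown below. It is not the authors' prototype; the endpoint path and JSON field names reflect the public VirusTotal v3 REST API as we recall it and should be checked against the current documentation, and the API key is a placeholder.

import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder, set from your own account

def fetch_engine_verdicts(sha256):
    """Fetch per-engine verdicts for a file hash from VirusTotal (v3 API)."""
    url = f"https://www.virustotal.com/api/v3/files/{sha256}"
    resp = requests.get(url, headers={"x-apikey": API_KEY}, timeout=60)
    resp.raise_for_status()
    report = resp.json()
    # Per-engine results are keyed by engine name in the file object.
    return report["data"]["attributes"]["last_analysis_results"]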
We then used the results to calculate and compare the average recall for each anti-virus engine's malware category, hoping to clarify the distribution of detection rates and the existence of each engine's area of expertise. The recall is calculated by tp/(tp + fn), where tp is the number of true positives and fn the number of false negatives.
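The per-engine average can be computed straightforwardly; a short sketch is given below, where the dictionary layout of the counts is our own illustrative choice.

def recall(tp, fn):
    # recall = tp / (tp + fn); returns 0.0 when no positives were seen
    total = tp + fn
    return tp / total if total else 0.0

def average_recall_per_engine(counts):
    """counts: dict mapping (engine, category) -> (tp, fn)."""
    per_engine = {}
    for (engine, _category), (tp, fn) in counts.items():
        per_engine.setdefault(engine, []).append(recall(tp, fn))
    return {engine: sum(vals) / len(vals) for engine, vals in per_engine.items()}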
Experiment 2: Zero-Day Malware Detection Performance
This experiment aimed to verify that weighted voting-based malware detection could improve the recall compared to non-weighted voting-based methods.
In our experiments, we collected malware samples using MalwareBazaar to verify the accuracy of both methods for detecting unknown malware. From a practical standpoint, evaluating detection performance on data that includes unknown malware files is desirable because new malware files are created daily. However, it is difficult to intentionally collect the latest unknown malware immediately after its appearance. Because new malware files are uploaded to the MalwareBazaar repository daily, the acquired malware would likely be the latest, including unknown malware.
We retrieved the malware samples registered in MalwareBazaar and inspected them on VirusTotal the same day (see Table 2). Each group had up to 250 samples because the VirusTotal API is limited to 500 calls per day and each sample consumes two of them: one to upload the file and one to download the corresponding detection result. Groups D and E had fewer samples than the others, as the number of malware samples registered on those days did not exceed 250. The detection result was evaluated by recall as described in Section 4.1. We compared the recall between the proposed system and that of Fuji et al. [5], which is based on the uniformly weighted voting method. Hereafter, Fuji et al.'s method is referred to as the conventional method. The threshold value T was set to 0.5, as in the environmental conditions in [5]. This allowed us to compare the recall between the proposed system with weighted voting and the conventional method with simple majority voting. The voting weights for each anti-virus engine were calculated using the recall scores observed in experiment 1. We empirically set the weights as w_high = 3, w_mid = 2, w_low = 1 and D_low = S_low = 0.65, D_high = S_high = 0.85.
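For reference, the empirically chosen parameter values stated above can be collected as a single configuration object; the dictionary layout is only an illustrative way of wiring them into a prototype and is not taken from the published code.

# Parameter values used in experiment 2, as stated in the text.
EXPERIMENT2_PARAMS = {
    "threshold_T": 0.5,
    "w_high": 3, "w_mid": 2, "w_low": 1,
    "D_high": 0.85, "D_low": 0.65,
    "S_high": 0.85, "S_low": 0.65,
}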
Experiment 3: Accuracy over Time
In this experiment, we verified whether the malware samples in experiment 2 were likely to contain unknown malware by checking whether they became detectable over time. As mentioned in Section 4.2, measuring accuracy against truly unknown malware is challenging. The results of experiment 2 alone cannot determine whether the samples were truly unknown to those anti-virus engines.
Therefore, we conducted detection experiments on the same malware again after a period of time had passed. We then measured the recall scores and compared them with those of experiment 2 to verify whether detection accuracy had improved. In general, malware that has only just been observed will be overlooked if it is a new or variant type; as time passes and more information is gathered, detecting attacks by the new malware should gradually become possible. We re-inspected the same samples as those used in experiment 2 more than 20 days after the first inspection to compare the change in accuracy over time. Thus, the samples used in this experiment were the same as those in experiment 2 (Table 2). Each sample was re-inspected on the date shown in Table 3.
Experiment 1
This experiment provided a sufficient basis for using the areas-of-expertise survey results as criteria for weighting votes. Table A1 in Appendix A shows the average malware recall for each anti-virus engine, calculated from the VirusTotal detection results for a total of six malware categories: Spyware, Backdoor, Trojan, Ransomware, Adware, and Worm. This result shows that even when the same malware was tested, differences in recall scores arose between anti-virus engines, indicating that each anti-virus engine has areas-of-expertise depending on the malware category.
Experiment 2
The experimental results showed that the proposed system was superior to the conventional method in terms of recall when the 7 days of samples were used as the detection target. Figure 3 shows the recall of the conventional method and the proposed method for each sample group. Our method improved recall by about 0.1 across all samples, indicating that weighted voting can reduce false negatives. Therefore, the proposed system detects malware more accurately than uniformly weighted voting and is also effective for detecting unknown malware that has not yet been registered.
Experiment 3
This experiment indicates that the results of experiment 2 can be interpreted as results obtained on malware that was unknown to some anti-virus engines. Figure 4 shows the recall at the first inspection date and at the re-inspection date for each sample group. The experimental results showed that the malware recall scores improved with time.
One factor contributing to this result was that the malware pattern databases of some of the anti-virus engines used for detection via VirusTotal were updated within the 20 days. Thus, this result indicates that the malware used in experiment 2 was undetectable in the early stages after its appearance.
Closely Related Research
Davies et al. [6] proposed a ransomware detection approach based on a cumulative malicious score built from various benign-or-malicious binary classification tests to determine the final outcome. Similar concepts can also be seen in the literature [7,23]. These studies did not consider the weights of each analysis, whereas this study provides the insight that detection accuracy can be improved by setting appropriate weights.
Many studies [24][25][26] used VirusTotal results to automatically generate correct labels for supervised learning. These studies mainly investigated whether VirusTotal detection results could provide correct labels for more accurate machine-learning training. They inspected malware samples obtained from public datasets [24], manually created datasets [25,26], or the distribute API on VirusTotal [25]. The distribute API allows the latest files uploaded to VirusTotal by users worldwide to be retrieved. We evaluated the detection accuracy against malware recently registered in MalwareBazaar and used the tags as correct labels. This means that our results are not based solely on VirusTotal judgments, but may approximate more general results from data labeled by annotators. However, it is important to interpret our results cautiously, as the labels may not always be tagged by reputable MalwareBazaar users.
Fung et al. [23] conducted the research most closely related to our study. Their approach determines maliciousness by using the classification history as feedback from multiple binary-classification detectors. Unlike their approach, our work investigated the detection performance across multiple malware categories.
Similar to our work, Cocca et al. [27] evaluated the accuracy of the anti-virus engines listed on VirusTotal on a large dataset collected from MalwareBazaar. They were interested in the possibility that detection results could differ between the anti-virus engines. However, their inspections were not necessarily conducted on the same day a sample was registered; thus, unlike our research, they did not investigate the detection performance on new types of malware.
Future Work and Limitations
Several caveats apply when interpreting our findings; we discuss them here as this study's limitations and future work.
The proposed system's evaluation experiments were conducted under the condition that all samples were malware. The false positive rate must still be evaluated by applying the system to benign files to demonstrate detection performance accurately. Generally, collecting unbiased benign files to evaluate false positives fairly is challenging. Appropriate methods should be considered, such as using manually created files as a ground-truth dataset for evaluation while allowing for some bias, as Zhu et al. [25] did. If our system is implemented using the VirusTotal API, each anti-virus engine's false positives are shown in VirusTotal statistics based on user reports [28]. These statistics could be useful when deciding which anti-virus engines to use in practice.
Furthermore, the detection performance reported in this study must be interpreted with caution as to whether it truly represents the accuracy of detecting unknown malware. Although the MalwareBazaar submission policy states, "Please refrain from uploading malware samples older than 10 days to MalwareBazaar", the freshness and correctness of the samples depend on whether uploading users adhere to this policy. Therefore, it is necessary to re-examine the best way to collect samples for future experiments.
Although this study used VirusTotal, some studies have pointed out that some scanners alter their verdict even when the same file is inspected [24,25]. Hence, only scanners known to be stable and suitable should be incorporated into the detection. For instance, Salem et al. [24] removed unstable scanners using a certainty score calculated by tallying the output labels from analyzing the same application multiple times. This approach also helps prevent adversarial scanners that consistently return irresponsible results from misleading the final decision in cooperative systems, even without VirusTotal.
Conclusions
Highly accurate malware detection methods are needed to cope with the ever-increasing number of malicious attacks causing damage to society. Although malware detection techniques such as signature-based, heuristic-based, and behavior-based methods have been widely used, different detection methods are suitable for different attacks. Given the many new types and variants that emerge, a single technique is not enough to detect all malware.
While collaborative malware detection can improve detection performance by integrating multiple detection results, these approaches can still overlook new types of malware. Most methods output the final result by majority vote, which means they miss attacks that are difficult to detect for many detection methods. Since variants and unknown malware are developed by attackers to evade detection, most anti-virus engines do not have sufficient evidence to detect new malware soon after it appears.
This study verified the detection accuracy of collaborative malware detection using VirusTotal and MalwareBazaar to achieve high accuracy even against unknown malware. We developed a prototype system that applies weights to the malware detection results of anti-virus engines based on their areas-of-expertise. Even if the malware is new, false negatives can be reduced by prioritizing dynamic analysis methods with high detection accuracy.
We evaluated the detection accuracy against unknown malware to confirm that the optimization of weights determined based on reliability could deal with new types of malware. By inspecting malware that appeared in MalwareBazaar on the same day, we measured the detection accuracy of unknown malware with insufficient information.
Through the evaluation, we confirmed the new system's superiority over uniformly weighted voting in terms of detection accuracy, owing to the suppression of false negatives. We also verified that the new system can improve detection accuracy over time by examining the transition of malware recall.
Figure 1. Architecture of the proposed system. See Section 3.2 for an explanation of the numbers in parentheses.
Figure 2. Screenshot of how malware samples were retrieved in MalwareBazaar for experiment 1. Users can search for malware tagged with XX by entering tag:XX. This figure shows the search result for "tag:Spyware" as shown in the red box at the top.
Figure 3. Recall scores of the proposed system and the conventional method [5].
Figure 4. Recall scores of the proposed system over time. The results shown in "observation" are the same as the "proposed system" values in Figure 3.
Table 1. Notation used in Section 3.
W: Weight matrix (W ∈ R^{C×K})
w_high, w_mid, w_low: Weights of vote, w_high > w_mid > w_low
D_high, D_low: Criterion for assigning vote weights for the detection rate
S_high, S_low: Criterion for assigning vote weights for the anomaly score
R_ck: Detection rate for category c from the k-th anti-virus engine (R_ck ∈ [0, 1])
ŝ: Normalized anomaly score s from the anti-virus engine
Table 3.
Malware samples used in experiment 3. The groups were the same as those used in experiment 2. | 2024-07-26T15:11:07.582Z | 2024-07-23T00:00:00.000 | {
"year": 2024,
"sha1": "44d66945e7b0af7702155e48c260656a263fb475",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/fi16080259",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cfba8e1faba5d4d5588c5a27e8f6611d10d0cfff",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
216334279 | pes2o/s2orc | v3-fos-license | Surgical management of caesarean scar pregnancy: a case report
Caesarean scar pregnancy (CSP) is a rare entity and can cause serious complications. There is a rising trend in the number of cases being reported possibly due to the increasing prevalence of caesarean sections. Increasing the use of imaging studies such as ultrasonography and magnetic resonance imaging (MRI) helps in detecting these cases. Early diagnosis would help avoid complications such as scar rupture and excessive hemorrhage, which may require a hysterectomy. This can endanger the woman’s life and also affect future fertility.
INTRODUCTION
Caesarean scar pregnancy (CSP) is a rare entity and can cause serious complications. 1 There is a rising trend in the number of cases being reported possibly due to the increasing prevalence of caesarean sections. Increasing the use of imaging studies such as ultrasonography and magnetic resonance imaging (MRI) helps in detecting these cases. Early diagnosis would help avoid complications such as scar rupture and excessive hemorrhage, which may require a hysterectomy. This can endanger the woman's life and also affect future fertility. 2 Patients who are vitally stable have more treatment options including conservative management. Hence, obstetricians/gynaecologists and radiologists must be highly vigilant of this potentially fatal complication. 3
CASE REPORT
A 25-year-old female presented to the gynecology outpatient department with a chief complaint of two months of amenorrhea, with on-and-off bleeding per vaginum for 10-12 days and severe lower abdominal pain for 2 days. She had a history of dilatation and curettage in the present pregnancy in view of an incomplete abortion.
In her obstetric history, she was G2P1L1 with one previous caesarean delivery. Her first caesarean section, performed 13 months earlier, was for fetal distress. General physical examination was normal. On per speculum examination, the cervix was normal and no discharge or bleeding per vaginum was seen. On bimanual examination, the cervix was tightly closed, the uterus was bulky, anteverted and tender, and the bilateral fornices were free with no tenderness.
On investigation, routine blood and urine investigations were normal. Her urine pregnancy test was positive. Transvaginal ultrasound revealed a single well-defined gestational sac in the uterus at the site of the caesarean scar with a poor choriodecidual reaction. The myometrial thickness between the sac and the urinary bladder was 1.6 mm, with possible adhesion between the uterus and the urinary bladder. Mild vascularity was seen around the gestational sac. A single embryo of 6 weeks 5 days was seen, cardiac activity was not seen, and the adnexa were normal (Figure 1). A diagnosis of caesarean scar pregnancy was considered. The patient was given the option of medical management with methotrexate or surgical management by laparotomy, and she underwent laparotomy. Intraoperatively, a soft, vascular mass was seen at the site of the previous scar (Figure 2). An incision was given over the bulge and the products of conception were gently removed. The sac was communicating with the uterine cavity; the edges of the scar tissue were excised and freshened, and gentle uterine curettage was done. Tissue was sent for histopathological examination and the diagnosis of caesarean scar ectopic pregnancy was confirmed.
DISCUSSION
The exact cause of CSP is still not clear. There is an early invasion of the myometrium, and it is presumed that this occurs through a microscopic tract in the cesarean section scar tissue. 4 The incidence has been reported to be 1:1,800 to 1:2,200 pregnancies. 4 In CSP, the gestational sac gets embedded within the fibrous tissue of the previous cesarean section scar. The gestational age at diagnosis ranged from five to 12.4 weeks (mean 7.5±2.5 weeks) and the time interval between the last cesarean and the CSP was six months to 12 years. 5 Many risk factors have been implicated in the development of CSP, including the number of cesarean sections, the time interval between the previous cesarean section and the subsequent pregnancy, and the indications for the previous cesarean section, but it is not clear whether these factors are directly related to CSP. 5 On review of the various case reports, it was noted that CSP was an incidental ultrasonography finding in some asymptomatic women, while others presented with mild painless vaginal bleeding. In a smaller percentage of patients, it was accompanied by mild to moderate abdominal pain. The uterus may be tender during examination if the CSP is in the process of rupture. A patient with a ruptured CSP may present in a state of collapse or hemodynamically unstable. 5 To reduce morbidity and fatal complications, it is important to diagnose a scar pregnancy as early and as accurately as possible. The diagnosis may be delayed until uterine rupture occurs or the woman goes into hypovolemic shock, and it may be difficult to differentiate between a miscarriage and a scar pregnancy due to similarities in presentation and examination findings. Transvaginal sonography remains an important tool in diagnosing CSP and could soon become the gold standard for the diagnosis of scar implantation. 4 The diagnostic criteria are as follows: an empty uterine cavity and an empty cervical canal; a gestational sac in the anterior part of the uterine isthmus; and an absence of healthy myometrium between the bladder and the gestational sac. 4 As it is a rare condition, there are no specific guidelines available for the management of CSP. The main aim of treatment of CSP is to prevent massive blood loss and conserve the uterus to maintain future fertility, women's health, and quality of life. 6 Management may be either medical or surgical. Various treatment options include dilatation and curettage and excision of trophoblastic tissues using laparotomy or laparoscopy, local and/or systemic MTX administration, bilateral hypogastric artery ligation associated with dilatation and evacuation under laparoscopic guidance, and selective uterine artery embolization (UAE) in combination with curettage and/or MTX injections. 1 In one reported case of CSP, management was done by injecting potassium chloride into the gestational sac together with a combination of local and systemic methotrexate administration. The patient was followed up by monitoring the beta human chorionic gonadotropin level until it reached the non-pregnant level and with ultrasound and MRI until complete resolution of the pregnancy sac. 7 In cases with a viable fetus, local injection of potassium chloride and hyperosmolar glucose or crystalline trichosanthin will act as an embryocide. 3 Jurkovic et al. recommended surgical repair of the scar either as a primary treatment or as a secondary operation after the initial treatment in women who desire further pregnancies. 3 This could decrease the risk of recurrence of CSP.
Once the gestational mass is surgically excised, it has been noted that hCG returns to normal much more quickly, within one to two weeks. Various case reports of patients with caesarean scar ectopic pregnancy, even in the absence of bleeding, support surgical management as an option. 8 This includes elective laparotomy and excision of the gestational mass. One benefit of surgery is less recurrence, because of the resection of the old scar with a new uterine closure; another is a shorter follow-up period. 4 In another study of caesarean scar pregnancy cases, surgical excision of the scar was considered a key management step and helpful in preventing recurrence. 9 Uterine artery embolization (UAE) followed by dilatation and curettage to reduce bleeding has been used in some cases. UAE requires less follow-up compared with methotrexate. 10 High-intensity focused ultrasound combined with suction curettage under hysteroscopic guidance was recently reported to be a safe and effective modality of treatment when the gestational period is more than eight weeks. 11
CONCLUSION
Caesarean scar ectopic pregnancies can have very fatal and poor outcomes, including uterine rupture, massive haemorrhage and maternal death. Thus, it is important that early and accurate diagnosis of caesarean scar pregnancy is obtained in order to avoid complications and preserve fertility. Its incidence is rising due to the increasing incidence of caesarean sections. The liberal use of transvaginal ultrasound to assess early pregnancies helps early diagnosis and planning of the management. If the condition is not diagnosed, a simple gynaecological procedure such as a dilatation and curettage may end up with massive hemorrhage and unexpected complications. Every pregnant woman with a past history of a caesarean section should have a careful ultrasonographic assessment of the previous scar. As there are no evidence-based recommendations available, clinicians will have to depend on the available case reports and counsel the women accordingly on the various treatment options available to make an informed choice. Consultants should be involved in patient counselling and planning the further management of such cases. | 2020-04-02T09:07:16.347Z | 2020-03-25T00:00:00.000 | {
"year": 2020,
"sha1": "101ab38ab1fd490c1bd6c963d1c10ac5feef2d34",
"oa_license": null,
"oa_url": "https://www.ijrcog.org/index.php/ijrcog/article/download/8125/5460",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d9388315548f0355e7ac7faa3b54a374e56c256a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55754463 | pes2o/s2orc | v3-fos-license | The nature of separator current layers in MHS equilibria I. Current parallel to the separator
Separators, which are in many ways the three-dimensional equivalent to two-dimensional nulls, are important sites for magnetic reconnection. Magnetic reconnection occurs in strong current layers which have very short length scales. The aim of this work is to explore the nature of current layers around separators. A separator is a special field line which lies along the intersection of two separatrix surfaces and forms the boundary between four topologically distinct flux domains. In particular, here the current layer about a separator that joins two 3D nulls and lies along the intersection of their separatrix surfaces is investigated. A magnetic configuration containing a single separator embedded in a uniform plasma with a uniform electric current parallel to the separator is considered. This initial magnetic setup, which is not in equilibrium, relaxes in a non-resistive manner to form an equilibrium. The relaxation is achieved using the 3D MHD code, Lare3d, with resistivity set to zero. A series of experiments with varying initial current are run to investigate the characteristics of the resulting current layers present in the final (quasi-) equilibrium states. In each experiment, the separator collapses and a current layer forms along it. The dimensions and strength of the current layer increase with initial current. It is found that separator current layers formed from current parallel to the separator are twisted. Also the collapse of the separator is a process that evolves like an infinite-time singularity where the length, width and peak current in the layer grow slowly whilst the depth of the current layer decreases.
Introduction
The importance of the fundamental energy release mechanism called magnetic reconnection is made apparent by the key role it plays in many plasma processes on the Sun and other stars (e.g., coronal mass ejections, coronal heating, solar and stellar flares) and in the magnetosphere (e.g., powering flux transfer events and substorms) (e.g., Biskamp 2000;Priest & Forbes 2000). Magnetic reconnection permits the restructuring of the magnetic field enabling changes in magnetic topology or quasi-topology to occur. Reconnection converts magnetic energy to thermal energy, kinetic energy (bulk plasma motions) and fast particle energy. The partitioning of magnetic energy into these three forms depends on the nature of the reconnection itself and the properties of the surrounding plasma.
Reconnection in two dimensions (2D), which was first proposed as a mechanism for flares in the 1940's (Giovanelli 1946;Hoyle 1949), has been studied in detail since the 1950's (e.g., Parker 1957;Sweet 1958;Biskamp 1982;Priest & Forbes 1986;Biskamp 2000;Priest & Forbes 2000). More recently, three dimensional (3D) reconnection has been explored and has proved to be much more complex than 2D reconnection due to the multitude of possible reconnection sites, the different types of reconnection (null-point, separator, quasi-separator) and the increased intricacy of the 3D magnetic skeleton.
The lowest energy state of any magnetic field, B, in a closed volume with the normal component imposed on the boundary is its potential (current-free) field, in which ∇ × B = 0. Electric currents, j, will always be present in any magnetic field that is not at its lowest energy. Furthermore, the magnetic Reynolds number, R_m = vL/η, where v and L are typical velocity and length scales in the system and η is the magnetic diffusivity, is normally much larger than unity and represents the ratio of the advection and diffusion terms in the induction equation. Thus reconnection, which requires the magnetic field to be able to diffuse and, hence, R_m ≲ 1, requires short length scales. Thus current concentrations, in which there are steep gradients in the magnetic field over short length scales, are sites in the solar atmosphere (and throughout the Universe) where reconnection is likely to occur. In this paper, we concern ourselves with the properties of 3D current concentrations formed at separators through the non-resistive relaxation of the magnetic field, rather than the nature of the reconnection that occurs within them.
In 2D, current layers are known to form following the collapse of 2D null points. This has been studied in detail both in the zero-beta approximation (e.g., Green 1965; Syrovatskii 1971; Somov & Syrovatskii 1976; Craig 1994; Bungey & Priest 1995) and in the non-zero beta approximation (e.g., Rastätter et al. 1994; Craig & Litvinenko 2005; Pontin & Craig 2005). The key features associated with these types of current layers are (i) a collapse of the separatrices through the null forming cusp regions, (ii) enhanced current about the null point and along the separatrices extending beyond the ends of the main current layer and (iii) higher density plasma within the cusp regions than outwith them. Klapper (1998) showed analytically that in 2D, in the zero-beta approximation, a current layer at a null point will never reach equilibrium, since the collapse time of the null is infinite. This fact is still true when the plasma beta is non-zero (Craig & Litvinenko 2005; Pontin & Craig 2005). Studies of the magnetohydrodynamic (MHD) collapse of 2D nulls in the absence of resistivity showed that a state may be reached in which the magnetic field and plasma are close to equilibrium everywhere save within the highly localised current accumulation itself.
In 3D, current layers are also likely to form. In general, however, most models of 3D reconnection have considered driven reconnection. These experiments typically start from a potential field and initiate reconnection by driving at the boundaries at either a fast or slow rate. Aspects of the nature of the reconnection in these models are then dependent not only on the initial magnetic field, but also heavily on the rate and nature of the boundary drivers. 3D reconnection can occur at null points, as does 2D reconnection, and this has been studied under different driven regimes (e.g., Craig et al. 1995; Craig & Fabling 1996; Priest & Titov 1996; Pontin et al. 2004; Pontin & Craig 2005; Pontin et al. 2005a,b; Pontin & Galsgaard 2007; Masson et al. 2009; Priest & Pontin 2009; Pontin et al. 2011; Masson et al. 2012; Pontin et al. 2013). 3D reconnection can also occur in the absence of null points, for instance, at separators (e.g., Galsgaard & Nordlund 1996; Priest & Titov 1996; Longcope & Cowley 1996; Longcope 2001; Parnell et al. 2010a,b) or at quasi-separatrix layers (e.g., Priest & Démoulin 1995; Démoulin et al. 1996, 1997; Aulanier et al. 2006; Wilmot-Smith et al. 2009b).
It is, however, generally believed that in plasma systems, such as the solar atmosphere or Earth's magnetosphere, the stressing of magnetic structures due to the slow driving of magnetic field lines leads to a build up of free magnetic energy (the excess energy above potential) associated with electric currents. If topologically or geometrically complex 3D magnetic fields are stressed then the equilibria that are formed will have current layers located, for example, where the field-line mapping is discontinuous or has strong gradients.
Eventually, the length-scales within these current layers become sufficiently short that the magnetic Reynolds number is less than or equal to one, allowing reconnection to occur (initiated, for instance, by micro-instabilities). However, practically all models of reconnection (whether simulations of solar flares or experiments studying aspects of the fundamental physics of reconnection), such as many of those mentioned previously, start from an initial potential magnetic field, i.e. one with no free energy.
Our ultimate aim is to study spontaneous reconnection, as opposed to driven reconnection, in systems which have excess magnetic energy stored in current layers that are in equilibrium with their surroundings. In order to study this type of reconnection, the initial magnetohydrostatic (MHS) equilibria with current layers need to be created. Non-zero beta MHS equilibria involving a current layer situated at a 2D null have been studied by Craig & Litvinenko (2005) and Pontin & Craig (2005), with the resulting spontaneous reconnection studied by Fuentes-Fernández et al. (2012a,b). Current layers in 3D systems have also been considered. These include current layers generated by (i) the shearing of uniform magnetic fields (Longbottom et al. 1998; Bowness et al. 2013), (ii) the tangling of multiple flux tubes (Wilmot-Smith et al. 2009a,b), (iii) at 3D null points (Pontin & Craig 2005; Fuentes-Fernández & Parnell 2012, 2013), (iv) current layer formation due to ideal MHD instabilities (Browning et al. 2008) and (v) at quasi-separatrix layers (Titov et al. 2003; Aulanier et al. 2005; Wilmot-Smith et al. 2009c). However, MHS equilibria with current layers situated on magnetic separators have never been studied before. So, in this paper, we study these equilibria. In a follow-up paper, we will look at the nature of the spontaneous reconnection that occurs in these single-separator current-layer systems.
Although 3D null points are in some ways the natural equivalent to 2D nulls, in other ways 3D magnetic separators are their equivalent. Generic separators are special field lines that lie along the intersection of two separatrix surfaces, such as bald-patch separatrices or fan surfaces. In the latter case, the separators run from one 3D null point to another. Thus, like 2D nulls, separators lie at the boundary between four topologically distinct flux domains (Priest & Titov 1996; Longcope & Silva 1998). Also, perpendicular cuts across a separator reveal that the projection of the magnetic field lines in these planes can be hyperbolic or elliptic, which is analogous to the magnetic field structure about a 2D X-point or O-point, respectively (Parnell et al. 2010a).
When separator reconnection occurs, flux from two oppositely situated flux domains moves into the other two domains, which is akin to what is observed at 2D null-point reconnection. However, in 3D this reconnection does not involve field lines that reconnect one pair at a time to create a new pair of field lines. Instead, whole surfaces counter-rotate about the separator reconnecting a continuum of field lines (but containing a finite amount of flux). Furthermore, in the same way that 2D reconnection leads to a discontinuous mapping of the field lines, so also separator reconnection is associated with a discontinuous field line mapping. Numerical experiments Parnell et al. 2010bParnell et al. , 2011 and analytical work (Wilmot-Smith & Hornig 2011) reveal that separator reconnection is quite different from 3D null-point reconnection. The field lines reconnect at some location along the separator, where the parallel electric field (parallel current) is enhanced, away from the null points.
Magnetic separators have been recognised as important locations of 3D reconnection for many years, as current builds up easily along them due to their special situation at the boundary between topologically distinct domains (e.g., Sonnerup 1979;Lau & Finn 1990;Priest & Titov 1996;Parnell et al. 2011). However, the nature of current accumulations in the vicinity of separators has not yet been properly investigated.
Here, we study the MHS equilibria that are created through the non-resistive MHD relaxation of a non-potential magnetic field containing two 3D null points connected by a separator. As we will show, these equilibria involve current layers embedded in potential magnetic fields exactly as is found in the collapse of both 2D (e.g., Pontin & Craig 2005;) and 3D nulls (e.g., Fuentes-Fernández & Parnell 2012, 2013). The numerical model, Lare3d, used to perform the relaxation, is detailed in Sect. 2. The setup and properties of the initial magnetic field and plasma are described in Sect. 3. A series of experiments have been performed which all start from the same initial setup, save for the initial current which differs. The final equilibrium configurations of all the experiments have current accumulations with the same basic nature and character-istics. Thus, Sect. 4 highlights the nature of the final (quasi-) equilibrium state achieved after the relaxation in one particular example experiment. Sect. 5 then considers the effects of varying the initial current and compares the characteristics of the current layers formed in each relaxation. We conclude with a discussion of our findings in Sect. 6 which is followed by an appendix (Appendix A) where further details of the initial non-potential magnetic field used for the relaxation experiments can be found.
Numerical model
We are interested in determining the current accumulations that occur due to the non-resistive MHD relaxation of an initial nonpotential magnetic field involving a separator. Our focus is not on the process of the MHD relaxation (although this is discussed briefly, in Sect. 4.2), but on the characteristics of the final MHS equilibria. The initial system we start from is not in force balance so, as soon as it starts to relax, waves are generated. These waves are damped due to the presence of viscosity and so, to generate our magnetic equilibria, we used a 3D non-resistive MHD code, namely, Lare3d (Arber et al. 2001).
Lare3d is a staggered Lagrangian re-map code, in which the scalar quantities (ρ -density, -internal energy per unit mass and p -pressure) are defined at the cell centres and the magnetic field components, B, are defined on the cell faces to help maintain ∇ · B = 0. This is done using the Evans and Hawley constrained transport method for the magnetic flux (Evans & Hawley 1988). Also, the velocity components, v, are staggered with respect to the pressure and magnetic field to prevent checkerboard instabilities and so are placed at the cell vertices (Arber et al. 2001). Lare3d works in two steps: (i) a LAgrangian step, where the MHD equations are solved in a frame that moves with the fluid; (ii) a REmap step, which is purely a geometrical mapping of the Lagrangian grid back onto the original Eulerian grid. Lare3d solves the normalised MHD equations and employs the following normalised quantities (using subscript n to denote the normalising factors and hats to represent the dimensionless variables used by the code) where x is the length. These three normalising factors then define the following normalising constants for the velocity, pressure, current and internal energy per unit mass, respectively, where µ 0 is the magnetic permeability which is equal to 1 in dimensionless units. From these equations the plasma beta can be written as Therefore, in the absence of gravity and resistive effects, the standard normalised MHD equations used in Lare3d (with the hats dropped from the normalised quantities for ease of reading) are where F ν = ρν(∇ 2 v+ 1 3 ∇(∇·v)) is the viscous force (where ν is the coefficient of kinematic viscosity) and H ν = ρν( 1 2 e i j e i j − 2 3 (∇·v) 2 ) is the viscous heating term (where e i j = (∂v i /∂x j ) + (∂v j /∂x i ) is the rate of strain tensor). To provide closure to these equations we require an equation of state: = p/(ρ(γ − 1)), where γ = 5/3 is the ratio of specific heats.
The MHD equations that are usually solved by Lare3d contain resistive terms. However, to remove resistive effects from our MHD relaxation experiments, we remove these terms by setting the resistivity η in the code to zero. Of course, though, all numerical codes suffer from numerical resistivity. In our code, we estimate the background numerical resistivity to be ≈ 0.0002. This is very small and we find that the numerical diffusion in our runs is negligible (see Sect. 4 for a detailed discussion on this issue). The viscosity in the code is set to ν = 0.01.
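The display equations referred to in the preceding paragraphs were lost in extraction. As a hedged reconstruction from the surrounding definitions, the standard Lare3d normalisations and the ideal (non-resistive) normalised MHD equations take the following form; these should be checked against the original paper and Arber et al. (2001).

\[
x = L_0\,\hat{x},\qquad \mathbf{B} = B_n\,\hat{\mathbf{B}},\qquad \rho = \rho_n\,\hat{\rho},
\]
\[
v_n = \frac{B_n}{\sqrt{\mu_0\,\rho_n}},\qquad P_n = \frac{B_n^2}{\mu_0},\qquad
j_n = \frac{B_n}{\mu_0 L_0},\qquad \epsilon_n = v_n^2,\qquad
\beta = \frac{2\mu_0\, p}{B^2},
\]
\[
\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0,\qquad
\rho\left(\frac{\partial\mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v}\right)
 = \mathbf{j}\times\mathbf{B} - \nabla p + \mathbf{F}_\nu,
\]
\[
\frac{\partial\mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}),\qquad
\rho\left(\frac{\partial\epsilon}{\partial t} + (\mathbf{v}\cdot\nabla)\epsilon\right)
 = -p\,\nabla\cdot\mathbf{v} + H_\nu,\qquad
\mathbf{j} = \nabla\times\mathbf{B}.
\]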
The dimensions of the box are −L_0 to L_0 in the x and y directions and −L_0 to L_0 + L in the z direction, and the resolution of the grid is 512³. We have chosen this grid resolution because it is large enough to allow the experiments to evolve for a sufficient length of time for a current layer to form. We were, however, restricted in how large a grid resolution was feasible by, e.g., the memory and running time of the experiment.
The boundary conditions are chosen to prevent energy leaving or entering the domain. Thus the magnetic field is line tied at the boundaries and the scalar quantities (internal energy per unit mass and density) have a maximum or minimum on the boundary: i.e., the derivatives across the boundary of all three components of B, ρ and are set to zero. The velocity (all components) is set to 0 on all the boundaries.
Magnetic field
In our experiments, we study the nature of the current layer created in a system involving a single separator, which naturally also has two null points with associated spines and fans. We do not wish to influence where the current layer(s) form, therefore, we start initially with a uniform current throughout our domain so that during the relaxation the current has the freedom to choose where it collects: at the nulls, spines, fans or the separator.
The initial magnetic field we use contains just two null points connected by a single separator. It can be written analytically; the details of how this analytical field was formed can be found in Appendix A. With a suitable choice of the parameters a, b, c, L and j_sep (see Appendix A), this magnetic field contains two 3D nulls: a positive null positioned at (0,0,0) orientated with its fan in a y-z plane and a negative null positioned at (0,0,L) with its fan in an x-z plane. A separator lies between the nulls along the z-axis and the electric current associated with this magnetic field is j = (B_0/(µL_0)) (0, 0, j_sep). Hence, the current is uniform and runs parallel to the z-axis throughout the domain. In all experiments discussed in this paper the scaling factors B_0 and L_0 are set to one, a = 0.5, b = 0.75, c = 0.25, L = 1 and j_sep² < 6 (see Appendix A). A different value of j_sep is imposed in each experiment.
Fig. 1: Contour plot of the initial plasma beta in a cut perpendicular to the z-axis (separator) at z = 0.5 for the main experiment with j_sep = 1.5. Over-plotted are the intersections of the lower and upper nulls' separatrix surfaces with the z = 0.5 plane in the initial state (pale-blue/pink dashed lines, respectively).
Plasma
All the experiments discussed here have an initial uniform density of ρ_0 = 1.5, an initial internal energy per unit mass of ε_0 = 1.5 and an initial velocity of v_0 = 0 (where the subscript "0" indicates initial normalised values). From the equation of state we know that p = ρε(γ − 1), which implies that the initial normalised pressure is p_0 = 1.5. Although the pressure is uniform throughout the domain, the magnetic field strength varies. Initially the mean plasma beta in the domain is β = 7.8. Half way along the separator (at x = y = 0, z = 0.5), due to the close proximity to the nulls where the plasma beta is infinite, the plasma beta is high, β = 192. Fig. 1 displays contours of the plasma beta in a cut across the separator and fans at z = 0.5 in the initial state. The plasma beta is large in the vicinity of the separator (at x = y = 0) but falls off rapidly away from this region. Separators embedded in regions with such high plasma betas are either cluster separators, which connect two null points within a null cluster (Parnell et al. 2010b), or separators in planetary magnetospheres (e.g., Dorelli et al. 2007). Plasma betas of between 1 and 10 have been determined in the Earth's magnetosphere, but obviously these will be much higher near null points (Trenchi et al. 2008).
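As a consistency check on the quoted midpoint value (our own arithmetic, assuming the on-separator field strength is |B| = a z(L − z), as implied by the Alfvén-speed expression quoted below, with B_0 = L_0 = 1 and a = 0.5):

\[
\beta\big|_{z=0.5} = \frac{2 p_0}{|B|^2}
 = \frac{2\times 1.5}{\left[\,0.5\times 0.5\,(1-0.5)\,\right]^2}
 = \frac{3}{0.125^2} = 192 ,
\]

in normalised units with \( \mu_0 = 1 \).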
The value of our plasma beta is high due to the value of the strength of the initial pressure. This high pressure ensures only two nulls exist in the model during the entire relaxation process. It is possible to achieve a lower plasma beta by either increasing the initial length of the separator, L, or increasing the magnetic field scaling factor, B 0 , but both methods can lead to an increase in numbers of nulls soon after the relaxation begins. Alternative scenarios will be considered to seek a low-plasma beta separator current layer in future work.
In this paper, we have normalised the times to the fast-mode crossing time along the separator, from one null to the other, as follows. The fast-mode crossing time is given by t_f = ∫₀^L dz / √(c_s² + c_A(z)²), where c_s is the sound speed (√(γ(γ − 1)ε)) and c_A(z) is the Alfvén speed (√(B(z)²/ρ) = B_0 a(Lz − z²)/(L_0 √ρ)). Initially, the sound speed is uniform throughout the domain with c_s = √(5/3) and the magnetic field along the separator is known analytically; hence, we find the value of t_f integrated along the separator to be t_f = 0.77 (using the magnetic field parameters detailed previously).
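A short numerical check of this crossing time (our own sketch; the integral form is inferred from the phrase "integrated along the separator" and should be checked against the original paper) reproduces the quoted value with the stated parameters:

import numpy as np

gamma, eps0, rho0 = 5.0 / 3.0, 1.5, 1.5
a, L, B0, L0 = 0.5, 1.0, 1.0, 1.0

z = np.linspace(0.0, L, 10001)
cs2 = gamma * (gamma - 1.0) * eps0                     # sound speed squared (uniform)
cA2 = (B0 * a * (L * z - z ** 2) / L0) ** 2 / rho0     # Alfven speed squared on the separator
integrand = 1.0 / np.sqrt(cs2 + cA2)
dz = z[1] - z[0]
t_f = np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dz  # trapezoidal rule
print(round(float(t_f), 2))                             # ~0.77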
The value of the fast-mode crossing time was also calculated from both nulls along the shortest paths to the domain boundaries. We found the crossing times from the lower null to the nearest x, y and z boundaries to be t_f = 0.71, t_f = 0.67 and t_f = 0.74, respectively. Similarly, for the upper null, the fast-mode crossing times from this null to the nearest x, y and z boundaries are t_f = 0.65, t_f = 0.74 and t_f = 0.74, respectively.
Initial null point properties
We now look more closely at the initial magnetic field with the parameters set to B_0 = 1, L_0 = 1, a = 0.5, b = 0.75, c = 0.25 and L = 1, as previously stated. Information about the nature of the field can be determined by considering the eigenvalues and eigenvectors associated with the local (linear) field about each null. The eigenvalues can be written in terms of α = 6.25 − j_sep², where the subscripts "s, f1, f2" refer to the spine, minor and major separatrix-surface eigenvalues, respectively, and "l, u" refer to the lower (positive) and upper (negative) nulls, respectively; the associated eigenvectors follow from the linear field about each null. In this paper, experiments with j_sep = 0.75, 1.0, 1.25, 1.5 and 1.75 are investigated. All values of j_sep are chosen such that both nulls are initially improper radial nulls, i.e., the eigenvalues of the fans are real and distinct (λ_f1l ≠ λ_f2l and λ_f1u ≠ λ_f2u; see Parnell et al. (1996) for more details on the nature of 3D nulls).
The main case detailed in this paper has j_sep = 1.5; the corresponding eigenvalues and eigenvectors of its two nulls follow from the expressions above. The magnetic skeleton of this initial configuration (the main experiment described in this paper), which was found using the methods described in Haynes & Parnell (2010), is shown in Figs. 2a and 2b. The separator (green) linking the two nulls is formed from the intersection of the two separatrix surfaces (pink or pale-blue field lines). These surfaces are seen to twist gently about the x = y = 0 line (i.e. the separator); thus the angle between the two separatrix surfaces varies along the separator.
Results
There is an initial non-zero Lorentz force in all the initial magnetic fields examined in this paper, which acts in planes perpendicular to the separator, causing the separatrix surfaces to fold towards each other as soon as the relaxation begins. In Fig. 3, the size and strength of this force in a plane perpendicular to the separator, mid way along its length, is plotted for the main experiment, where j sep = 1.5.
In this paper, we show that the collapse of the separatrix surfaces about the separator is analogous to the collapse of separatrices about a 2D null point , and that a current layer is formed at the separator. However, the collapse also causes gradients to develop in the plasma pressure which provide a counter force slowing the collapse and eventually creating an equilibrium.
Initially there are no pressure gradients to balance the nonzero Lorentz force so, as soon as the experiment starts, waves are launched and the system evolves under non-resistive MHD (i.e., there is no reconnection and so there is no transfer of flux between the four flux domains about the separator). The system relaxes ideally, save for the damping of waves via viscous effects. Since the magnetic field is complex, it is not possible to form an equilibrium without current layers. Current layers form at topological or geometrical features, so in this model they can form at either the 3D null points, the separatrix surfaces, spines or the separator. We find that the MHD relaxation causes current accumulations to form along the separator, on the separatrix surfaces close to the separator (Figs. 2c and 2d) and on the separatrix surfaces close to the boundaries at the top and bottom of the box. The latter form due to the boundary conditions which prevent the separatrix surfaces moving on them.
The system evolves to what appears to be an equilibrium state by t = 51.67 t_f for the main experiment. In the rest of this section, we focus on this experiment, where j_sep = 1.5. We consider the structure of the magnetic skeleton of the final equilibrium field and the appearance of the current accumulation (Sect. 4.1), the evolution of the energetics during the relaxation (Sect. 4.2) and the force balance in the final state (Sect. 4.3).
Magnetic skeleton
The structure of the final equilibrium magnetic field is described by its magnetic skeleton. In Figs. 2c and 2d the equilibrium field's magnetic skeleton is shown along with the current layer that has formed (purple isosurface of the current parallel to the magnetic field, j_∥). It is clear that a current layer has been created along the separator (the details of which are discussed in Sect. 5.1). The profile of this current layer in the z = 0.5 plane perpendicular to the separator is shown in Fig. 4. Contours of |j| highlight the strong current layer about the separator and the enhanced current along the separatrix surfaces. Everywhere else the current is zero, indicating that the current layer is embedded in a potential magnetic field. The solid and dashed white lines here are plotted through the depth and across the width of the current layer, respectively, in this cut at z = 0.5. During the whole relaxation process only two nulls are found at each time step and the topology of the system does not change (which implies that no numerical dissipation has occurred). At the start of the relaxation the nulls move slightly further away from each other along the z-axis, but then come back towards each other briefly before slowly moving apart along the z-axis towards the end of the relaxation. The rate of movement is 1.1×10⁻³ L_0/t_f just after the initial oscillations die down and 4.3×10⁻⁴ L_0/t_f at the end of the relaxation. This very slow continuous lengthening of the separator, after the system appears close to equilibrium, suggests that the system is asymptotically approaching an equilibrium, as is seen in the formation of current layers at 2D nulls (e.g., Klapper 1998; Craig & Litvinenko 2005) and 3D nulls (e.g., Fuentes-Fernández & Parnell 2012, 2013). This asymptotic behaviour is considered in more detail in Sect. 5.6. From the 3D images seen in Fig. 2, the spines and separatrix surfaces associated with these nulls do not appear to have changed greatly between the initial state and the final equilibrium, because they are line-tied at the boundaries. However, we know that the current has changed considerably within the domain since, initially, the current is uniformly distributed, but in the equilibrium state it has accumulated about the separator; therefore, the magnetic field must have changed. In order to visualise the changes in the magnetic field, we have plotted (along with the Lorentz force, which has already been discussed) the magnetic skeletons of the initial and equilibrium fields in a 2D cut at z = 0.5 in Fig. 3. This cut reveals that the separatrix surfaces have warped from their original fairly straight formation (dashed pale-blue/pink lines) to lines that now have a point of inflection at the separator, such that they run almost concurrently in the vicinity of the separator (solid pale-blue/pink lines). Indeed, this 2D cut perpendicular to the separator reveals that the separatrix surfaces form cusps exactly like those seen in the collapse of the magnetic field about a 2D null (e.g., Craig & Litvinenko 2005; Pontin & Craig 2005). The cusp regions form due to the nature of the pressure which, initially uniform, is changed through the relaxation. This is discussed in Sect. 5.4.
From the isosurfaces of the current layer shown in Figs. 2c and 2d we can see a number of interesting characteristics, including the fact that it is twisted and that it has the beginnings of "wing-like" features where the current enhancement extends out along one or both separatrix surfaces. These extended enhancements along the separatrix surfaces were also found in the current layers formed from the collapse of a 2D null. The isosurface of current seen here is similar in shape to that of a hyperbolic flux tube about a quasi-separator, as described by Titov et al. (2003): its cross-sections in cuts perpendicular to the separator start essentially as thin ellipses at one end aligned with the separatrix surface of the nearest null, then become X-like with narrow enhanced-current wings along both separatrix surfaces, before returning to thin ellipses aligned with the separatrix surface of the null at the other end. In Sect. 5.1, the characteristics of the current layer are studied in detail.
Energetics
Fig. 5 displays the kinetic, magnetic, internal and total energies along with the cumulative viscous heating and adiabatic terms (dashed lines) integrated over the whole 3D domain as a function of time. All energies, except the kinetic energy, have been shifted on the y-axis for representational purposes. In particular, the internal and magnetic energies have been moved such that the initial internal energy is plotted at the same point on the y-axis as the final magnetic energy (lower dotted line). The cumulative adiabatic heating curve also starts from this same point, whilst the cumulative viscous heating curve starts from the point on the y-axis where the shifted cumulative adiabatic heating curve ends (dot-dashed line). The upper dotted line indicates the value of the shifted initial magnetic energy in this plot which coincides with the value of the final internal energy.
As required by the closed boundary conditions, the total energy is conserved throughout the run, with a standard deviation of just 0.002% of the mean, indicating that any energy losses are negligible. Looking more closely at the energy curves, we can see that the difference between the initial magnetic energy and the final magnetic energy is the same as the difference between the initial and final internal energies (as indicated by the dotted lines on Fig. 5), which it must be since the initial and final kinetic energies are zero. The conversion of energy from magnetic to internal (via kinetic energy) occurs through one of two processes: adiabatic or viscous heating (Eq. 7). In Fig. 5, we show that the sum of the cumulative adiabatic and cumulative viscous heating terms equals (to within numerical error) this change in magnetic/internal energy. The fact that these two heating terms can account for all the magnetic energy lost during the relaxation indicates that any magnetic reconnection caused by numerical diffusion in the system is negligible.
As we have already said, the collapse of the initial state, which is not in force balance, creates fast magnetoacoustic waves and, hence, kinetic energy. As these waves bounce across the box they compress or expand the plasma, giving rise to either adiabatic heating or cooling, respectively. The oscillatory behaviour in the magnetic, kinetic and internal energies, as well as in the cumulative adiabatic heating term, are clear signatures of this behaviour. (Note, the periods of these oscillations confirm that the waves in the system are fast magnetoacoustic waves.) At the same time, due to the presence of viscosity within the system, these waves are damped, giving rise to viscous heating. The cumulative viscous heating term monotonically increases since viscosity only acts to reduce the amplitude of waves and, hence, only converts kinetic energy into internal energy and not vice versa.
The increase in internal energy comes mostly from viscous heating, which is three times bigger than the adiabatic heating. This indicates that the relaxation process is dominated by viscous damping. By considering experiments with identical initial setups, but with different viscosities, it is possible to show that the initial and final internal and magnetic energies in these systems are the same (to within numerical error); however, the proportion of viscous heating to adiabatic heating is greater in the experiment with high viscosity, indicating that increasing the viscosity increases the rate of damping, but does not affect the final equilibrium state. This is in agreement with Fuentes-Fernández et al. (2011), who demonstrated both analytically (in 1D and 2D) and numerically (in 1D, 2D and 3D) that the final equilibria of non-resistive MHD relaxation processes principally depend on the differences between the final and initial total pressures in the system.
By t = 20t f , the oscillations in all the energies are basically completely damped. After this the energies maintain constant values, indicating the system has essentially achieved an equilibrium state. Finally, we note here that the total integrated current in the domain is initially 18.0, but it falls to 13.5 during the relaxation. This fall in current is simply a consequence of the fact that, on the boundaries, the magnetic field parallel to the boundaries may vary and, hence, this is not unexpected.
Total force
To check in more detail that our final state is an equilibrium, we first consider the balance of the Lorentz force and the plasma-pressure force. Filled contours of the total force (the Lorentz force, j × B, plus the plasma-pressure force, −∇p), drawn in three different planes perpendicular to the separator, reveal that the total force in the final state is zero everywhere, except very close to the separator and along the separatrix surface of the null nearest to the plane plotted (Fig. 6). The lack of force balance in the immediate vicinity of these topological features is not surprising since similar behaviour is found in the equilibrium field associated with collapsed 2D and 3D null points, where an infinite-time collapse of the null points is seen (e.g., Klapper 1998; Craig & Litvinenko 2005; Fuentes-Fernández & Parnell 2012, 2013). Thus, these highly localised, residual forces suggest that separators may also undergo an infinite-time collapse. Further evidence of this is given in Sect. 5.6. Along the separator itself the Lorentz force vanishes (since j remains parallel to the z-axis along the separator) and so the total force here is simply the pressure force (Fig. 7a). It acts outwards from around the middle of the separator towards the nulls and is small outwith the separator along the z-axis.
In a 1D cut through the depth (e.g. the solid white line in Fig. 4) and across the width (e.g. the dashed white line in Fig. 4) of the current layer, in the plane z = 0.5, the Lorentz and pressure forces behave similarly, but are opposite in sign. This means the total force vanishes everywhere except where the cut crosses the current layer (Figs. 7b and 7c). Note, the residual force through the depth is too weak to be seen in this graph. These small residual net forces at the current layer indicate that the current here is still growing, as expected in the case of an infinite-time singularity. Similar cuts through a current layer formed after the collapse of a 2D null show the same sort of behaviour for the total forces. Residual forces for the collapse of a 2D null or a 3D separator are therefore found to lie within or on the edge of the current layer. The net force through the depth of the current layer, which has a peak magnitude of 0.026 (so not visible in Fig. 7b), acts to squeeze the current layer thinner. Reconnection will eventually occur at the current layer once it is sufficiently thin that numerical diffusion becomes important; we stop all experiments discussed in this paper before this takes place. The net force across the width acts to widen the current layer. It has a peak magnitude of 0.071, some 2.5 times larger than the net forces along the length of the current layer and through its depth. This suggests that the current layer is more likely to widen rather than lengthen as the slow relaxation continues.
The second test we carried out to see if our final state is an equilibrium was to check the value of the pressure along the magnetic field lines in the final state. In our system, an equilibrium is achieved when the forces (Lorentz and pressure) balance. Taking the dot product of the sum of these forces with B gives

(j × B − ∇p) · B = −(∇p) · B = 0.

This implies that, in an equilibrium state, pressure will be constant along field lines. Although not plotted here, the pressure was found to remain constant (to within 1.5%) along magnetic field lines, indicating that, in general, our system may have achieved an equilibrium state.
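The field-line check just described can be sketched as follows: integrate dx/ds = B/|B| to trace a line and sample the pressure along it. In the sketch, B_field and pressure are placeholders (the analytic Appendix field with assumed parameter values, and a uniform pressure); the real check would interpolate B and p from the relaxed simulation snapshot instead.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder field: the analytic Appendix field with assumed parameter values.
A, B_, C, JSEP, L = 1.0, 0.5, 1.0, 1.5, 1.0

def B_field(x, y, z):
    Bx = x + C*x*z + B_*y*z - 0.5*JSEP*y
    By = (2*A - C)*y*z - (1 + L*A)*y + B_*x*z + 0.5*JSEP*x
    Bz = A*(L*z - z**2) + 0.5*C*x**2 + (A - 0.5*C)*y**2 + B_*x*y
    return np.array([Bx, By, Bz])

def pressure(x, y, z):
    return 1.0          # placeholder; the real check samples the relaxed p

def rhs(s, r):
    b = B_field(*r)
    return b / (np.linalg.norm(b) + 1e-12)   # unit tangent along the field line

sol = solve_ivp(rhs, [0.0, 2.0], [0.3, 0.2, 0.5], max_step=0.01)
p_line = np.array([pressure(*r) for r in sol.y.T])
var = (p_line.max() - p_line.min()) / p_line.mean()
print(f"relative pressure variation along the line: {var:.2%}")
```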
Nature of the current layer
So far we have focussed on just one experiment with an initial current of j sep = 1.5. Here, however, we now consider the effects of varying the magnitude of the initial uniform current j sep on the nature of the current layer formed in the final equilibrium states. In these experiments the initial setup is identical apart from the initial current, j sep which takes one of the following values, 0.75, 1.0, 1.25, 1.5 and 1.75.
First, the magnitude of the current within and outside the current layer is discussed in Sect. 5.1. Then the twist of the current layer is described in Sect. 5.2, whilst in Sect. 5.3 the dimensions of the current layer are calculated. The behaviour of the plasma pressure and the balance of the forces through the current layer are studied in Sect. 5.4 and Sect. 5.5, respectively. Finally, Sect. 5.6 verifies the infinite time collapse of the field about the separator and calculates the growth rate of the current layer.
Current intensity
In the final equilibria of all experiments |j| is found to be strongest along the separator, although enhanced current is also found on the separatrix surfaces (Fig. 8). Everywhere else it is very close to zero. The surface plots of |j| in Fig. 8 show the distribution of current in various horizontal planes for the final equilibrium of the main ( j sep = 1.5) experiment. Fig. 8a shows a plane perpendicular to the separator just below the upper null. The current in this cut peaks at the separator, but is also strong along the separatrix surface of the upper null. A small enhancement of current along the separatrix surface of the lower null also occurs. In the plane z = 0.5 (Fig. 8b), a large, sharp peak of current exists at the separator clearly denoting the position of the current layer. The locations of both separatrix surfaces are also clearly visible with ridges of current, approximately 4.2 times smaller than that at the separator current layer, along them. In Fig. 8c, the current in a plane just below the lower null, and therefore not cutting the separator, is plotted. There is an enhanced ridge of current all the way along the separatrix surface of the lower null which peaks on the z-axis. This cut suggests that the current layer may extend beyond the separator. These plots only show the magnitude of the current. Later we consider the direction of the current within the current layer.
The ratio of the mean current in the separator current layer about the separator over the mean current on the separatrix surfaces ( j cl / j ss ) increases with the initial current j sep from a factor of just 2.6 when j sep = 0.75 up to 3.7 for the case with j sep = 1.75 (Table 1). These factors are all much greater than two indicating that the current at the separator is not simply a combination of the current enhancements from the two separatrix surfaces, but is itself a genuine current layer associated with the separator.
In Fig. 9, the distribution of the parallel current along the z-axis is plotted. The final lengths of the separators are all dependent on the initial j sep (see Sect. 5.3 for further details) and so, to enable the parallel currents to be compared, the lengths of the separators have all been normalised to one. Thus, in this plot z* = (z − z_ln)/l_sep, where z_ln is the z coordinate of the lower null in the final equilibrium and l_sep is the length of the separator in the final equilibrium. The parallel current (j∥) along the z-axis is positive along the separator, but drops sharply at the nulls, becoming negative in sign outside the separator. These negative values increase slightly before decreasing away from the separator. The strong currents at the top and bottom boundaries are a result of the line-tied boundary conditions on the magnetic field which prevent the separatrix surfaces from moving. The local peaks in magnitude of these currents just outside the separator, in the experiments with the largest initial currents, suggest that the separator current layers have reverse currents at their ends. Although not commonly discussed, reverse currents have also been found associated with current layers formed at 2D null points (e.g., Titov & Priest 1993; Bungey & Priest 1995).
The plot of the parallel current along the z-axis (Fig. 9) has an asymmetric profile, with a greater value as you approach the lower null along the separator than as you approach the upper null, in all experiments. We suspect this is due to asymmetries in the initial field and plan to investigate this further in future work.

Fig. 9: j∥ along the z-axis, normalised to the length of the respective separator, for experiments with initial current j sep = 0.75 (black), 1.00 (blue), 1.25 (green), 1.5 (orange) and 1.75 (red).

Furthermore, the current peaks about 42%-43% of the way along the separator in all cases. From Fig. 9, it is clear that the average and maximum values of |j| along the separator increase with the initial current j sep (as indicated in Table 1). The gradient between consecutive maximum and average values increases with j sep, except between j sep = 1.5 and 1.75, where the gradient decreases slightly. This is probably due to the fact that the experiment with j sep = 1.75 was not run for as long as the other experiments, and so is not quite as relaxed. That run was ended early since numerical dissipation, evidenced by the formation of additional nulls, started shortly after the final equilibrium state shown here. It is also noted that |j| along the separator varies more as j sep is increased, since the average of |j| pulls away from the maximum |j| (again, as seen in Fig. 9).
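Overplotting the runs as in Fig. 9 requires mapping each profile onto the normalised coordinate z* = (z − z_ln)/l_sep. A minimal sketch of that step, using a hypothetical Gaussian profile in place of the measured j∥:

```python
import numpy as np

def normalise_profile(z, j_par, z_lower_null, z_upper_null, n=201):
    l_sep = z_upper_null - z_lower_null
    z_star = (z - z_lower_null) / l_sep      # 0 = lower null, 1 = upper null
    grid = np.linspace(0.0, 1.0, n)
    return grid, np.interp(grid, z_star, j_par)

z = np.linspace(-0.2, 1.3, 500)
j_par = np.exp(-((z - 0.42) / 0.3)**2)       # hypothetical stand-in profile
grid, j_norm = normalise_profile(z, j_par, 0.0, 1.05)
print(f"peak at z* = {grid[np.argmax(j_norm)]:.2f}")   # ~0.40 here
```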
Current layer twist
From the isosurface of j in Figs. 2c and 2d and from the contours of current in cuts through the separator in Fig. 8, we can see that the current layer is twisted, i.e., as z varies, the current layer rotates. Here, we consider how this twist varies with j sep , after briefly explaining why such a twist arises.
Initially, the two separatrix surfaces lie in vertical planes which intersect at an angle dependent on the initial j sep (in the j sep = 1.5 case the angle is roughly π/3). Also, the spine lines from each null, which bound on one edge the separatrix surface of the other null, initially lie in xy-planes; thus they are at right angles to the initial uniform current. The relaxation process causes the two separatrix surfaces to close up and run almost concurrently in the local vicinity of the separator. Midway along the separator (z = 0.5) this is achieved by both separatrix surfaces curving equally in towards each other (Fig. 3), but at the end of the separator the separatrix surface associated with the local null does not move; instead, the other separatrix surface (and thus the spine of the local null) moves. This is due to the initial Lorentz force which, in the z = 0.5 plane, is such that both separatrix surfaces close in towards each other (see Fig. 3). However, at each null the initial Lorentz force is greater across its spine than it is across its separatrix surface. So, at the ends of the separator, the local separatrix surfaces essentially maintain their original positions and thus the current layer must rotate along its length through an angle approximately equivalent to that between the planes of the initial separatrix surfaces. Thus the angle, θ, through which the current layer twists between the lower and upper nulls depends on the initial current j sep (Table 1).
Length of current layer
In order to determine the dimensions of the current layer, we need to define where it starts and ends. The length of the current layer, l sep , is defined as the distance between the two null points (i.e., the length of the separator) in the final equilibrium. In Sect. 5.1, we have seen that these are also the points at which the current changes sign. This means we do not include the reverse current regions when determining the length of the current layer.
During the relaxation the null points move apart along the z-axis (as discussed in Sect. 4.1) and so all the equilibrium current layers have lengths greater than 1 (Table 1). As j sep increases, the length of the current layer increases due to the greater initial Lorentz force.
Width and depth of current layer
By looking at Figs. 2 and 4, we can see that the current layer's depth is many times smaller than its width, which is much shorter than its length. However, quantifying the width and depth of the current layer is not trivial since the current gradually decreases rather than stopping abruptly. We consider two approaches to determine the width and depth of the current layer in cuts perpendicular to the separator. The two methods are (i) the contour method, which uses the last elliptical current contour before the current contours deform as they start to extend along the separatrix surfaces (i.e., become X or bone shaped), and (ii) the full width at half maximum (FWHM) of the current. The first method is our preferred method because it typically accounts for more of the current about the separator and the values of the current contours vary less between cuts than in the second method, but we include both for completeness.

Fig. 10a shows 1D slices of |j|, in the z = 0.5 plane, through the depth (solid) and across the width (dashed) of the current layer for all the different experiments. The 1D slices of |j| through the current layer depth show significantly enhanced |j| forming a narrow peak about the separator. Elsewhere along this slice the current is small.
The width and depth of the current layer vary along the current layer's length. Using the contour method, they are defined by examining contours of |j| in cuts across the current layer. Plotting a contour in each cut at a value of |j| which only outlines the current layer, and not the enhanced current along the separatrix surfaces, allows the width and depth to be measured. In other words, we count only the current down to the inflection point of |j| to pick out the current layer (the transition point between elliptical and X-shaped current contours). In all cases, the same contour goes through the two inflection points that lie either side of the separator. Once the correct contour has been found the width and depth of the current layer, along the length of the separator, are determined.
In Figs. 10b and 10c, the current layer's depth and width, respectively, determined using the contour method, are plotted, against z, for all the different experiments. The current layer depths are greatest away from the nulls and narrowest at either end of the current layers near the nulls (Fig. 10b). For the current layers with the largest initial j sep , the widths follow a similar profile in which they bulge at their middle, but the widths of the other current layers remain essentially constant along the current layer's length (Fig. 10c).
In order to see how the widths and depths determined using the contour method compare to those calculated with the FWHM method, we determine the depths and widths in the z = 0.5 plane using both methods. For each experiment, the results from the two methods (where the contour method is denoted by CM) are presented in Table 1. The values of the current contours used to make these measurements in both the contour method and the FWHM method are indicated on the cuts in Fig. 10a. In general, the FWHM estimates of the current layer's width and depth are smaller than the contour method's, except in the case with the lowest initial current. Both methods indicate that as the initial current increases, so do the dimensions of the current layer. In contrast to the contour method, the value of the FWHM, and hence the contour used to calculate the widths and depths, varies greatly along the length of the separator because the maximum current along the separator changes quite considerably with length along the separator, as shown in Fig. 9. We therefore do not feel that the FWHM method is as robust as the contour method. However, the FWHM method does indicate that the higher the initial current, j sep, the closer the equilibrium current layer appears to be to a singularity, since the depth of the current layer determined using this method decreases with increasing initial current.
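The FWHM estimate itself is simple to compute from a 1D slice of |j|. The sketch below locates the two half-maximum crossings by linear interpolation; the layer-on-background profile is a hypothetical stand-in, not the simulation data.

```python
import numpy as np

def fwhm(x, j):
    j = j - j.min()                      # remove the quiet background level
    half = 0.5 * j.max()
    above = j >= half
    i0 = np.argmax(above)                        # first point above half max
    i1 = len(j) - 1 - np.argmax(above[::-1])     # last point above half max
    # interpolate linearly to the exact half-maximum crossings
    xl = np.interp(half, [j[i0 - 1], j[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [j[i1 + 1], j[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-0.5, 0.5, 1001)
j = 1.0 + 9.0 * np.exp(-(x / 0.02)**2)   # hypothetical slice through the layer
print(f"FWHM estimate: {fwhm(x, j):.4f}")  # ~0.0333 for this profile
```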
Plasma pressure
As already seen, in the final equilibrium state, cuts in planes perpendicular to the separator show that the separatrix surfaces collapse, creating cusp-shaped regions about the separator, within which lie regions of enhanced pressure and outwith which the pressure falls off. The pressure difference (the pressure minus the initial pressure, p − p 0) in the z = 0.5 plane is shown in Fig. 11a, for the experiment with initial current j sep = 1.5. The deformation of the separatrix surfaces to produce cusps at the ends of the current layer is analogous to that seen in 2D when the separatrices of a 2D null collapse to form a current layer (e.g., Klapper 1998; Craig & Litvinenko 2005). The resulting pressure enhancements within the two cusp regions are also reminiscent of these 2D current layers. The cusp regions form due to the requirement that total pressure must balance across the current layer in an equilibrium state. From the lower-left and upper-right flux domains in Fig. 11a, the magnetic field approaching the current layer tends to zero, but from the other two flux domains it tends to a finite value. For total pressure balance, the plasma pressure must be higher near the current layer in the first pair of domains than in the latter pair. The two flux domains with higher pressure form cusp regions as the magnetic field and pressure form a spiked wedge between the two other domains that lie almost parallel near the separator. Fig. 11b shows the 3D extent of the regions of enhanced (yellow) pressure that occur inside the cusp regions about the separator and the pressure outside the cusps, which falls off away from the separator (blue). In particular, it is clear that the four regions extend beyond the ends of the separator, where one or other of the separatrix surfaces is bounded by a spine. The pressure difference weakens in these areas as you get further above or below the nulls off the ends of the separator, so it is possible that, if the domain were much longer, the pressure would reduce to uniform far away from the ends of the separator.
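The cusp argument is just the statement of total pressure continuity across the layer. Written out in a minimal form (our restatement, not an equation from the paper):

```latex
\[
  p + \frac{B^2}{2\mu_0} = \text{const. across the layer}
  \quad\Longrightarrow\quad
  p_{\mathrm{cusp}} - p_{\mathrm{out}}
  = \frac{B_{\mathrm{out}}^2 - B_{\mathrm{cusp}}^2}{2\mu_0},
\]
```

so in the two flux domains where B tends to zero at the layer, the plasma pressure must rise to make up the full magnetic pressure of the neighbouring domains, which is exactly what fills the cusps.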
The resulting variation in plasma pressure in the final equilibrium obviously affects the plasma beta within the system. From Fig. 11c, which displays contours of the plasma beta in the cut at z = 0.5 for the equilibrium state, it is apparent that the enhanced regions of beta are confined to within the cusps close to the separator, instead of being high everywhere within the vicinity of the separator (cf. Fig. 1). Furthermore, the overall plasma beta in the system is slightly lower (β = 6.9) than it was initially.
There is enhanced plasma pressure along the length of the separator itself (Fig. 12a), producing a pressure gradient and, hence, a pressure force, as already discussed in Sect. 4.3. To enable all the experiments to be compared, the lengths of the separators have all been normalised to one in the same way as they were for Fig. 9. The pressure enhancement along the separator is greatest in the experiment with the highest initial current and in all cases reaches its peak at about 57%-58% of the way along the length of the separator. Beyond the ends of the separator, along the z-axis, the plasma pressure becomes constant, but the plasma pressure along the z-axis above the upper null is slightly higher than it is below the lower null. Again, we suspect this is due to asymmetries in the initial field and plan to investigate this further in the future.
A cut through the depth of the current layer, in the plane z = 0.5, reveals that the plasma pressure peaks at the separator, whilst in a cut across its width the pressure is almost constant (Fig. 12b). This behaviour agrees with that seen in the 2D cut of the pressure in the z = 0.5 plane (Fig. 11a) and indicates that in the immediate vicinity of the current layer there is a plasma-pressure gradient opposing the collapse of the current layer (also seen in Fig. 7b). The details of the small residual forces that remain in the equilibrium state of each experiment are discussed next.
Forces through the depth and across the width of the current layer
As already seen from Fig. 6, each experiment reaches a state in which all the forces balance everywhere within the domain, except within the current layer itself and along the separatrix surfaces. Here, the residual forces are both small (in comparison to the initial forces) and highly localised. We call this the 'equilibrium' state, although the field is actually only in a quasi-equilibrium. In the equilibrium state, the Lorentz force vanishes along the separator, which means that the total force here is simply the pressure force (Fig. 13a; here the length of each separator is normalised to one). The behaviour of the total force along the z-axis in the final equilibrium state is the same in each experiment: it acts outwards towards both nulls along the separator, from the same point just over half way along the separator where the plasma pressure reaches a maximum. However, the magnitude of this force increases with j sep. The total force through the depth of the current layer acts inwards towards the separator so as to squeeze the current layer thinner (Fig. 13b), whilst the total force across the width acts outwards away from the separator (Fig. 13c). Naturally, in both figures the total force is seen to increase with increasing initial current j sep. This behaviour of the total force perpendicular to the separator, which acts to perpetuate the collapse of the separator, is the same as that seen in current layers formed from the collapse of a 2D null (e.g., Fuentes-Fernández et al. 2011).
Growth rate of the current layer
These small, non-zero, and highly localised forces about the separator indicate that the current layer itself is not yet in equilibrium, even though the rest of the system is. Indeed, as already mentioned, it is possibly undergoing an infinite-time collapse, as is seen during the collapse of null points in both 2D and 3D (Klapper 1998; Pontin & Craig 2005; Fuentes-Fernández & Parnell 2012, 2013). Here, we investigate how the current grows within the current layer and how the initial current j sep affects this.
This form of growth is the same as that seen in the collapse of 2D and 3D nulls and suggests that there is an infinite-time singularity along the separator, implying that the system is attempting to reach a true singularity which it cannot achieve in a finite time. Since we have only followed the time evolution over one order of magnitude increase in time, however, we cannot be certain. The growth rate, a 1, is proportional to the initial uniform current j sep and, in all cases considered here, is less than 0.5. The same trend is found for the growth of the minimum value of |j| along the separator. In each experiment the maximum value of |j| occurs around z = 0.4 and the minimum values occur around the upper null.
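The fitted functional form is not reproduced in the text above; as a sketch of the kind of fit involved, the snippet below assumes a power-law growth |j|_max ∝ t^a1, a form commonly used in the infinite-time-collapse literature (this assumption is ours, not the authors' stated fit), and recovers the exponent by log-log regression on hypothetical measurements.

```python
import numpy as np

def fit_growth_rate(t, j_max):
    # linear regression in log-log space: log j = log j0 + a1 * log t
    a1, log_j0 = np.polyfit(np.log(t), np.log(j_max), 1)
    return a1, np.exp(log_j0)

t = np.linspace(5.0, 50.0, 100)      # window after the oscillations damp
j_max = 2.0 * t**0.3                 # hypothetical peak-current measurements
a1, j0 = fit_growth_rate(t, j_max)
print(f"a1 = {a1:.3f}")              # < 0.5 for all j_sep in these experiments
```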
Conclusions
In this paper, we have performed the first non-resistive MHD relaxation of a single non-potential separator. Here, we have analysed the results from five experiments with varying initial uniform current in which an initially non-equilibrium magnetic field, containing two null points, their associated spines and separatrix surfaces and a separator connecting the two nulls, is allowed to evolve to an equilibrium state. These experiments determine where the current layer forms in a magnetic field containing various different topological features and what the characteristics of the current layer are.
In our experiments, the main current layers formed are centred on the separator. Separators are important topological features since, due to their position at the boundary of four topologically distinct flux domains, current builds up easily along them, as seen in our numerical experiments. Isosurfaces of the current reveal that the current layer is essentially a flat twisted band about the separator. However, lower isosurfaces of current reveal a more complex shape similar to that of a hyperbolic flux tube. At the ends of the separator, near the nulls, cross-sections perpendicular to the separator through the enhanced current regions are elongated ellipses that are aligned with the separatrix surface of the null nearest to the cross-sectional cut. In the middle of the separator the cross-sectional cuts have an X-type shape as weak wings of current are found extending along both separatrix surfaces. The separator current layers formed twist about the separator. Their degree of twist is dependent on the strength of the initial current. The current layer is twisted due to the fact that the initial uniform current is aligned with the separator causing the separatrix surfaces to twist about the separator.
The current accumulations along the separator are non-uniform, probably due to the initial asymmetries in the skeleton. Also, the current profile along the z-axis possesses reverse currents outside the separator, as has been observed in some 2D current sheets. The dimensions (width, depth and length) of the current layer, as well as the amount of current in the current layer, are all found to depend on the initial current j = (0, 0, j sep).
The final states of our experiments are all in equilibrium everywhere except near the separator and along the separatrix surfaces. In these highly localised regions, small residual forces remain causing the separator to slowly lengthen and widen throughout the relaxation, and also to continually flatten and strengthen in current. This slow, but continual evolution suggests the system is approaching an infinite-time singularity as is seen in the collapse of 2D and 3D nulls (e.g. Fuentes-Fernández & Parnell 2012). This would imply that a true equilibrium could not be achieved in a finite time.
The plasma within the experiments starts off uniform, but, in the final equilibrium state, the separatrix surfaces about the separator have collapsed to form cusp regions in planes perpendicular to the separator: the plasma pressure builds up within the cusp regions and outwith them it falls off, as seen in the collapse of 2D nulls. Cusps of this nature are required to provide total pressure balance across the current layer.
The experiments considered here are the first such numerical models for separator current layers formed through non-resistive MHD relaxation. Here, we consider current layers arising from initial non-equilibrium magnetic fields with uniform current parallel to the separator and have observed that the current builds along the separator throughout the relaxation, as opposed to building at the null points. We would expect that having a smaller plasma beta would lead to higher currents building up at the separator current layer, since the pressure gradients that counteract the collapse of the separator would be weaker. We intend to investigate current layers formed in regions of low plasma beta in a follow-up paper. Furthermore, since other possible orientations for the current may affect the final equilibrium state, we will also study current layers created from different initial magnetic field configurations.
Short length scales are necessary for 3D magnetic reconnection to take place, and so, current layers at separators, such as those formed here, are natural sites for 3D magnetic reconnection. In the future, we will use the equilibria formed here as initial states in order to study magnetic reconnection at separator current layers.
The initial analytical magnetic field was chosen as it represents a field with two 3D null points whose separatrix surfaces intersect to form a single separator connecting the nulls. The field has constant current in the z direction, parallel to the separator. It was formed by starting with the lowest-order (quadratic) magnetic field that represents two nulls joined by a separator. Such a field contains 27 unknown parameters since, for each component of the magnetic field (B_x, B_y, B_z), there are 9 terms (x, y, z, xy, xz, yz, x^2, y^2, z^2). Without loss of generality most of these terms can be eliminated by satisfying a series of conditions. The conditions that we impose on our field are as follows:
- ∇ · B = 0.
- j = j_z ẑ, so the current is constant and is directed along the separator.
- B = 0 only at x = y = z = 0 and at x = y = 0, z = L, to give two nulls a distance L apart.
- Only one separator exists and it lies along the z-axis.
- The lower/upper null is positive/negative with a vertical separatrix surface and a spine lying in the z = 0/z = L plane.
Satisfying these conditions allows the general field with 27 parameters to be reduced to a field of the form shown in Eq. (A.1) with just five parameters (note, the magnetic field and length of the system have scaling factors B_0 and L_0, which are set equal to one here):

B_x = x + cxz + byz − (j_sep/2) y,
B_y = (2a − c)yz − (1 + La)y + bxz + (j_sep/2) x,    (A.1)
B_z = a(Lz − z^2) + (c/2) x^2 + (a − c/2) y^2 + bxy.
The length of the separator, L, is set to one in all experiments considered here. The four parameters a, b, c and j_sep are constrained so as to satisfy the conditions listed previously; in particular, j_sep^2 < 4(1 + aL), together with a further bound relating j_sep^2 and L^2. Varying the parameters a and c modifies the geometry of the field lines in the separatrix surfaces of both nulls. Varying the parameter b rotates the upper null's separatrix surface relative to the lower null's separatrix surface. Finally, j_sep, the non-potential parameter, allows the separatrix surfaces of both nulls to curl around the separator.
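The stated properties of this field are easy to verify symbolically. The sketch below is our own check, not part of the original paper; it confirms that Eq. (A.1) is divergence-free, carries the uniform current (0, 0, j_sep), and has nulls at z = 0 and z = L on the z-axis.

```python
import sympy as sp

x, y, z, a, b, c, L, jsep = sp.symbols('x y z a b c L j_sep')

Bx = x + c*x*z + b*y*z - jsep*y/2
By = (2*a - c)*y*z - (1 + L*a)*y + b*x*z + jsep*x/2
Bz = a*(L*z - z**2) + c*x**2/2 + (a - c/2)*y**2 + b*x*y

div_B = sp.simplify(sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z))
jx = sp.simplify(sp.diff(Bz, y) - sp.diff(By, z))
jy = sp.simplify(sp.diff(Bx, z) - sp.diff(Bz, x))
jz = sp.simplify(sp.diff(By, x) - sp.diff(Bx, y))
print(div_B, jx, jy, jz)       # -> 0 0 0 j_sep

# Both nulls lie on the z-axis, a distance L apart.
at = lambda zz: [sp.simplify(comp.subs({x: 0, y: 0, z: zz})) for comp in (Bx, By, Bz)]
print(at(0), at(L))            # -> [0, 0, 0] [0, 0, 0]
```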
Acknowledgements. JEHS would like to thank STFC for financial support during her Ph.D., and CEP acknowledges support from the STFC consolidated grant. We would like to thank the referee for useful comments. | 2014-10-31T10:04:35.000Z | 2014-10-31T00:00:00.000 | {
"year": 2014,
"sha1": "dac7ac34606a82bd00c392e8d68b5de852715b68",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2015/01/aa24348-14.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "dac7ac34606a82bd00c392e8d68b5de852715b68",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
237478780 | pes2o/s2orc | v3-fos-license | Novel Strategy for Gallium-Substituted Hydroxyapatite/Pergularia daemia Fiber Extract/Poly(N-vinylcarbazole) Biocomposite Coating on Titanium for Biomedical Applications
The current work focuses on an innovative nano-gallium-substituted hydroxyapatite (nGa-HAp)/Pergularia daemia fiber extract (PDFE)/poly(N-vinylcarbazole) (PVK) biocomposite coating deposited on titanium (Ti) metal in an eco-friendly and low-cost way through electrophoretic deposition for metallic implant applications. Detailed analysis of this nGa-HAp/PDFE/PVK biocomposite coating revealed many encouraging functional properties, such as the structure and uniformity of the coating. Furthermore, the incorporation of gallium and PDFE fruit extract into the biocomposite enhances its performance in in vitro antimicrobial, cell viability, and bioactivity studies. In addition, the mechanical and anticorrosion tests of the biocomposite material showed improved adhesion, hardness, and corrosion resistance, which were attributed to the presence of PDFE and PVK. Also, the swelling and degradation behaviors of the as-developed material were evaluated in simulated body fluid (SBF) solution. The results revealed that the as-developed composite exhibited superior swelling and lower degradation, which evidences the stability of the composite in the SBF solution. Overall, the results of the present study indicate that these nGa-HAp/PDFE/PVK biocomposite materials, with improved mechanical, corrosion-resistance, antibacterial, cell-viability, and bioactivity properties, appear to be promising materials for biomedical applications.
■ INTRODUCTION
Developing novel bioactive materials with multifunctionality for orthopedic application is one of the most important and dynamic research areas in the materials science field in the 21st century. 1,2 During the last few decades, artificial orthopedic substitutes have gained importance due to the increase in bone defects and joint degradation in the world's aging population. 3 The requirement for the fabrication of metallic implants for load-bearing orthopedic applications has therefore increased. Ti and its alloys are widely used metallic implant materials for orthopedic applications due to their commendable fatigue properties, high strength, low wear rate, nontoxicity, and biocompatibility. 4,5 However, owing to their lack of osteoconductive and osteoinductive properties and their inherent bioinertness, research has mainly focused on the surface modification of Ti metal implants using HAp-based bioceramic coatings. 6 In recent years, HAp-based coatings have been appraised and developed as among the most promising materials for orthopedic applications. 7 HAp is one of the most popular biomaterials, and it is used in various biomedical applications due to its high osteoconductivity and excellent biodegradability. 8 Owing to its excellent properties and similarity to natural bone, biomaterials based on HAp are widely used in various biomedical applications in the form of bone fillers, coating materials for metallic implants, etc.
However, most of them were found to have potential toxicity issues in this area. The investigation of bioactive and biocompatible materials such as HAp for various health care applications is still ongoing as a combined effort of chemists, biologists, physicists, and engineers. 9,10 The cost of production is the main issue for the chemical syntheses of HAp, as they involve expensive chemicals. Hence, to reduce the production cost for the synthesis of HAp, different biogenic materials such as coral, eggshells, seashells, freshwater shells, and bovine bones are widely used as calcium sources. 11 Among the above-mentioned biogenic wastes, Gastropoda shells are easily available in nature and contain plenty of calcium carbonate, which can be used as a calcium source for the synthesis of HAp. Further, it has also been shown that HAp obtained from natural sources has better bioactivity than chemically synthesized HAp bioceramic materials. 12,13 In addition to this, some biological/structural performances of pure HAp, such as implantation efficacy, osseointegration ability, in vivo degradation rate, and antibacterial properties, can be improved by modifying the composition through substitution or doping of small amounts of foreign ions for a variety of applications. 14 Moreover, nanosized synthetic HAp particles closely resemble the HAp crystals in natural human bone. 15 Inspired by the limitations of pure HAp and the benefits of nanosized HAp particles, we undertook this research. The employment of plant extracts for the fabrication of nanosynthetic HAp has received increasing attention owing to the absence of toxic chemicals. 16,17 In the last few years, a growing number of plants have been used for rapid, efficient extracellular synthesis of HAp. 18−20 Among the plant extracts, those of Chomelia asiatica, Sida acuta, Azadirachta indica, and Gmelina asiatica are cheap and eco-friendly; in particular, Azadirachta indica gum has been used as the template for the synthesis of nano-HAp biomaterials. 21 On the other hand, the antibacterial property of pure HAp is limited. Generally, infection occurs after surgery and can lead to implant rejection. Hence, the antibacterial property of HAp is improved by substituting or doping small amounts of ions for various applications. Recently, several ions, such as La 3+, Ce 3+, Y 3+, Ga 3+, etc., have been incorporated in HAp by substitution for Ca 2+ within the HAp matrix. 22−26 These trivalent ions can affect the biocompatibility, the attachment of biological cells, and the antibacterial property. Among the trivalent metal ions, gallium (Ga 3+) is the most beneficial trace ion, as it decreases bacterial infections in the human body. These facts triggered the recent research on substituting Ga 3+ into HAp. 27,28 Ga 3+-containing apatite has exhibited antibacterial effects under both in vivo and in vitro conditions. Moreover, Ga 3+ has a strong affinity for bone tissue and inhibits bone resorption, which makes it useful in biomedical applications. 29,30 In addition, Ga 3+ exhibits outstanding antibacterial activity and is expected to play a vital role in the acceptance of orthopedic implants. 30 Furthermore, it has been proposed that substitution of Ga 3+ into the HAp composition offers important advantages over pure HAp.
The biological properties of Ga-substituted HAp (Ga-HAp) can be further enhanced by the addition of a suitable reinforcing material. For this purpose, researchers have recently been working on reinforcement with different natural and synthetic fibers. 31 The employment of plant extracts has received increasing attention over chemical- and microbial-based materials because it avoids the use of harsh chemicals. 32,33 Generally, plants and plant extracts have long been used in medical applications, mainly due to the presence of bioactive polyphenolic compounds, which may be stored in plant parts such as leaves, roots, fruits, and seeds. Hence, different plants have been studied using modern scientific approaches to identify the various biological compounds of these medicinal plants. 34,35 Pergularia daemia (Forsk., family Asclepiadaceae) is native to Asia and grows in the plains of the hotter regions of India; it is commonly known as "Veliparuthi" in Tamil Nadu. The excellent antidiabetic, hepatoprotective, anti-inflammatory, and cardiovascular effects of this natural medicinal plant have been reported in Ayurvedic and folk medicine. 36,37 Since no individual bioactive material is available that fulfills all of the essential requirements for biomedical applications, the fabrication of composites has become mandatory as an attractive route to addressing human bone problems. Hence, these attractive facts triggered the present work, which focuses on Pergularia daemia fiber extract (PDFE) combined with the nGa-HAp material. Recent research is mainly directed toward the mixing of polymers with inorganic materials, which exhibit outstanding features with homogeneous mechanical properties. The addition of an electroactive polymer to the nGa-HAp/PDFE at ambient temperature reduces its brittleness.
Alternatively, the nGa-HAp/PDFE composite combined with electroactive polymers offers better dispersion and can prospectively enhance (or maintain) the bioactive and antibacterial properties of the nGa-HAp/PDFE composite material, while providing a broad range of mechanical, structural, and degradation properties. Unfortunately, only a handful of studies on the mechanical behavior of electroactive polymers in nGa-HAp/PDFE composites are available, and none of them have investigated the possibility of using these ternary composites as robust coating materials to resist biofilm formation on metallic implants. Generally, electroactive polymers are an excellent choice for such composites owing to their anticorrosion properties. 38−40 Among the electroactive polymers, poly(N-vinylcarbazole) (PVK) is an excellent candidate because of its superior mechanical and thermal properties. 40 Inspired by the limitations of other polymers and the benefits of PVK, we undertook this research work. Hence, in this paper, we report the fabrication of an nGa-HAp/PDFE/PVK composite on Ti by the electrophoretic deposition method and the investigation of its structural, morphological, mechanical, and corrosion stability, as well as its in vitro biocompatibility. A multifunctional Ti implant surface that simultaneously suppresses bacterial growth and enhances biocompatibility as well as mechanical strength may be achieved by combining components such as nGa-HAp, PDFE, and PVK in composite coatings. This approach has been used recently to develop composite coatings using pulsed laser deposition, electrophoretic deposition, plasma spray, electrodeposition, etc. 41−45 Among these techniques, electrophoretic deposition (EPD) is intensively used to deposit composite coatings on Ti implants, since it is the most established of these deposition techniques. The EPD technique offers the attractive prospect of producing homogeneous and dense polymer, fiber, ceramic, and composite coatings at reduced cost and with high mechanical strength for biomedical applications. The electrophoretic deposition technique provides suspension stability, and the result of this approach is an nGa-HAp/PDFE/PVK composite coating that has the PDFE extract and PVK dispersed in a surrounding nGa-HAp matrix. With this in mind, we fabricated a multifunctional nGa-HAp/PDFE/PVK composite from eco-friendly and low-cost source materials with outstanding antibacterial activity, biocompatibility, and enhanced mechanical properties for biomedical applications.

■ RESULTS AND DISCUSSION
FT-IR Analysis. The FT-IR results (Figure 1a) 15 evidence the substitution of Ga 3+ into the Ca 2+ lattice of the pure HAp sample. The FT-IR peaks observed for the nGa-HAp coating (Figure 1c) showed that the gum-mediated synthesis does not induce any major changes in the PO 4 3− and OH − groups, apart from slight variations of the peaks relative to the Ga-HAp coating. Apart from the typical nGa-HAp peaks, the absorption peaks appearing at 3016 and 1458 cm −1 are attributed to the stretching of the C−H and C=C groups in the PDFE (Figure 1d), respectively. 37 All of these characteristic peaks clearly evidence the presence of both nGa-HAp and PDFE units in the nGa-HAp/PDFE composite. The spectrum obtained after PVK was incorporated into the nGa-HAp/PDFE composite coating is shown in Figure 1e. The FT-IR spectrum shows the peaks attributable to the nGa-HAp/PDFE composite coating, and in addition, the characteristic peak at 1818 cm −1 reveals the stretching of the C−N group, indicating the presence of the PVK polymer in the nGa-HAp/PDFE matrix. 38
All of the above illustrative peaks imply that the nGa-HAp/PDFE/PVK composite coatings contain both the PDFE and PVK units. Thus, this FT-IR study, through the presence of both the PDFE and PVK units, strongly supports the formation of the nGa-HAp/PDFE/PVK composite coatings on the Ti sample (Figure 1e).
XRD Analysis. The formed Ga-HAp sample has a hexagonal crystal structure. 15,23 In the case of substitution of ions, the ionic radius of gallium ions (0.62 Å) is low compared to that of Ca 2+ (0.99 Å), and hence Ga 3+ can more easily substitute for Ca 2+ and take part in the formation of a compound in the HAp matrix through electrostatic interactions. It is also evident that there are no secondary phases of the kind generally seen during the formation of the Ga-HAp matrix.
Hence, the substitution of Ca 2+ by Ga 3+ takes place simply and effectively without changing the crystal structure of the HAp matrix. The XRD peaks observed for the nGa-HAp coating are shown in Figure 2c. 34,35 In the XRD patterns of the nGa-HAp/PDFE/PVK composite sample (Figure 2e), a broad peak reflecting the partly amorphous nature of the nGa-HAp/PDFE/PVK composite exists along with the diffraction peaks of PVK, which shows that the nGa-HAp/PDFE/PVK composite had a more ordered arrangement than the single compounds due to the presence of Ga-HAp. Moreover, a strong diffraction peak located at a 2θ value of 21.76° revealed the face-to-face ordered structure of the PVK present in the nGa-HAp/PDFE/PVK composite. 38 In addition to this, the XRD patterns show that the nGa-HAp coating strongly influenced the crystalline behavior of the nGa-HAp/PDFE/PVK composite. As shown in Figure 2e, the XRD patterns of the nGa-HAp/PDFE/PVK composite show that no significant interfacial phases were formed or lost with the addition of PDFE and PVK, which strongly supports the successful formation of the nGa-HAp/PDFE/PVK composite sample.
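As a quick consistency check on the PVK reflection, Bragg's law converts the reported 2θ = 21.76° into a d-spacing. The Cu Kα wavelength used below is our assumption, since the diffractometer source is not stated in this excerpt.

```python
import math

lam = 1.5406                        # Cu K-alpha wavelength in angstroms (assumed)
two_theta = 21.76                   # degrees, reported for the PVK reflection
theta = math.radians(two_theta / 2.0)
d = lam / (2.0 * math.sin(theta))   # Bragg's law: n*lambda = 2*d*sin(theta), n = 1
print(f"d = {d:.2f} angstroms")     # ~4.08 A, consistent with the face-to-face
                                    # (pi-stacked) PVK ordering discussed above
```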
HRSEM Analysis. The properties of the nGa-HAp, PDFE, and PVK samples are strongly dependent on their structure and morphology. Representative HRSEM micrographs of the HAp, Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK coatings are shown in Figure 3. The HAp and Ga-HAp coatings show granular-like morphology with irregular pores, and thus the coating was rough and irregular in nature. Figure 3c represents the morphology of nGa-HAp-coated Ti, which displays the formation of nanorods. After PDFE was introduced into the nGa-HAp/PDFE composite coating, the typical interconnected and agglomerated rod-like morphology still dominated in the nGa-HAp/PDFE composite coatings, as seen from Figure 3d. The HRSEM micrographs of the nGa-HAp/PDFE composite coatings revealed a slight difference upon incorporation of PDFE into the nGa-HAp unit. It is inferred that the presence of PDFE in the nGa-HAp/PDFE composite matrix yields a uniform composite coating with a less porous and finer morphology, increasing the compactness of the nGa-HAp coatings.
Vickers Micro-Hardness. Any metallic implant that is to be used for orthopedic applications must be tested for its hardness, which is the most significant parameter to be verified before implantation. The Vickers micro-hardness (H v) gives significant information about the load-bearing tendency when the implant is placed in the human body. When PDFE is incorporated, the H v value is found to be slightly higher (354.5 H v) than those of the other HAp coatings. Similarly, the nGa-HAp/PDFE/PVK composite sample showed 360.4 H v, which is greater than those of the nGa-HAp/PDFE-, nGa-HAp-, Ga-HAp-, and HAp-coated Ti samples. An interesting observation is that the hardness increased upon the incorporation of PDFE and PVK. The higher hardness may also be attributed to the uniform and more compact coating morphology of the composite. Thus, the obtained hardness value for the nGa-HAp/PDFE/PVK composite-coated Ti makes it suitable for load-bearing orthopedic applications.
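For reference, the Vickers number follows from the indent geometry as HV = 1.8544·F/d², with F the load in kgf and d the mean indent diagonal in mm. The load and diagonal in the sketch below are illustrative values chosen to reproduce roughly the 360 HV reported for the composite, not the paper's raw indentation data.

```python
def vickers_hardness(load_kgf: float, diagonal_mm: float) -> float:
    # HV = 1.8544 * F / d^2 (F in kgf, d = mean indent diagonal in mm)
    return 1.8544 * load_kgf / diagonal_mm**2

# A ~32.1 um mean diagonal under a 200 gf load reproduces ~360 HV.
print(f"{vickers_hardness(0.200, 0.0321):.0f} HV")   # -> 360 HV
```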
Potentiodynamic Polarization Studies. The corrosion protection performance of the blank Ti, Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK compositecoated Ti samples was studied for long-term applications. To reveal the protection ability of the nGa-HAp/PDFE/PVK composite coating, a polarization study was performed in SBF medium, and the corresponding polarization curve is presented in Figure 6. From the recorded polarization curves, the Tafel region was identified and extrapolated to corrosion potential (E corr ) to get corrosion current (i corr ), and the obtained corrosion parameters are given in Table 1.
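Once i corr is extracted from the Tafel fit, it can be converted into a corrosion rate through Faraday's law (the ASTM G102 form). The equivalent weight and density below are standard handbook values for Ti that we assume here; only the coated-sample i corr of 0.10 μA/cm² comes from this study, and the uncoated value is purely illustrative.

```python
# ASTM G102: CR [mm/yr] = 3.27e-3 * i_corr [uA/cm^2] * EW / rho [g/cm^3].
def corrosion_rate_mm_per_year(i_corr_uA_cm2, eq_weight=11.97, density=4.51):
    # defaults: Ti -> Ti(IV), EW = 47.87/4 ~ 11.97; rho(Ti) = 4.51 g/cm^3
    return 3.27e-3 * i_corr_uA_cm2 * eq_weight / density

for label, i_corr in [("uncoated Ti (illustrative)", 1.00),
                      ("nGa-HAp/PDFE/PVK-coated Ti", 0.10)]:
    print(f"{label}: {corrosion_rate_mm_per_year(i_corr):.2e} mm/yr")
```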
The polarization curves of the Ga-HAp- and nGa-HAp-coated Ti exhibited E corr values of −596.6 ± 3.2 and −545.0 ± 4.4 mV, respectively, which are more electropositive than the E corr value of blank Ti (−618.7 ± 5.1 mV). Moreover, the polarization curves of the composite (nGa-HAp/PDFE and nGa-HAp/PDFE/PVK)-coated Ti showed a further shift toward the positive direction (i.e., −496.2 ± 8.2 and −452.4 ± 6.3 mV) relative to the Ga-HAp- and nGa-HAp-coated Ti. Thus, among the attained polarization curves, the nGa-HAp/PDFE/PVK composite-coated Ti exhibited the maximum shift of corrosion potential (E corr = −452.4 ± 6.3 mV), which clearly reveals its higher corrosion protection performance. Similarly, the current density for the nGa-HAp/PDFE/PVK composite-coated Ti was found to be decreased (i corr = 0.10 ± 0.06 μA/cm 2, Table 1). Among the coatings, the composite-coated Ti, with its compact grain-like morphology over the Ti, proves to be more corrosion resistant than the other coatings. Thus, from the polarization curves, it is evident that the nGa-HAp/PDFE/PVK composite-coated Ti has the noblest corrosion potential value and the lowest corrosion current density, which indicates the greater stability of the composite-coated Ti. The investigation of the polarization plots showed a maximum shift of the corrosion values for the nGa-HAp/PDFE/PVK composite-coated Ti toward the noble direction, which strongly indicates that the composite-coated Ti has the maximum corrosion protection ability in SBF solution; i.e., the composite-coated Ti is stable in physiological solution and enhances the lifetime of the implant for long-term orthopedic applications.
Antibacterial Activity. The antibacterial performance of the Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK composite samples was tested against the Gram-negative strain P. aeruginosa and the Gram-positive strain S. aureus, which are responsible for implant-related infections. The inhibition zones of the as-developed materials at 50 and 100 μL against P. aeruginosa and S. aureus are shown in Figure 7. Among the samples, the nGa-HAp/PDFE/PVK composite sample exhibited the highest zones of inhibition against P. aeruginosa and S. aureus, i.e., 13 and 17 mm for 50 μL, and 17 and 19 mm for 100 μL volumes, respectively. The results show that the presence of Ga in HAp and of PDFE in the composite is responsible for the antibacterial property of the composite (Table 2).
The mechanism involves the interaction of the nGa-HAp/PDFE/PVK composite with the proteins and enzymes of the bacteria. This interaction causes structural damage to the cell wall and bacterial membrane, thereby preventing bacterial reproduction. Also, from the results (Figure 7), it is observed that the zone of inhibition obtained for the composite sample against P. aeruginosa (Figure 7a−d) is higher than that obtained for S. aureus (Figure 7a′−d′), which may be due to the structural difference between the cell wall membranes of Gram-negative and Gram-positive bacteria.
These structural differences lead to a substantial increase in permeability, rendering the organism incapable of its regular functions and, finally, causing cell death.
In Vitro Cell Viability Test. The viability of MG-63 osteoblast cells on the nGa-HAp/PDFE/PVK composite, determined using the MTT assay at different concentrations of 0.5, 5, 50, 100, and 150 μg, is shown in Figure 8. The bar diagram indicates that the nGa-HAp/PDFE/PVK composite exhibited a statistically significant increase in cellular viability.
In Vitro Bioactivity. The in vitro apatite-forming ability of the nGa-HAp/PDFE/PVK composite-coated Ti samples after 7 and 14 days of immersion in SBF is shown in Figure 9a,b. The surfaces of the nGa-HAp/PDFE/PVK composite-coated samples exhibited apatite formation at both 7 and 14 days of immersion in SBF, but the surface coverage and the growth of the apatite formation differed with immersion time. On day 7, the apatite formed on the surface appears as a granule-like apatite structure, which is clear from Figure 9a. On increasing the immersion period to 14 days (Figure 9b), an improved formation of a dense apatite layer with a more granule-like morphology was observed on the nGa-HAp/PDFE/PVK composite-coated Ti surface.
As a result, the surfaces were completely covered with dense HAp, and the formation increased from 7 to 14 days, which strongly evidences that the nGa-HAp/PDFE/PVK composite-coated samples accelerate the biomineralization process in the SBF solution. Generally, the Ca-rich, positively charged surface easily attracts the negatively charged ions (OH − and PO 4 3−) in the SBF to form an amorphous Ca-poor apatite layer, which leads to the formation of the apatite layer on the nGa-HAp/PDFE/PVK composite-coated Ti samples. Thus, from the FESEM morphologies, it is clear that the apatite-forming ability increases from 7 to 14 days of immersion and that the resultant nGa-HAp/PDFE/PVK composite coating exhibits enhanced bioactivity in SBF solution.
Swelling and Degradation Behaviors in SBF. To widen the application of the as-developed composite in the field of medicine, the swelling and degradation behaviors were observed in SBF solution. Also to ensure the nontoxic and bioactive nature of the as-developed composite, the swelling was performed in SBF, which is similar to human blood plasma.
It is also very important to note that if any material is too stiff, there is a lack of adherence between the composite coating and the implant. In general, the presence of HAp in the composite improves the swelling behavior. The swelling behavior of all of the samples is shown in Figure 10. The results indicated that the swelling percentage increased up to 25% for HAp, 32% for Ga-HAp, 39% for nGa-HAp, 51% for nGa-HAp/PDFE, and 52% for the nGa-HAp/PDFE/PVK composite during 14 days of immersion, after which no significant changes were observed. From the swelling curves, it is inferred that in the presence of the neem gum-mediated HAp, the swelling behavior of the composite sample is more significant. The swelling property also increases the surface-to-volume ratio, thus allowing the proliferation of cells and thereby promoting bone growth. This is because the swelling property allows the sample to take up nutrients from the medium and promotes greater adhesion.
Consequently, the degradation behaviors of the as-developed samples (HAp, Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK composites) are presented in Figure 11. From the results observed, the percentage of degradation for the as-developed nGa-HAp/PDFE/PVK composite was significantly low. It is important to note that an interface forms between nGa-HAp and PVK, which significantly lowers the degradation, thereby ensuring the stability of the composite.
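The gravimetric definitions behind these percentages are not spelled out above; the sketch below uses the conventional forms, with illustrative weights chosen to match the ~52% swelling reported for the composite (the degradation weights are purely assumed values).

```python
def swelling_percent(w_dry, w_swollen):
    # swelling % = (W_swollen - W_dry) / W_dry * 100
    return (w_swollen - w_dry) / w_dry * 100.0

def degradation_percent(w_initial, w_final):
    # degradation % = (W_initial - W_final) / W_initial * 100
    return (w_initial - w_final) / w_initial * 100.0

print(f"swelling: {swelling_percent(100.0, 152.0):.0f}%")      # matches ~52%
print(f"degradation: {degradation_percent(100.0, 97.0):.0f}%") # assumed weights
```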
Thus, the as-developed nGa-HAp/PDFE/PVK composite had an optimal combination of fair swelling and low degradation.

■ MATERIALS AND METHODS
Ethyl acetate (CH3COOC2H5), methanol (CH3OH), and sodium hydroxide (NaOH) of analytical grade were purchased from Aldrich Chemicals, India. All of the chemicals and reagents used for these investigations were used as such without any further purification.
Preparation of Plant Fruit Extracts. Healthy and fresh fruits of Pergularia daemia (Forsk.) were collected from the campus of Periyar University, Salem District, Tamil Nadu, India. The fruits of Pergularia daemia were dried in the shade. The extraction from the fruits was carried out according to the standard protocol. In brief, the fruit fiber material was defatted using petroleum ether and then extracted with an ethyl acetate−methanol mixture using a Soxhlet apparatus at 40°C for 72 h. The residue was subsequently filtered through Whatman no. 1 filter paper. The ethyl acetate and methanolic extracts of Pergularia daemia were further concentrated under vacuum in a rotary vacuum evaporator at 40°C, and the extract was stored in an amber-colored airtight bottle at 4°C for further use in composite preparation.
Preparation of PVK Solution. PVK (0.5 g) was dissolved in acetone under magnetic stirring at 80°C for 1 h and then kept aside for the composite preparation.
The acquired white precipitate was kept in an ultrasonicator at 45°C for 2 h to ensure a homogeneous mixture, and the sample was dried in a hot-air oven at 80°C for 3 h. The dried nGa-HAp powder was washed five times with ethanol and deionized water to eliminate impurities. Finally, the dried nGa-HAp sample was calcined for 24 h, sintered at 700°C for 5 h, and then ground into a fine powder for the preparation of the composite (Scheme 1).
Synthesis of the nGa-HAp/PDFE Composite. The nGa-HAp/PDFE composite was prepared by ultrasonication (EN-60US (Microplus) ultrasonicator, 28 kHz, 150 W), mixing 2 g of nGa-HAp in 20 mL of ethanol−water mixture with 20 mL of PDF extract. This mixture was ultrasonicated for 5 h to ensure excellent dispersion. Finally, the resultant suspension was filtered, washed, dried for 5 h at 60°C, and then ground into a powder.
Synthesis of the nGa-HAp/PDFE/PVK Composite. The electrolyte for the electrophoretic deposition was prepared using three different concentrations (0−3 wt %) of the PVK solution, gradually added to a solution of 2 wt % nGa-HAp/PDFE composite in 40 mL of ethanol−water mixture under constant stirring for 5 h. Before deposition, the final mixture of the nGa-HAp/PDFE/PVK composite (0−3 wt % of PVK) was subjected to a strong ultrasonic treatment for 0.5 h to ensure a clear dispersion of the composite in the electrolyte. The resulting composites were filtered, washed, dried at 80°C for 12 h, and then ground into a fine powder.
Specimen Preparation. A Ti sample (10 × 10 × 3 mm) was used as the substrate for the fabrication of the nGa-HAp/PDFE/PVK composite coating via the electrophoretic deposition technique. Before deposition, all of the Ti samples were treated with abrasive papers (SiC) from 400 to 1500 grit and polishing suspensions. Finally, the polished Ti samples were cleaned by ultrasonication in distilled water, washed with an acetone-ethanol mixture for 15 min, and dried at room temperature before being used for the nGa-HAp/PDFE/PVK composite-coating EPD technique. 9
Electrophoretic Deposition of the nGa-HAp/PDFE/PVK Composite on Ti. During the EPD, a platinum electrode was used as the anode, and the cleaned Ti was used as the cathode (working electrode). The anode and working electrodes were immersed in 100 mL of the nGa-HAp/PDFE/PVK composite suspension while being placed parallel to one another at a distance of 4 cm. The electrolyte solution for the nGa-HAp/PDFE/PVK composite (0−3 wt % of PVK) deposition was prepared by mixing 3 g of the as-prepared ternary composite in 30 mL of ethanol−water mixture under constant magnetic stirring for 1 h and further dispersed ultrasonically for about 1 h to obtain a clear, uniform electrolyte at room temperature. The electrophoretic deposition of a submicron, homogeneous, thick nGa-HAp/PDFE/PVK composite layer (0−3 wt % of PVK) on the Ti cathode was performed at a constant voltage of 30 V for 10 min using a direct-current power supply. After each deposition, the samples were cautiously withdrawn from the electrolyte, rinsed with deionized water, dried for 24 h, and kept in a desiccator at room temperature.
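As a rough sanity check on the deposition step, the sketch below estimates the deposited mass from Hamaker's law for EPD, m = f·μ·c·E·A·t. The mobility μ, sticking efficiency f, and suspension concentration used here are illustrative assumptions, not values measured in this study; only the voltage, electrode gap, time, and sample face area come from the text.

def hamaker_deposit_mass(mu, c, voltage, gap, area, time, f=1.0):
    """Deposited mass (g) per Hamaker's law.
    mu: electrophoretic mobility (cm^2 V^-1 s^-1); c: concentration (g/cm^3);
    voltage (V); gap: electrode distance (cm); area (cm^2); time (s);
    f: sticking efficiency (<= 1)."""
    E = voltage / gap                      # electric field (V/cm)
    return f * mu * c * E * area * time

# Conditions from the text: 30 V, 4 cm gap, 10 min, ~1 cm^2 Ti face.
# mu and c below are assumed, order-of-magnitude values.
m = hamaker_deposit_mass(mu=1e-4, c=0.02, voltage=30.0, gap=4.0,
                         area=1.0, time=600.0)
print("estimated deposit: %.1f mg" % (m * 1e3))   # ~9.0 mg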
Physical Characterizations. Generally, the phase and functional groups of the coating were determined by scraping the coated material off the Ti sample under identical conditions and investigating the resulting powder. Fourier transform infrared (FTIR) spectroscopy was employed to identify the functional groups in the HAp, Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK spectra from 3500 to 500 cm⁻¹ using an Impact 400D Nicolet spectrometer and the KBr pellet method.
X-ray diffraction (XRD) patterns of HAp, Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK were analyzed to confirm the phase compositions using an X-ray diffractometer (XRD, Seifert), with reference to data compiled by the International Centre for Diffraction Data (ICDD).
Vickers Micro-Hardness Test. The micro-hardness of the HAp-, Ga-HAp-, nGa-HAp-, nGa-HAp/PDFE-, and nGa-HAp/PDFE/PVK composite-coated Ti samples was measured using an Akashi AAV-500 series hardness tester (load 490.3 mN, dwell time 20 s, Kanagawa, Japan). The Vickers micro-hardness (Hv) of each sample was taken as the average of five measurements made at various sites.
Corrosion Resistance of the Coatings. The corrosion behavior of blank and Ga-HAp-, nGa-HAp-, nGa-HAp/PDFE-, and nGa-HAp/PDFE/PVK-coated Ti samples was studied in simulated body fluid (SBF) solution at 37°C by the potentiodynamic polarization technique. For this investigation, a characteristic Ti sample, a saturated calomel electrode (SCE), and a platinum plate were used as the working, reference, and counter electrodes, respectively, and the corrosion experiments were performed using a CHI 760C electrochemical workstation (CH Instruments).
The polarization curves were recorded at the open-circuit potential (OCP) after 1 h of immersion of the blank Ti metal in the SBF solution; the time to reach a stable open-circuit potential was limited to 1 h because the Ti surface may change over a longer duration. The potentiodynamic polarization studies of blank and Ga-HAp-, nGa-HAp-, nGa-HAp/PDFE-, and nGa-HAp/PDFE/PVK-coated Ti samples were performed under OCP conditions from −0.8 to 0.4 V at a constant scan rate of 1 mV/s. To confirm the reproducibility and reliability of the polarization results, each test was carried out three times, and the plots were recorded using the CHI 760C software.
Antibacterial Activity. The in vitro antibacterial activity of the as-developed Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK composite samples was evaluated against two bacterial species, the Gram-negative strain Pseudomonas aeruginosa (P. aeruginosa) and the Gram-positive strain Staphylococcus aureus (S. aureus), using the agar disk-diffusion technique and the minimum inhibitory concentration (MIC). P. aeruginosa and S. aureus are the most common pathogens associated with biomaterial-centered infections. The stock solution for both organisms (1 × 10⁸ CFU/mL) was prepared by mixing 1 mL of bacterial solution separately with 9 mL of Luria−Bertani broth and then incubated with shaking at 250 rpm for 24 h at 37°C. Nutrient agar plates were prepared, and the cultures were inoculated for the disk-diffusion technique. Sterile paper disks of 5 mm diameter were dipped in the Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK composite samples. Each sample was applied at concentrations of 50 and 100 μg/mL, placed on the agar plate, and incubated at 37°C for 24 h. After the incubation period, the Petri plates were examined for antimicrobial activity based on the width of the inhibition zone around the disks of each sample.
In Vitro Cell Viability Test. Human osteosarcoma MG63 cells (HOS MG63, ATCC CRL-1427TM) were cultivated in sterile tissue culture flasks containing extraction medium, Dulbecco's modified Eagle's medium (DMEM) supplemented with 1% penicillin−streptomycin and 10% fetal bovine serum, at 37°C in a humidified atmosphere containing 5% CO₂, and the MG63 cells were passaged by trypsinization before confluence. The measurements were performed using an indirect method in which immersion extracts were used for culture to investigate the cell viability of the nGa-HAp/PDFE/PVK-coated Ti samples. Before the measurements, the nGa-HAp/PDFE/PVK-coated Ti samples were sterilized under ultraviolet light during the cross-linking process.
To evaluate the cytotoxicity of the nGa-HAp/PDFE/PVK-coated Ti samples, HOS MG63 cells were seeded in 12-well plates at 10⁴ cells/mL, and the mitochondrial dehydrogenase activity was measured using a modified 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-tetrazolium bromide (MTT) assay. The immersion extracts were collected from 20 mL of DMEM in a humidified atmosphere with 5% CO₂ at 37°C at concentrations of 0.5, 5, 50, 100, and 150 μg. MG63 cells were seeded in 12-well plates at a density of 10⁴ cells per well in 50 μL of medium. For all samples, viability images were taken from at least three to five different locations to obtain an overview of the cell attachment on the nGa-HAp/PDFE/PVK composite-coated Ti samples. The absorbance of the samples was measured with a multimode detector at a wavelength of 570 nm, and the cell viability (%) of the nGa-HAp/PDFE/PVK-coated Ti samples was calculated with respect to the control wells using the following formula: cell viability (%) = (absorbance of sample/absorbance of control) × 100.
In Vitro Bioactivity in SBF. The in vitro bioactivity of the nGa-HAp/PDFE/PVK composite-coated Ti samples was tested by soaking in 30 mL of Kokubo's simulated body fluid (SBF) 46 in a beaker with an airtight lid for 7 and 14 days at 37°C. After the formation of the apatite layer at 7 and 14 days, the immersed samples were taken out and gently rinsed in deionized water. The bonelike apatite formed on the nGa-HAp/PDFE/PVK composite-coated Ti samples was air-dried before investigation by FESEM analysis.
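Returning to the MTT assay above, a minimal sketch of the viability computation relative to the control wells, assuming the standard absorbance-ratio formula stated in the text (the blank correction is an optional generality of the sketch):

def cell_viability_percent(od_sample, od_control, od_blank=0.0):
    # Viability (%) = (OD_sample - OD_blank) / (OD_control - OD_blank) * 100
    return 100.0 * (od_sample - od_blank) / (od_control - od_blank)

print(cell_viability_percent(0.81, 0.90))   # 90.0 (illustrative absorbances)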
Swelling Behaviors in SBF. Absorption is a significant process, and the swelling study performed here reveals the stability of the samples. The as-developed samples (HAp, Ga-HAp, nGa-HAp, nGa-HAp/PDFE, and nGa-HAp/PDFE/PVK composites) were subjected to a swelling test. The samples were soaked in 10 mL of SBF solution, prepared according to Kokubo's protocol. 46 The swelling test was performed for all of the samples on different days (0, 2, 4, 6, 8, 10, 12, and 14 days) of immersion. All of the samples were weighed before and after immersion. The immersed samples were removed, the adsorbed solution was removed by air drying at 36°C, and the samples were weighed again. The difference in the weight of the samples before and after immersion was expressed as a swelling percentage (%) as follows: 47 swelling (%) = [(W_w − W_d)/W_d] × 100, where W_w and W_d are the wet and dry weights of the samples, respectively. The experiment was performed in triplicate to estimate the swelling behavior, and the average was taken.
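A minimal sketch of the swelling computation just described; the triplicate weights below are illustrative numbers chosen to reproduce the ~52% reported for the ternary composite at day 14, not measured data:

def swelling_percent(w_wet, w_dry):
    # Swelling (%) = (W_w - W_d) / W_d * 100, as in the equation above.
    return 100.0 * (w_wet - w_dry) / w_dry

# Hypothetical triplicate (wet, dry) weights in grams, then averaged.
weights = [(1.52, 1.00), (1.49, 1.00), (1.55, 1.00)]
mean_swelling = sum(swelling_percent(w, d) for w, d in weights) / len(weights)
print("%.1f %%" % mean_swelling)   # 52.0 %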
Degradation Behaviors in SBF. The as-developed samples were immersed in SBF solution for different durations (0, 2, 4, 6, 8, 10, 12, and 14 days) to investigate their degradation behavior. The weight of each sample was measured and noted before immersion. After the specific time period, the samples were carefully removed and weighed. The degradation ability of the samples was calculated using the following equation: 47 D (%) = [(W_o − W_d)/W_o] × 100, where D is the degradation ability in %, and W_o and W_d are the weights of the sample before and after degradation, respectively. The test was performed three times, and the average of the obtained values was taken.
Statistical Analysis. The biological data were evaluated by one-way analysis of variance (ANOVA, with Tukey's test for post hoc examination). The antibacterial activity data are presented as mean ± standard deviation. Differences between samples were considered statistically significant at probability values of P < 0.05. | 2021-09-12T05:23:06.923Z | 2021-08-24T00:00:00.000 | {
"year": 2021,
"sha1": "77e8c0dc41a30160d89da55fa4365d447203d77d",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.1c02186",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77e8c0dc41a30160d89da55fa4365d447203d77d",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7892907 | pes2o/s2orc | v3-fos-license | Climbing on Pyramids
A new approach is proposed for finding the "best cut" in a hierarchy of partitions by energy minimization. Said energy must be "climbing", i.e., it must be hierarchically and scale increasing. It encompasses separable energies and those composed under supremum.
Introduction
The present note 1 extends the results of [16], which themselves generalize some results of L. Guigues' PhD thesis [6] (see also [5]). In [6], any partition, or partial partition, of the space is associated with a "separable energy", i.e., an energy whose value for the whole partition is the sum of the values over its various classes. Under this assumption, two problems are treated: 1. given a hierarchy of partitions and a separable energy ω, how to combine some classes of the hierarchy in order to obtain a new partition that minimizes ω?
2. when ω depends on an integer j, i.e., ω = ω_j, how to generate a sequence of minimum partitions that is increasing in j, which therefore should form a minimum hierarchy?
Though L. Guigues exploited linearity and affinity assumptions to build his models on, it is not certain that they are the true cause of the properties he found. Indeed, for solving problem 1 above, an alternative and simpler condition of increasingness is proposed in [16]. After additional developments, it leads to Theorem 4 of this paper. The second question, that of a minimum hierarchy, which was not treated in [16], is the concern of Sections 3 to 5 of the text. The main results are Theorem 12 and the new algorithm developed in Section 5, more general but simpler than that of [6]. It is followed by Sections 6 and 7 about the additive and the ∨-composed energies, respectively. They show that Theorem 12 applies to linear energies (e.g., Mumford and Shah, Salembier and Garrido), as well as to several useful nonlinear energies (Soille, Zanoguera) [7], [3].
Hierarchy of partitions (reminder)
The space under study (Euclidean, digital, or other) is denoted by E, and the set of all partitions of E by D_0(E). A convenient notion here is that of a partial partition, due to C. Ronse [11]. When we associate with a set A ∈ P(E) a partition π(A) of A and nothing outside A, then π(A) is called a partial partition of E of support A. The family of all partial partitions of set E is denoted by D(E), or simply by D when there is no ambiguity.
Finite hierarchies of partitions appeared initially in taxonomy, for classifying objects. One can quote in particular the works of J.P. Benzécri [2] and of E. Diday [4]. We owe to the first author the theorem linking ultrametrics with hierarchies, namely the equivalence between statements 1 and 3 in Theorem 2 below. Hierarchies H of partitions usually derive from a chain of segmentations of some given function f on set E, i.e., from a stack of scalar or vector images, a chain which then serves as the framework for further operators. We consider, here, function f and hierarchy H as two starting points, possibly independent. This results in the following definition:

Definition 1 Let D_0(E) be the set of all partitions of E, equipped with the refinement ordering. A hierarchy H of partitions π_i of E is a finite chain in D_0(E), i.e.,

H = {π_i, 0 ≤ i ≤ n}, π_0 ≤ π_1 ≤ … ≤ π_n, where π_0 is the partition of E into its single points and π_n = {E}. (1)
Let S_i(x) be the class of partition π_i of H at point x ∈ E. Denote by S the set of all classes S_i(x), i.e., S = {S_i(x), x ∈ E, 0 ≤ i ≤ n}. Expression (1) means that at each point x ∈ E the family of those classes S_i(x) of S that contain x forms a finite chain S_x in P(E), of nested elements from {x} to E. According to a classical result, a family {S_i(x), x ∈ E, 0 ≤ i ≤ n} of indexed sets generates the classes of a hierarchy iff any two of these classes are either disjoint or nested, i.e.,

S, S′ ∈ S and S ∩ S′ ≠ ∅ ⇒ S ⊆ S′ or S′ ⊆ S. (2)

Figure 2: The initial image has been transformed by increasing alternated connected filters. They result in partitions into flat zones that increase from left to right.

Figure 3: Left, hierarchical tree; right, the corresponding space structure. S_1 and S_2 are the nodes sons of E, and H(S_1) and H(S_2) are the associated sub-hierarchies. π_1 and π_2 are cuts of H(S_1) and H(S_2) respectively, and π_1 ⊔ π_2 is a cut of E.

A hierarchy may be represented in space E by the saliencies of its frontiers, as depicted in Figure 1. Another representation, more adapted to the present study, emphasizes the classes rather than their edges, as depicted in Figure 2. Finally, one can also describe it, in a more abstract manner, by a family tree where each node of bifurcation is a class S, as depicted in Figure 3. The classes of π_{i−1} at level i − 1 which are included in S_i(x) are said to be the sons of S_i(x). Clearly, the set of the descendants of each S forms in turn a hierarchy H(S) of summit S, which is included in the complete hierarchy H = H(E).
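A minimal sketch of the nesting condition (2), with classes represented as frozensets of points of E; the example tree is hypothetical:

from itertools import combinations

def is_hierarchy(classes):
    # Rel. (2): any two classes must be either disjoint or nested.
    for s, t in combinations(classes, 2):
        if (s & t) and not (s <= t or t <= s):
            return False
    return True

leaves = [frozenset({1}), frozenset({2}), frozenset({3})]
mids = [frozenset({1, 2})]
top = [frozenset({1, 2, 3})]
print(is_hierarchy(leaves + mids + top))                     # True
print(is_hierarchy([frozenset({1, 2}), frozenset({2, 3})]))  # False (overlap)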
The two zones H(S_1) and H(S_2), drawn in Figure 3 in small dotted lines, are examples of such sub-hierarchies. The following theorem [16] makes the hierarchical structure more precise. This result shows that the connective segmentation approach is inefficient for hierarchies, and orients us towards the alternative method, which consists in optimizing an energy.
3 Optimum partitioning of a hierarchy
Cuts in a hierarchy
Following L. Guigues [6] [5], we say that any partition π of E whose classes are taken in S defines a cut in hierarchy H. The set of all cuts of E is denoted by Π(E) = Π. Every "horizontal" section π_i(H) at level i is obviously a cut, but several levels can cooperate in a same cut, such as π(S_1) and π(S_2), drawn with thick dotted lines in Figure 3. Similarly, the partition π(S_1) ⊔ π(S_2) generates a cut of H(E). The symbol ⊔ is used here to express that groups of classes are concatenated. It means that given two partial partitions π(S_1) and π(S_2) having disjoint supports, π(S_1) ⊔ π(S_2) is the partial partition whose classes are either those of π(S_1) or those of π(S_2).
Similarly, one can define cuts inside any sub-hierarchy H(S) of summit S. Let Π(S) be the family of all cuts of H(S). The union of all these families, when node S spans hierarchy H, is denoted by Π(H) = ∪{Π(S), S ∈ S(H)}.
Although the set Π(H) does not regroup all possible partial partitions with classes in S, it contains the family Π(E) of all cuts of H(E). The hierarchical structure of the data induces a relation between the family Π(S) of the cuts of node S and the families Π(T_1), …, Π(T_q) of the sons T_1, …, T_q of S. Since all expressions of the form ⊔{π(T_k); 1 ≤ k ≤ q}, with π(T_k) ∈ Π(T_k), define cuts of S, Π(S) contains the whole family

Π′(S) = {π(T_1) ⊔ … ⊔ π(T_q), π(T_k) ∈ Π(T_k), 1 ≤ k ≤ q},

plus the cut of S into a unique class, i.e., S itself, which is not a member of Π′(S). And as the other unions of several T_k are not classes listed in S, there is no other possible cut, hence

Π(S) = Π′(S) ∪ {S}. (4)
Cuts of minimum energy and h-increasingness
In the present context, an energy ω : D(E) → R+ is a non-negative numerical function over the family D(E) of all partial partitions of set E. The cuts of Π(E) ⊆ Π of minimum energy, or minimum cuts, are characterized under the assumption of hierarchical increasingness, or more shortly h-increasingness [16].

Definition 3 Let π_1 and π_2 be two partial partitions of same support, and π_0 be a partial partition disjoint from π_1 and π_2. An energy ω on D(E) is said to be hierarchically increasing, or h-increasing, in D(E) when, for all π_0, π_1, π_2 ∈ D(E) with π_0 disjoint from π_1 and π_2, we have

ω(π_1) ≤ ω(π_2) ⇒ ω(π_1 ⊔ π_0) ≤ ω(π_2 ⊔ π_0). (5)

An illustration of the meaning of implication (5) is given in Figure 4. When the partial partitions are embedded in a hierarchy H, Rel. (5) allows an easy characterization of the cuts of minimum energy of H, according to the following property, valid for the class H of all finite hierarchies on E.

Theorem 4 Let H ∈ H be a finite hierarchy, and ω be an energy on D(E). Consider a node S of H with p sons T_1, …, T_p of minimum cuts π*_1, …, π*_p. The cut of minimum energy of node S is either the cut

π*(S) = π*_1 ⊔ … ⊔ π*_p (6)

or the partition of S into the unique class S itself, if and only if ω is h-increasing.
Proof. We first prove that the condition is sufficient. The h-increasingness of the energy implies that the cut (6) has the lowest energy among all the cuts of type π(T_1) ⊔ … ⊔ π(T_p) of Π′(S) (it does not follow that it is unique). Now, from the decomposition (4), every cut of S is either an element of Π′(S) or S itself. Therefore, the set formed by the cut (6) and S contains at least one minimum cut of S.
We will prove that the h-increasingness is necessary by means of a counter-example. Consider the hierarchies of n levels in E = R², and associate an energy with the lengths of the frontiers as follows:

ω({S}) = (1/2)|∂S|, ω(T_1 ⊔ … ⊔ T_q) = (1/2) Σ_u |∂T_u|, (7)

where {T_u, 1 ≤ u ≤ q} are the sons of node S. Calculate the minimum cut of the three-level partition depicted in Fig. 5a, b, and c. The size of the square is 1, and one must take half the length for the external edges. We find ω(T_1 ⊔ T_2) = 3 and ω(S) = 2. Hence S is the minimum cut of its own sub-hierarchy, and so is S′. At the next level we have ω(S ⊔ S′) = 4.

The condition of h-increasingness (5) opens onto a broad range of energies, and is easy to check. It encompasses the case of the separable energies [6] [14], as well as energies composed by suprema [1] [17] [19]. Computationally, it yields the following algorithm of Guigues, sketched in code after this list:
• scan in one pass all nodes of H according to an ascending lexicographic order;
• determine at each node S a temporary minimum cut of H by comparing the energy of S to that of the concatenation of the temporary minimum cuts of the (already scanned) sons T_k of S.
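A minimal sketch of this ascending pass for an h-increasing energy, following Theorem 4: at each node, the concatenation of the sons' optimal cuts is compared with the node itself. The tree and the separable (hence h-increasing) energy below are toy assumptions:

def leaf(points):
    return {"class": frozenset(points), "sons": []}

root = {"class": frozenset({1, 2, 3, 4}),
        "sons": [{"class": frozenset({1, 2}), "sons": [leaf({1}), leaf({2})]},
                 leaf({3, 4})]}

def best_cut(node, energy):
    # Leaves have no sons: their only cut is the class itself.
    if not node["sons"]:
        return [node["class"]]
    concat = [c for son in node["sons"] for c in best_cut(son, energy)]
    # Theorem 4: compare the concatenation of the sons' optima with {S}.
    return concat if energy(concat) < energy([node["class"]]) else [node["class"]]

def energy(classes):
    # Toy separable energy: fixed cost per class plus a size term.
    return sum(2.0 + 0.5 * len(c) for c in classes)

print(best_cut(root, energy))   # here the coarse cut [frozenset({1, 2, 3, 4})]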
Single minimum cuts
It may happen that in the family Π(S) of Relation (4) a minimum cut of Π′(S) has the same energy as that of S. This event introduces two solutions, which are then carried over the whole induction. And since such a doublet can occur at any node S of H, the family M = M(H) of all minimum cuts may be very large. However, M turns out to be structured as a complete lattice for the refinement ordering, where the sup (resp. the inf) is obtained by taking the union (resp. the intersection) of the classes [16].
As the cardinal of Π is finite, m is strictly positive. Therefore, one can find an ε such that 0 < ε < m, and state the following (Proposition 5), with 0 < ε < m: the sum ω + ω′ is h-increasing and associates a unique minimum cut with each sub-hierarchy H(S). When ω[π*(S)] < ω[{S}], the minimum cut for ω + ω′ is π*(S), and it is {S} itself when {S} and π*(S) have the same ω-energy.
Note that the impact of ω′ is reduced to the case of equality ω[π*(S)] = ω[{S}], and that ω + ω′ can be taken arbitrarily close to ω, but different from it. Proposition 5 turns out to be a particular case of the more general, and more useful, result:

Corollary 6 Define the additive energy ω′ by the relations. Then ω + ω′ is h-increasing and always leads to a unique optimum cut.
Proof. When ω[π(S)] ≤ ω_0, we meet again the previous Proposition 5. When ω[π(S)] > ω_0, we still have the triple distinction of the previous proof, the only difference being that cases 2 and 3 are inverted, which completes the proof.
The result (8) is a top-down property:

Proposition 7 Let π*(E) be the single minimum cut of a hierarchy H w.r.t. a h-increasing energy ω, and let S be a node of H. If π*(E) meets the sub-hierarchy H(S) of summit S, then the restriction π*(S) of π*(E) to H(S) is the single minimum cut of the sub-hierarchy H(S).
Generation of h-increasing energies
As we saw, the energy ω : D(E) → R+ is defined on the family D(E) of all partial partitions of E. An easy way to obtain an h-increasing energy consists in defining it, first, over all sets S ∈ P(E), considered as one-class partial partitions {S}, and then in extending it to partial partitions by some law of composition. The h-increasingness is then introduced by the law of composition, and not by ω[P(E)]. The first two modes of composition which come to mind are, of course, addition and supremum, and indeed we can state:

Proposition 8 Let E be a set and ω : P(E) → R+ an arbitrary energy defined on P(E), and let π ∈ D(E) be a partial partition of classes {S_i, 1 ≤ i ≤ n}. Then the two extensions of ω to the partial partitions D(E),

ω(π) = Σ{ω(S_i), 1 ≤ i ≤ n} and ω(π) = ∨{ω(S_i), 1 ≤ i ≤ n},

are both h-increasing.

We shall study these two models in Sections 6 and 7, when they depend on a parameter leading to multiscale structures. A number of other laws are compatible with h-increasingness: instead of the supremum and the sum one could use the infimum, the product, the difference sup-inf, the quadratic sum, and their combinations. Moreover, one can make ω depend on more than one class, on the proximity of the edges, on another hierarchy, etc. A small illustration of the two compositions follows.
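The two compositions of Proposition 8 in code; the set energy ω = class cardinality is a toy choice:

def omega_sum(omega, pi):
    # Additive composition over the classes of a partial partition.
    return sum(omega(s) for s in pi)

def omega_sup(omega, pi):
    # Supremum composition over the classes.
    return max(omega(s) for s in pi)

omega = len                                   # toy set energy: class "area"
pi = [frozenset({1, 2}), frozenset({3})]
print(omega_sum(omega, pi), omega_sup(omega, pi))   # 3 2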
Structure of the h-increasing energies
We now analyze how different h-increasing energies interact on a same hierarchy. The family Ω of all mappings ω : D → R+ forms a complete lattice, where the supremum and infimum are taken pointwise, (∨ω_i)(π) = ∨{ω_i(π)} and (∧ω_i)(π) = ∧{ω_i(π)}, and whose extrema are ω(π) ≡ 0 and ω(π) ≡ +∞. What can be said about the subclass Ω′ ⊆ Ω of the h-increasing energies? The class Ω′ is obviously closed under addition and multiplication by positive scalars, i.e., ω, ω′ ∈ Ω′ and λ, μ ≥ 0 imply λω + μω′ ∈ Ω′.
Note that the infimum cut π*_{∧ω_i} related to the energy ∧ω_i is not the infimum ∧π*_{ω_i} of the minimum cuts generated by the ω_i. It has to be computed directly (dual statement for the supremum).
Climbing energies
The usual energies are often given by finite sequences {ω_j, 1 ≤ j ≤ p} that depend on a positive index, or parameter, j. Therefore, the processing of hierarchy H results in a sequence of p optimum cuts π*_j, of labels 1 ≤ j ≤ p. A priori, the π*_j are not ordered, but if they were, i.e., if j ≤ k ⇒ π*_j ≤ π*_k, j, k ∈ J, then we should obtain a nice progressive simplification of the optima. We now seek the conditions that permit such an increasingness.
Consider a finite family of energies {ω_j, 1 ≤ j ≤ p} on all partial partitions D(E) of set E, and apply these energies to the partial partitions Π(H) of hierarchy H (Relation (3)). The family {ω_j}, not totally arbitrary, is supposed to satisfy the following condition of scale increasingness: 2

Definition 10 A family of energies {ω_j, 1 ≤ j ≤ p} on D(E) is said to be scale increasing when for j ≤ k, each support S ∈ S, and each partition π ∈ Π(S), we have

ω_j({S}) ≤ ω_j(π) ⇒ ω_k({S}) ≤ ω_k(π). (13)

In the case of a hierarchy H, relation (13) means that if S is a minimum cut w.r.t. energy ω_j for a partial hierarchy Π(S), then S remains a minimum cut of Π(S) for all energies ω_k, k ≥ j. As j increases, the ω_j's preserve the sense of energetic differences between the nodes of hierarchy H and their partial partitions. In particular, all energies of the type ω_j = jω are scale increasing.
Axiom (13) compares two energies at the same level of H, whereas axiom (5) allows us to compare a same energy at two different levels. Therefore, the most powerful energies should be those which combine scale and h-increasingness, i.e.:

Definition 11 A family of energies {ω_j, 1 ≤ j ≤ p} over D(E) is said to be a climbing energy when i) each ω_j is h-increasing, ii) each ω_j admits a single minimum cut on every sub-hierarchy, and iii) the family {ω_j} is scale increasing.
Under these three assumptions, the climbing energies satisfy the very nice property of ordering the minimum cuts with respect to the parameter j, namely:

Theorem 12 Let {ω_j, 1 ≤ j ≤ p} be a family of energies, and let π*_j (resp. π*_k) be the minimum cut of hierarchy H according to the energy ω_j (resp. ω_k). The family {π*_j, 1 ≤ j ≤ p} of the minimum cuts generates a unique hierarchy H* of partitions, i.e.,

j ≤ k ⇒ π*_j ≤ π*_k, 1 ≤ j, k ≤ p, (14)

if and only if the family {ω_j} is a climbing energy.
Proof. Assume that axiom iii) of a climbing energy is satisfied, and denote by S_j and S_k the two classes of π*_j and π*_k at a given point x. According to Rel. (2), we must have either S_j ⊆ S_k or S_k ⊂ S_j. We will prove that the second inclusion is impossible. Suppose that S_k ⊂ S_j. Then the restriction of the minimum cut π*_k to S_j generates a cut, π_0 say, of S_j. According to Proposition 7, which involves axioms i) and ii) of a climbing energy, the restriction π_0 is in turn minimum for the energy ω_k over Π(S_j), i.e., ω_k(π_0) < ω_k(S_j). This implies, by scale increasingness, that ω_j(π_0) < ω_j(S_j) (here Relation (13) has been read from right to left). But this inequality contradicts the fact that S_j is a minimum cut for its own hierarchy H(S_j). Therefore the inclusion S_k ⊂ S_j is rejected, and the alternative inclusion S_j ⊆ S_k is satisfied whatever x ∈ E, which results in π*_j ≤ π*_k. Moreover, because of the single-minimum-cut axiom, each π*_j being unique, so is the whole hierarchy H*.
The "only if" statement will be proved by means of a counter-example. The notation is the same as in system (7) but we add a term for the areas and we replace the length ∂T u of the frontier of each T u by the length ∂ hor T u (resp. ∂ vert T u ) of the horizontal (resp. vertical) projection of the said frontiers if j ≤ j 0 , (resp. if j > j 0 ).
Relation (14) has been established by L. Guigues in his PhD thesis [6] for affine and separable energies, called by him climbing energies. However, the core of assumption (13) concerns the propagation of energy through the scales (1…p), rather than affinity or linearity, and allows non-additive laws (see Section 7). In addition, the new climbing axioms of Definition 11 lead to the algorithms of the next section, much simpler than that of [6].
The scale sequence has been supposed finite, i.e., with 1 ≤ j ≤ p. We could replace j by a positive parameter λ ∈ R+. But the induction method for finding the optimal cut requires a finite hierarchy H; therefore, the number of scale parameters actually involved in the processing of H will always be finite.
Implementation via a pedagogical example
We now describe step by step two algorithms for generating a hierarchy of minimum cuts. The input hierarchy H comprises the four levels and two energies {ω_1, ω_2} depicted in Figure 6. The energies of the classes are given along column I, left for λ = 1 and right for the energies that are different for λ = 2. They are composed by supremum (the energies of the partial partitions {a, b, c} and {e, f} are not decomposed).

Figure 6: Column I: initial hierarchy and associated energies (left, for λ = 1; right, the changes for λ = 2); column II: reading order; columns III and IV: progressive extraction of the minimum cuts for λ = 1 and λ = 2; the two final minima have bold frames.

If class S is a temporary minimum for ω_λ, then it remains a temporary minimum for all ω_μ, μ ≥ λ. The scale of apparition λ+(S) = inf{λ | S temporary minimum for ω_λ} is the smallest λ for which S is a temporary minimum. A class S which is covered by a temporary minimum class Y at scale μ remains covered by temporary minima at all scales ν ≥ μ. The scale of removal λ−(S) = min{λ+(Y), Y ∈ H, S ⊂ Y} is the smallest μ for which S is covered. Therefore, if λ+(S) < λ−(S), then class S ∈ H belongs to the minimum cuts π*_λ for all λ in the interval of persistence [λ+(S), λ−(S)]. If λ−(S) ≤ λ+(S), then S, being non-persistent, never appears as a class of a minimum cut. This happens to class g, for which λ−(g) = 1 and λ+(g) = 2.
When the two bounds λ+(S) and λ−(S) are known for all classes S ∈ H, the hierarchy of the minimum cuts π*_λ is completely determined. They are calculated in two passes, as follows (see the sketch after the second pass).
1. Take the nodes following their labels (ascending pass). For node S, calculate the two energies ω_λ of S and of its sons π(S), for λ = 1, 2. Stop at the first λ such that ω_λ(S) ≤ ω_λ(π(S)); this value is nothing but λ+(S). Continue for all nodes until the top of the hierarchy. This pass provides the values λ+(S) for all S ∈ H. In Figure 6 the sequence of the two energies is climbing: when a class is a temporary minimum for λ = 1, it is also one for λ = 2. Two changes occur from λ = 1 to λ = 2: g and h now have the same energies as {a, b, c} and {e, f}, respectively.
2. The second pass progresses top-down. Each class Y is compared to its sons Z_1, …, Z_i, and one allocates to each son Z_i the new value min{λ−(Y), λ−(Z_i)}. At the end of the scan, all values λ−(S) are known.
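A simplified sketch of the two passes, under the assumption (as in Figure 6) that energies are composed by supremum and that the candidate cut of a node is simply the partition into its sons; node names and energy tables are hypothetical:

def lam_plus(node, omega, lams):
    # Ascending pass: smallest lambda at which S is a temporary minimum.
    for son in node["sons"]:
        lam_plus(son, omega, lams)
    if not node["sons"]:
        node["lam+"] = lams[0]          # a leaf is trivially a minimum
        return
    for lam in lams:
        if omega(node, lam) <= max(omega(s, lam) for s in node["sons"]):
            node["lam+"] = lam
            return
    node["lam+"] = None                 # never a temporary minimum

def lam_minus(node, best=None):
    # Descending pass: smallest lambda at which an ancestor covers S.
    node["lam-"] = best
    mine = node["lam+"]
    if best is None:
        nxt = mine
    elif mine is None:
        nxt = best
    else:
        nxt = min(best, mine)
    for son in node["sons"]:
        lam_minus(son, nxt)

tree = {"name": "S", "sons": [{"name": "a", "sons": []},
                              {"name": "b", "sons": []}]}
ENERGY = {"S": {1: 3, 2: 1}, "a": {1: 1, 2: 1}, "b": {1: 2, 2: 2}}
omega = lambda n, lam: ENERGY[n["name"]][lam]
lam_plus(tree, omega, [1, 2])
lam_minus(tree)
print(tree["lam+"], tree["sons"][0]["lam+"], tree["sons"][0]["lam-"])  # 2 1 2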
Additive energies
The additive mode was introduced and studied by L. Guigues under the name of separable energies [6]. All classes S of S are supposed to be connected. Denote by {T_u, 1 ≤ u ≤ q} the q sons which partition the node S, i.e., π(S) = T_1 ⊔ … ⊔ T_q. Provide the simply connected sets of P(E) with an energy ω, and extend it from P(E) to the set D(E) of all partial partitions by using the sums

ω(π) = Σ{ω(T_u), 1 ≤ u ≤ q}. (16)

All separable energies ω are clearly h-increasing on any hierarchy, since one can decompose the second member of implication (5) into ω(π_1 ⊔ π_0) = ω(π_1) + ω(π_0) and ω(π_2 ⊔ π_0) = ω(π_2) + ω(π_0). However, they do not always lend themselves to multiscale structures, and a supplementary assumption of affinity has to be added [6], by putting

ω_j(A) = ω_µ(A) + λ_j ω_∂(A), A ∈ P(E), (17)

where ω_µ is a goodness-of-fit term, ω_∂ a regularization one, and λ_j ≥ 0 an increasing function of j. The term ω_µ is associated with the interior of A and ω_∂ with its boundary. In such an affine energy, the function ω_∂ must be c-additive on the boundary arcs F, in the sense of integral geometry, i.e., for a decomposition F = F_1 ∪ F_2 ∪ … ∪ F_i into disjoint arcs, one must have ω_∂(F) = Σ_i ω_∂(F_i), since most of the arcs F_1, F_2, …, F_i of the boundaries are shared between two adjacent classes. The passage "set → partial partition" can then be obtained by the summation (16). One classically takes for ω_∂ the arc length function (e.g., in Rel. (20)), but it is not the only choice. Below, one of the examples by Salembier and Garrido about thumbnails uses ω_∂(S) = 1.
One can also think of another ω_∂(A), one which reflects the convexity of A.
The axiom (13) of scale increasingness involves only increments of the energy ω, which suggests the following obvious consequence:

Proposition 13 If a family {ω_j, 1 ≤ j ≤ p} of energies is scale increasing, then any family {ω_j + ω_0, 1 ≤ j ≤ p}, where ω_0 is an arbitrary energy over Π which does not depend on j, is in turn scale increasing.
Additive energy and convexity
Consider, indeed, in R² a compact connected set X without holes, and let dα be the elementary rotation of the outward normal along the element du of the frontier ∂X. As the radius of curvature R equals du/dα, and as the total rotation of the normal around ∂X equals 2π, we have ∮_{∂X} du/R(u) = ∮_{∂X} dα = 2π. When dealing with partitions, the distinction between outward and inward vanishes, but the parameter n(X) = (1/2π) ∮_{∂X} du/|R(u)| still makes sense. It reaches its minimum 1 when the set X is convex, and increases with the degree of concavity; for a starfish with 5 pseudopodia, its value is around 5. Now n(X) is c-additive for the open parts of contours, and therefore it can participate as a supplementary term in an additive energy. In digital implementation, the angles between contour arcs must be treated separately (since c-additivity applies only to the open parts).
Mumford and Shah energy
Let π(S) be the partition of a summit S into its q sons {T_u, 1 ≤ u ≤ q}, i.e., π(S) = T_1 ⊔ … ⊔ T_q. The classical Mumford and Shah energy ω_j on π(S), w.r.t. a function f, comprises two terms [8]. The first one sums up the quadratic differences between f and its average m(T_u) in the various T_u, and the second term weights by λ_j the lengths |∂T_u| of the frontiers of all T_u, i.e.,

ω_j(π(S)) = Σ_{u=1}^{q} [ ∫_{T_u} (f(x) − m(T_u))² dx + λ_j |∂T_u| ], (20)
where the weight λ_j is a numerical increasing function of the level number j. Both increasingness relations (5) and (13) are then satisfied.

Figure 7: Hierarchy of Fig. 1c, for two compression rates of 25 and 55 applied to the luminance (a and b, respectively); c) for a compression rate of 55 applied to the chrominance.
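A direct numerical reading of energy (20) on a one-dimensional toy signal; the constant perimeter term standing in for |∂T_u| is an assumption of the sketch:

def mumford_shah(partition, f, perimeter, lam):
    total = 0.0
    for T in partition:
        m = sum(f[x] for x in T) / len(T)           # class mean m(T_u)
        total += sum((f[x] - m) ** 2 for x in T)    # goodness-of-fit term
        total += lam * perimeter(T)                 # regularization term
    return total

f = [0.0, 0.1, 0.9, 1.0]
per = lambda T: 1.0                                 # toy constant frontier cost
print(mumford_shah([[0, 1], [2, 3]], f, per, lam=0.5))  # 1.01
print(mumford_shah([[0, 1, 2, 3]], f, per, lam=0.5))    # 1.32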
Additive energies and Lagrangian
The example of additive energy that we now develop is an extension of the creation of thumbnails by Ph. Salembier and L. Garrido [14] [15], itself based on Equation (20). We aim to generate "the best" simplified version of the image f of Fig. 1, in its color version of components (r; g; b), for the compression rate ρ ≃ 25. The bit depth of f is 24 and its size is 320 × 416 pixels. The hierarchy H of f is that depicted, via its saliency map, in Fig. 1c. It has been obtained from the luminance l = (r + g + b)/3. In each class S of H, the reduction consists in replacing the function f by its mean m(l) in S. The quality of this approximation is estimated by the L² norm, i.e.,

ω_µ(S) = Σ_{x∈S} [ l(x) − m(S) ]². (21)
If the coding cost for a frontier element is c, that of the whole class S becomes

ω_∂(S) = 24 + c |∂S|, (22)

with 24 bits for m(S), and we take c = 2 for the elementary cost. The total energy of a cut π is thus written ω_j(π) = ω_µ(π) + λ_j ω_∂(π). By applying Lagrange's theorem, we observe that the problem of finding the minimum of ω_µ(π) under a constraint ω_∂(π) ≤ k fixed by the compression rate ρ ≃ 25 implies that the Lagrangian ω_µ(π) + λ_j ω_∂(π) is a minimum. In this equation, the level j is unknown, but as the ω_j are multiscale, we can easily calculate the sequence {π*_j, 1 ≤ j ≤ p} of the minimum cuts. Moreover, the term ω_∂(π) itself turns out to be a decreasing function of j and λ_j, so that the solution of the Lagrangian is the π*_j whose term ω_∂(π*_j) is the greatest one smaller than k. It is depicted in Fig. 7a (in a black and white version).
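A minimal sketch of this budget selection: among the climbing family of minimum cuts, pick the scale whose boundary-coding cost ω_∂ is the greatest one still under the budget k. The cuts and costs below are illustrative:

def pick_cut_for_budget(cuts, budget):
    # cuts: {j: (cut, omega_d(cut))}; omega_d decreases with j.
    feasible = {j: cost for j, (cut, cost) in cuts.items() if cost <= budget}
    j_star = max(feasible, key=feasible.get)   # greatest cost under budget
    return j_star, cuts[j_star][0]

cuts = {1: ("fine cut", 40000), 2: ("medium cut", 22000), 3: ("coarse cut", 9000)}
print(pick_cut_for_budget(cuts, budget=25000))   # (2, 'medium cut')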
Classically, one reaches the Lagrangian minimum value by means of a system of partial derivatives. Now, remarkably, the present approach replaces the computation of derivatives by a climbing. Moreover, we have at hand, at once, all best cuts for all compression rates. If we take ρ ≃ 55 for example, we find the image of Fig. 7b, whose partition is located at a higher level in the same pyramid of the best cuts. L. Guigues was the first to point out this nice property [6].
There is no particular reason to choose the same luminance l both for generating the pyramid and, later, as the quantity involved in the quality (21) to minimize. In the RGB space, a colour vector x⃗ = (r; g; b) can be decomposed into two orthogonal projections: i) on the grey axis, namely l⃗ of components (l/3; l/3; l/3), and ii) on the chromatic plane orthogonal to the grey axis at the origin, namely c⃗. The optimization is obtained by replacing the luminance l(x) in (21) by the module |c⃗(x)| of the chrominance at point x. We now find as best cut the segmentation depicted in Fig. 7c, where, for the same compression rate ρ ≃ 55, color details are better rendered (e.g., right bottom), but black and white parts are worse (e.g., the letters on the labels).
Connective segmentation under constraint
This method was proposed by P. Soille and J. Grazzini in [17] and [18] with several variants; it is re-formulated by C. Ronse in a more general framework in [13]. Start from a hierarchy H and a numerical function f. Define the energy ω_j for a class S by

ω_j(S) = 0 when sup{f(x), x ∈ S} − inf{f(x), x ∈ S} ≤ c_j, and ω_j(S) = 1 when not,

where c_j is a given bound, and extend it to partitions by ∨-composition. The class at point x of the largest partition of minimum energy is given by the largest S ∈ S that contains x and such that the amplitude of variation of f inside S is ≤ c_j. When the energy ω_j of a father equals that of its sons, one keeps the father when ω_j = 0, and the sons when not. As the bound c_j increases, the {ω_j} form a climbing energy, previously referred to in Corollary 6 and depicted by the pedagogical example of Figure 8.
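The range criterion above in code, with the ∨-composition over the classes of a cut; the toy function f is hypothetical:

def omega_range(cut, f, c):
    # A cut has energy 0 iff every class has amplitude of f at most c.
    def w(S):
        vals = [f[x] for x in S]
        return 0 if max(vals) - min(vals) <= c else 1
    return max(w(S) for S in cut)

f = {1: 0.2, 2: 0.3, 3: 0.9}
print(omega_range([{1, 2}, {3}], f, c=0.2))   # 0
print(omega_range([{1, 2, 3}], f, c=0.2))     # 1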
Lasso
This algorithm, due to F. Zanoguera et al. [19], appears also in [7]. An initial image has been segmented into α-flat zones, which generates a hierarchy as the slope α increases. The optimization then consists in drawing manually a closed curve around the object to segment. If A designates the inside of this lasso, then we take for energy the function

ω(S) = 0 when S ⊆ A, S ∈ S, and ω(S) = 1 when not, (23)

and we go from classes to partitions by ∨-composition of the energies. The largest cut that minimizes ω is depicted in Figure 9c. We see that the resulting contour follows the edges of the petals. Indeed, a segmented class can jump over this high gradient only for α large enough, and then this class is rejected because it spreads out beyond the limit of the lasso. By taking a series {A_j, 1 ≤ j ≤ p} of increasing lassos A_j, we make the energy ω of Relation (23) climbing, and Theorem 12 applies. Unlike the previous case, where the λ_j are scalar numbers, the multiscale parametrization is now given by a family of increasing sets.
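Similarly, the lasso energy (23) under ∨-composition; the sets below are toy examples:

def omega_lasso(cut, A):
    # A cut is admissible (energy 0) iff all classes stay inside A.
    return max(0 if S <= A else 1 for S in cut)

A = frozenset({1, 2, 3, 4})
print(omega_lasso([frozenset({1, 2}), frozenset({3})], A))   # 0
print(omega_lasso([frozenset({1, 2, 5})], A))                # 1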
Conclusion
Other examples can be given, concerning colour imagery in particular [1]. At this stage, the main goal is to extend the above approach to vector data, and more generally to GIS type data. | 2012-06-13T17:16:14.000Z | 2012-03-15T00:00:00.000 | {
"year": 2012,
"sha1": "2abafe16610a3bfb42b05ae6a7b8a84cbd03b9c4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "35c19e054d81a17743e4d50c85ef55f09a6d3278",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
250336457 | pes2o/s2orc | v3-fos-license | Neural Correlates Underlying Social-Cue-Induced Value Change
As humans are social beings, human behavior and cognition are fundamentally shaped by information provided by peers, making human subjective value for rewards prone to manipulation by perceived social information. Even subtle nonverbal social information, such as others' eye gaze, can influence value assignment, such as food value. In this study, we investigated the neural underpinnings of how gaze cues modify food value in participants (both genders) by means of functional magnetic resonance imaging. During the gaze-cuing task, food items were repeatedly presented either while others looked at them or while they were ignored by others. We determined participants' food values by assessing their willingness to pay before and after a standard gaze-cuing training. Results revealed that participants were willing to pay significantly more for food items that were attended to by others compared with the unattended food items. Neural data showed that differences in subjective values between the two conditions were accompanied by enhanced activity in the inferior frontal gyrus, middle temporal gyrus, and caudate after food items were attended to. Furthermore, the functional connectivity between the caudate and the angular gyrus precisely predicted the individual differences in the preference shift. Our results unveil the key neural mechanism underlying the influence of social cues on the subjective value of food and highlight the crucial role of social context in shaping subjective value for food rewards in humans. SIGNIFICANCE STATEMENT We investigated how social information, like others' gaze toward foods, affects individuals' food value. We found that individuals more often choose food items that were looked at by another person compared with food items that were ignored. Using neuroimaging, we showed that this increased value for attended-to food items was associated with higher brain activity in the inferior frontal gyrus, middle temporal gyrus, and caudate. Furthermore, functional connectivity between the caudate and the angular gyrus was associated with individual differences in values for food items that were attended to by others versus being ignored. These findings provide novel insights into how the brain integrates social information into food value and could suggest possible interventions, like using gaze cuing to promote healthier food choices.
Introduction
Imagine your lunch break with colleagues at an exotic restaurant. When choosing the lunch menu, you might be interested in knowing what your colleagues are attending to. Can such subtle social information influence your food choice? Starting from early life, humans follow the direction of the gaze of another person (Shepherd, 2010), a phenomenon also known as joint attention. This gaze-following response shifts the observer's attention to the gazed-at target. Interestingly, objects that are looked at by another person are processed faster (shorter reaction times) than non-looked-at objects (Sato et al., 2007; Madipakkam et al., 2019). Importantly, the affective evaluations of objects are also influenced by social cues like gaze. When objects are attended to by another person, they are liked more than when ignored (Bayliss et al., 2007; Ulloa et al., 2015). This effect seems to be specifically elicited by gaze cues, as other cues (e.g., pointing fingers or arrows) did not result in a similar increase in liking evaluations (Bayliss et al., 2007). Therefore, gaze cues might act as social reinforcers by highlighting objects in the environment that are attractive to another person (Shimojo et al., 2003; Bayliss et al., 2007; Terenzi et al., 2021).
Supporting this idea, a study by Madipakkam et al. (2019) has shown that gaze cuing also alters the evaluation of food. Specifically, participants spent more money, revealed by willingness to pay (WTP), for foods that were repeatedly looked at by another person than unattended to foods. Interestingly, this value shift occurred in the absence of participants' conscious awareness because they were not aware of the contingency between the gaze cue and food item presentation. This study suggests that another's attention may implicitly amplify subjective value in decision-making and more generally that social context can influence individuals' food value (Madipakkam et al., 2019). Further, it provides insights for possible interventions. For example, using gaze cuing to influence food value might promote healthier food choices. Understanding the underlying neural underpinnings of this value change will help gain insight into its underlying cognitive mechanisms, which are completely unknown. This adds an important dimension to the behavioral findings by providing insights for possible neurobiological interventions based on either neuromodulation techniques or pharmacology.
Neuroimaging studies have shown that the orbitofrontal cortex (OFC; Plassmann et al., 2007) and the striatum (Knutson et al., 2007;Levy and Glimcher, 2012) have a crucial role in value processing (e.g., WTP for food), whereas brain regions including the dorsolateral prefrontal cortex and the anterior cingulate cortex may play a role in choice execution and choice conflict, respectively (Botvinick et al., 2004;Rangel and Hare, 2010;Terenzi et al., 2022). However, whether and how this neural network plays a role also in value changes driven by a gaze-cuing paradigm is unclear. In particular, the processing of information on the other's gaze direction (Ramezanpour and Thier, 2020), as well as memory-related mechanisms for value assignment may modulate the neural network underlying value computations (Schonberg and Katz, 2020).
In the present work, we used functional magnetic resonance imaging (fMRI) to examine the neural mechanisms of value changes attributed to social information by applying the gaze-cuing paradigm. First of all, based on studies showing gaze-cuing-induced value changes (Madipakkam et al., 2019; Terenzi et al., 2021), we hypothesized that participants' WTP for foods that are attended to by others increases compared with that for foods that are ignored. On a neural level, we hypothesized that this increased WTP is associated with higher brain activity in brain regions encoding the subjective value of rewards, such as the OFC and the striatum (Plassmann et al., 2007; Rolls, 2021). We expected to observe a change in these brain areas when comparing postevaluations versus pre-evaluations. Further, we explored how these brain regions change their functional connectivity with other brain regions during the encoding of such food value change. Finally, we examined whether these brain activations were already sensitive during the training phase.
Materials and Methods
Participants
Twenty-nine participants took part in the study. Two participants had incomplete data because of technical problems during the acquisition, and therefore their data were excluded from all the analyses, resulting in a sample size of 27 participants (19 female, mean age 24.7, SD 4.6). All participants had normal or corrected-to-normal vision. They had no history of neurologic and psychiatric disorders. Written informed consent was obtained from all participants before the study. The study was in accordance with the Declaration of Helsinki and approved by the ethics committee of the University of Lübeck.
Stimuli
Stimuli were 36 food pictures of Korean snack items that were not familiar to the participants. Face stimuli consisted of four neutral faces (two males) taken from the Radboud Face Database (Langner et al., 2010;Madipakkam et al., 2019). The three versions of each face stimulus were looking straight ahead, looking to the left side, or looking to the right side.
Design and procedure
The experiment consisted of three different phases: pre-evaluation, gaze cuing, and postevaluation (Fig. 1). fMRI data were collected during each phase.
The subjects received 10 Euros per hour for participation, as well as an additional 3 Euros to buy one of the snack items, randomly chosen at the end of the experiment. During the pre-evaluation phase, subjects indicated their WTP for each food item by means of the Becker-DeGroot-Marschak auction (Becker et al., 1964). This procedure allowed us to measure the individual's subjective value of the food items. Choices were made using a scale ranging from 0 to 3 Euros (in 20-cent increments). Each food item was presented three times (108 trials in total). However, only in one third of the trials (36 trials) were participants asked, following the presentation of the foods, to provide their WTP for the products. Ratings were self-paced.
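A minimal simulation of a single Becker-DeGroot-Marschak trial, assuming the standard mechanism (a counter-price is drawn at random and the item is bought at that price iff the stated bid is at least the price); the price grid follows the 0-3 Euro, 20-cent scale of the text:

import random

def bdm_trial(bid, rng=random):
    prices = [round(i * 0.20, 2) for i in range(16)]   # 0.00, 0.20, ..., 3.00
    price = rng.choice(prices)
    bought = bid >= price
    return price, bought                               # pays `price` if bought

print(bdm_trial(1.40))   # e.g. (0.60, True)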
After the pre-evaluation phase, participants performed the gazecuing training phase. Here, the food items were sorted into congruent, incongruent, and neutral conditions (12 food items in each category). Each trial started with a fixation cross (jittered duration between 2 and 6 s) followed by a neutral face looking straight ahead (jittered duration between 2 and 4 s). After that, the face made a gaze shift (0.5 s), either to the left or to the right, or continued to look straight ahead. Finally, a food item was presented on the same side of the gaze shift (congruent trials) or on the opposite side (incongruent trials). In neutral trials, the face continued looking straight ahead while the food item was randomly presented on the right or left side of the face stimulus. Participants were told to respond as quickly and accurately as possible by indicating where the food item was presented (left or right side of the face) via button press. When participants pressed the wrong button, the trial was counted as incorrect. There were six runs of 36 trials each (216 trials in total).
After the gaze-cuing training phase, participants' WTP for the same food products was measured again in the postevaluation phase. Finally, participants were asked to perform a discrimination task to test whether they were aware of the contingency between the direction of the gaze and the location of the food item. During the discrimination test, each of the 36 food items was shown on the screen together with a question asking the direction of the gaze of the face cue when that food was presented during the gaze-cuing training; the options to answer were "to the food" (congruent condition), "looked away" (incongruent condition), or "to me" (neutral condition). Participants could respond at their own pace using a mouse cursor to indicate their answer. Food items were presented in random order with an intertrial interval (ITI) of 0.5 s (Madipakkam et al., 2019; Fig. 2).
Data analyses
Behavioral data. Data were analyzed using R software (https://www.r-project.org/). The Shapiro-Wilk test was used to verify that the data were normally distributed. We combined the incongruent and neutral conditions so as to have all food items that were not looked at in one control condition. This control condition was used in the analyses because no differences emerged between the incongruent and the neutral condition in the change of the WTP ratings after the gaze-cuing training (t(26) = 2.09; p = 0.139). No differences emerged between these two conditions in a previous behavioral study using a similar paradigm either (Madipakkam et al., 2019). Thus, we report only the results using one single control condition.
To measure the change of the WTP after the gaze-cuing phase, for each food item, pre-evaluations were subtracted from the postevaluations. These scores were then normalized by the length of the rating scale as follows: 100 × (postevaluation − pre-evaluation)/length of scale. Finally, these normalized values were mean-centered. We then entered these values in a paired t test between the congruent and the control conditions.
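The normalization just described, in a short sketch; the pre/post values are hypothetical:

def wtp_change_scores(pre, post, scale_len=3.0):
    # Percentage change relative to the scale length, then mean-centering.
    raw = [100.0 * (b - a) / scale_len for a, b in zip(pre, post)]
    mean = sum(raw) / len(raw)
    return [r - mean for r in raw]

print(wtp_change_scores([0.4, 1.0, 2.0], [1.0, 1.0, 1.8]))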
For the gaze-cuing phase, response times and accuracy scores were entered in two separate paired t tests for the congruent versus control comparison.
Participants' performance in the discrimination task was assessed using a one-sample t test against the chance level of 33% (Madipakkam et al., 2019). Further, a Pearson's correlation analysis was performed between participants' WTP changes in the congruent versus control condition and their performance in the discrimination task. One subject did not perform the discrimination task because of technical problems.
To examine neural responses related to the gaze-cuing training, we first set up a general linear model (GLM) with four events (EVs): (1) an indicator for congruent trials, (2) an indicator for incongruent trials, (3) an indicator for neutral trials, and (4) an indicator for the face cue. This analysis was performed using food onsets. Further, six movement parameters were included as regressors of no interest. The regressors were convolved with the hemodynamic response function (HRF). To remove low-frequency signal drift, a high-pass filter (128 s) was applied. Next, we calculated for each participant the contrasts congruent > incongruent, congruent > neutral, and congruent > control (across incongruent and neutral), to examine brain activation related to the gaze-cuing training. For each of the first three EVs (congruent, incongruent, and neutral), an additional parametric regressor (i.e., the count, from one to six, of each picture across the six runs) was included to further assess the modulation effect of picture exposure frequency on brain activation. These contrast images were entered in a second-level random-effects analysis using a voxel-wise one-sample t test. When examining the brain imaging data related to the gaze-cuing training phase, four participants were excluded because their functional scans were incomplete owing to technical error, resulting in a sample of 23 participants for analyses of the gaze-cuing effects on brain activity.
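A sketch of how one such boxcar regressor might be built and convolved with a canonical double-gamma HRF; the HRF parameters, TR, and onsets below are generic assumptions, not the authors' exact pipeline:

import numpy as np
from scipy.stats import gamma

def hrf(t):
    peak = gamma.pdf(t, 6)            # response peaking around 5-6 s
    under = gamma.pdf(t, 16) / 6.0    # late undershoot
    return peak - under

tr, n_scans = 2.0, 100
t = np.arange(0, 32, tr)              # 32 s HRF support sampled at the TR
box = np.zeros(n_scans)
for onset in (10, 30, 55):            # hypothetical congruent onsets (scans)
    box[onset] = 1.0
regressor = np.convolve(box, hrf(t))[:n_scans]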
Next, to examine the neural signature of value change due to the gaze-cuing training (postevaluations > pre-evaluations), we first set up a GLM with the following six regressors: three regressors for food bidding (WTP for the three experimental conditions: congruent, incongruent, neutral), and an additional three regressors for food-stimulus presentation in the three conditions without WTP (passive viewing). Further, six movement parameters were included as regressors of no interest. The regressors were convolved with the HRF. Next, we calculated for each participant the contrasts congruent versus incongruent, congruent versus neutral, and congruent versus control. These resulting contrast images were used in a second-level analysis using a voxel-wise paired-sample t test. Regarding the pre- and postevaluation phases, the functional images of two participants were excluded as they were incomplete because of technical error. Thus, these neuroimaging analyses included 25 participants.
To test the hypothesized gaze-cuing training effect in reward-related regions for food stimuli (for the contrast congruent > control), a priori defined region of interest (ROI) analyses in the striatum and the OFC were performed. ROIs were defined using bilateral masks of the OFC, as well as bilateral masks of the striatum including the putamen, caudate nucleus, and ventral striatum. These ROIs were derived from the Harvard-Oxford atlas (http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases; we applied the Harvard-Oxford atlas to MNI space with the DPABI toolbox).
Figure 2. Example of a trial sequence in the discrimination test. During this test each food item was shown on the screen together with the question asking the direction of the gaze of the face cue, with the options to answer "to the food" (congruent condition), "looked away" (incongruent condition), or "to me" (neutral condition). Participants could respond at their own pace using a mouse cursor to indicate their answer. Food items were presented in random order with an ITI of 0.5 s.

Next, ROIs that showed significant brain activations, as well as significant activations that emerged from whole-brain analyses, were entered in different analyses using the generalized psychophysiological interactions (gPPI) toolbox (https://www.nitrc.org/projects/gppi) to compute the functional connectivity at the time of the WTP evaluations (contrast congruent vs control) and its association with WTP changes (postevaluations vs pre-evaluations; McLaren et al., 2012).
Further, to test whether the voxels that emerged from the contrast (congruent greater than incongruent) were the same as those identified in the contrast (congruent greater than neutral), a conjunction analysis was performed using the xjView toolbox (https://www.alivelearn.net/xjview). Whole-brain statistical maps were voxel-level thresholded at p < 0.001 before undergoing cluster-level familywise error (FWE) correction (P FWE < 0.05). We report significant brain activations within the ROIs that survived FWE correction for multiple comparisons using small-volume correction (P SVC-FWE < 0.05).
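A minimum-statistic conjunction of two thresholded t-maps can be illustrated as follows; this is a generic sketch, not the xjView implementation, and the file names and critical t value are placeholders.

```python
# Illustrative conjunction: keep voxels exceeding threshold in both contrasts.
import numpy as np
import nibabel as nib

t1 = nib.load("spmT_congruent_gt_incongruent.nii")   # placeholder path
t2 = nib.load("spmT_congruent_gt_neutral.nii")        # placeholder path

t_crit = 3.4  # assumed voxel-level threshold corresponding roughly to p < 0.001

conj = np.minimum(t1.get_fdata(), t2.get_fdata())     # minimum statistic
conj_mask = (conj > t_crit).astype(np.uint8)           # significant in both maps

nib.save(nib.Nifti1Image(conj_mask, t1.affine), "conjunction_mask.nii")
```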
Results
We conducted a within-subject experiment with three different phases: pre-evaluation, gaze-cuing training, and postevaluation (Fig. 1; see above, Materials and Methods). The pre- and postevaluation phases were identical. Here, subjects submitted their WTP for 36 different Korean food items, which were previously unfamiliar to them. Choices were made using a scale ranging from 0 to 3 euros (in 20-cent increments). Between the pre- and postevaluations, participants underwent a gaze-cuing training in which the food items were repeatedly presented together with a face. Some food items were repeatedly looked at by others (congruent condition), whereas others were ignored by others (control condition). Participants were instructed to respond as quickly and accurately as possible by indicating via button press where the food item was presented (left or right side of the central face cue). Finally, participants were asked to also perform a discrimination task to test whether they were aware of the contingency between the direction of the gaze and the location of the food item during the gaze-cuing training (see above, Materials and Methods).
Behavioral results
Looked-at food items are attended to faster (gaze-cuing training phase)

We first investigated whether the gaze-cuing training facilitated participants' responses when the food items appeared on the screen (to the left or the right of the central cue). In particular, we tested the hypothesis that the time to detect targets that were looked at by others (congruent condition) was shorter than that for non-looked-at targets (control condition) because of gaze-evoked shifts in participants' attention. To do so, we compared participants' response times (button presses) in these two conditions. A paired t test showed that participants were faster in responding to the looked-at food items (congruent condition) compared with the non-looked-at food items (control condition; t(26) = -11.6; p < 0.001; Fig. 3a).
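Purely as an illustration of this kind of paired comparison (not the authors' analysis code), the response-time values below are invented placeholders:

```python
# Minimal sketch of a paired t test between congruent and control response times.
import numpy as np
from scipy import stats

rt_congruent = np.array([412, 398, 430, 405, 390, 420])  # ms, placeholder values
rt_control = np.array([455, 440, 470, 452, 438, 462])     # ms, placeholder values

t_stat, p_val = stats.ttest_rel(rt_congruent, rt_control)
print(f"t({len(rt_congruent) - 1}) = {t_stat:.2f}, p = {p_val:.4f}")
```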
Further, participants performed the gaze-cuing training with high accuracy (M = 97%, SD = 0.03). A paired t test showed that participants performed similarly well in both the congruent and the control condition (t(26) = 1.05, p = 0.30).
Looked-at food items are preferred more (postevaluation vs pre-evaluation phases)

Next, we checked whether the gaze-cuing training had an impact on food value by comparing the food WTP of the postevaluation versus pre-evaluation phases (see above, Materials and Methods). In particular, we hypothesized that participants' WTP for attended-to foods increased compared with ignored foods. A paired t test on subjects' WTP changes showed that participants were willing to spend more money for the looked-at items compared with the control items (t(26) = 3.04, p = 0.005; Fig. 3b). Because participants were asked to rate different unfamiliar snack foods, we tested through correlation analyses whether their WTP ratings were consistent across the pre- and postevaluation phases. Results show that WTP ratings between pre- and postevaluations in the congruent condition were significantly correlated (r = 0.70, p < 0.001). Similarly, WTP ratings between pre- and postevaluations for the control condition were significantly correlated (r = 0.83, p < 0.001). These results suggest that the intrinsic values of the unfamiliar food items used in our study were not arbitrary.
Further, we tested through a discrimination task whether participants were aware of the contingency between the position of the food item and the direction of the gaze during the gaze-cuing phase. A one-sample t test (against the chance level of 33%; Madipakkam et al., 2019) of participants' performance in the discrimination task did not show a significant difference from chance (t(25) = -0.08, p = 0.93). Also, there was no significant correlation between the difference in the change in WTP ratings for congruent versus control items and the participants' performance in the discrimination task (r = -0.32, p = 0.11), suggesting that the gaze-cue-induced value change occurred unconsciously.
To test for possible sex differences in the change in WTP ratings, we performed a repeated-measures ANOVA on participants' WTP changes with Condition (Congruent, Control) as a within-subjects variable and Sex (Female, Male) as a between-subjects variable. Results did not show a main effect of Sex.
Neuroimaging results
Changes in food value are reflected in increases in middle temporal gyrus, inferior frontal gyrus, and caudate responses (postevaluation vs pre-evaluation phases)

To examine the neural signature of value change because of the gaze-cuing training (postevaluations greater than pre-evaluations), we compared the neural activation toward food items that were repeatedly looked at (congruent items) with that toward items that were ignored by others (control items) at the time of WTP evaluations. Whole-brain statistical maps were voxel-level thresholded at p < 0.001 before undergoing cluster-level familywise error correction (P FWE < 0.05). The paired t test revealed greater activation (contrast, congruent greater than control) in the left middle temporal gyrus [MTG; Montreal Neurological Institute (MNI) coordinates, x = -60, y = -48, z = 9; t = 6.59; P FWE < 0.05, cluster level corrected] and in the left inferior frontal gyrus (IFG; MNI coordinates, x = 36, y = 18, z = 24; t = 4.41; P FWE < 0.05, cluster level corrected) in the postevaluation phase compared with that in the pre-evaluation phase (Fig. 4a, Table 1).
We then focused our analyses on the brain regions that were shown to code reward value (Knutson et al., 2007; Rangel et al., 2008; Levy and Glimcher, 2012) by means of a priori defined ROI analyses in the OFC and the striatum (see above, Materials and Methods). These analyses revealed greater blood oxygenation level-dependent (BOLD) activity (contrast, congruent greater than control) in the left caudate in the postevaluation phase compared with the pre-evaluation phase (cluster size = 149 voxels, t value = 3.57; cluster corrected p = 0.022) during the WTP ratings (Fig. 4b). No other significant results emerged. Table 1 and Figures 5 and 6 show the analyses including all three conditions (congruent, incongruent, and neutral).
We further examined brain activations related to the pretest phase at the time of the food presentation (using food onsets) and their associations with WTP ratings (one averaged WTP value across the different conditions for each participant). A multiple regression analysis was used to test for associations between WTP ratings and the BOLD responses at the time of food presentation. Results revealed activation in the left middle frontal gyrus (MFG; MNI coordinates, x = -33, y = 12, z = 45; t = 5.37; P FWE < 0.05, cluster level corrected) and right MFG (MNI coordinates, x = 30, y = 6, z = 54; t = 5.07; P FWE < 0.05, cluster level corrected), which were positively associated with WTP ratings (Fig. 7). To determine the association between the BOLD responses in these brain regions (6 mm sphere centered on the peak clusters identified) and WTP ratings, we performed correlational analyses. Results revealed that higher activities in these brain regions were associated with greater WTP for food items, meaning that these areas precisely reflected the food evaluations (Fig. 7). These results suggest that higher activity in the bilateral MFG precisely reflected greater WTP ratings for unknown food items. In addition, we examined whether the caudate is involved in the evaluation of foods in both the pre-evaluation and postevaluation phases. Results revealed significant activations in the left caudate at the time of the food presentation in the pre-evaluation phase (cluster size = 145 voxels, peak t value = 3.00; cluster corrected p = 0.021), as well as in the postevaluation phase (cluster size = 138 voxels, peak t value = 1.25; cluster corrected p = 0.023), using a predefined caudate ROI. These results suggest that the caudate activity reflects not only the change of WTP but also the value representation itself. No other significant results emerged when using other predefined ROIs (bilateral masks of the OFC, putamen, and ventral striatum) for both pre- and postevaluation phases.

Caudate-connectivity modulation with angular gyrus reflects gaze-cue-induced value change

Next, we aimed to understand the connectivity changes of the brain regions that showed a change as a function of the training, to understand how they interact with other brain regions. To do so, we examined whether ROIs that showed significant brain activations, as well as significant activations that emerged from the whole-brain analyses, may interact with other brain regions. In particular, we aimed to examine whole-brain functional connectivity of (1) the clusters of left IFG and MTG and (2) the entire left caudate ROI for the contrast postevaluation (congruent vs control) versus pre-evaluations (congruent vs control). To do so, we performed different gPPI analyses (see above, Materials and Methods).
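A hedged sketch of the sphere-based extraction and across-participant correlation described here (file names, the coordinate reused from the MFG peak above, and WTP values are placeholders; a recent nilearn version is assumed):

```python
# Illustrative only: extract the mean contrast value in a 6 mm sphere around a
# peak coordinate for each participant and correlate it with WTP ratings.
import numpy as np
from scipy import stats
from nilearn import image
from nilearn.maskers import NiftiSpheresMasker

contrast_maps = ["sub-01_con.nii", "sub-02_con.nii", "sub-03_con.nii"]  # placeholders
wtp = np.array([1.2, 0.8, 1.9])                                          # placeholders

masker = NiftiSpheresMasker(seeds=[(-33, 12, 45)], radius=6.0)  # 6 mm sphere at MFG peak
all_maps = image.concat_imgs(contrast_maps)                     # one map per participant
betas = masker.fit_transform(all_maps)[:, 0]                    # shape (n_participants,)

r, p = stats.pearsonr(betas, wtp)
print(f"r = {r:.2f}, p = {p:.3f}")
```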
The results of the gPPI analyses (initial voxel level at p < 0.005 for cluster formation and then cluster-level familywise error corrected, P FWE < 0.05) revealed a significant left caudate seed-angular gyrus connectivity that was significantly associated with the WTP change (cluster size = 176 voxels, t value = -2.82, P FWE = 0.024; Fig. 8a). Of note, this result did not survive when applying a more stringent criterion (e.g., voxel-level p < 0.001, P FWE < 0.05). To determine the association between the caudate-angular gyrus connectivity and the change in WTP ratings, we conducted a correlation analysis between the connectivity changes extracted from the angular gyrus cluster (6 mm sphere centered on the peak angular gyrus cluster identified in the contrast congruent greater than control) and WTP changes. We observed that a lower left caudate-angular gyrus connectivity during congruent versus control trials (in the postevaluation vs pre-evaluation phases) was associated with a greater WTP change for food items (Fig. 8b). Last, to examine a possible association between the change in the left caudate individually (6 mm sphere centered on the peak caudate cluster identified for the contrast congruent greater than control) and the change in WTP, a Pearson's correlation was performed. Results showed no significant correlation (r = 0.03, p = 0.89). There was no activation change in the left angular gyrus at the whole-brain level (P FWE < 0.05).
Gaze-cue training phase

Next, we examined brain activations related to the gaze-cuing training and compared the BOLD activity toward congruent items in comparison to that for control items during the training phase (see above, Materials and Methods). Whole-brain statistical maps were voxel-level thresholded at p < 0.001 before undergoing cluster-level familywise error correction (P FWE < 0.05). The one-sample t test revealed no significant results for this contrast.
Further, we also tested activation change within the striatum and OFC as a priori independent ROIs. These analyses revealed greater BOLD activity (contrast, congruent greater than control) in the left caudate (cluster size = 149 voxels, t value = 1.08; cluster corrected p = 0.020). No other significant results emerged.
Discussion
By combining fMRI and a gaze-cuing paradigm, we investigated the neural underpinnings of individuals' food value changes modulated by social information.
First, we show that the value of food that is attended to by others increases compared with ignored food, a replication of the previous finding (Madipakkam et al., 2019). Particularly, we found that participants were willing to pay more money for food items attended to by others compared with ignored ones. This finding corroborates the role of a powerful social cue like gaze not only in shifting the observers' attention but also in altering their subjective value for food rewards. Our results may be particularly relevant to understand the establishment of food preferences, consumer behavior, and food choice in early life. For example, it has been shown that gaze cues facilitate object processing in 4-month-old children (Wahl et al., 2019). Further, children are more prone to eating new foods when an adult model also eats them, compared with eating alone (Harper and Sanders, 1975).
Strikingly, in our study, we found that this change can even occur implicitly, as participants were not aware of the contingency between the gaze-cue direction and the position of the food item. This result suggests that individuals may use gaze to rapidly establish preferences about stimuli in the social environment, an ability that might be the result of an evolutionary adaptation because it allows one to orient attention to discoveries made by others (Shepherd, 2010).
Second, we found that participants were faster in detecting targets that were gazed at by others than non-gazed-at targets, which is in line with a large number of studies using gaze-cuing paradigms. This observation confirms that gaze is a powerful cue for attention (Atkinson et al., 2018). On a neural level, we found that the increased WTP for food items previously presented in congruent trials during the gaze-cuing training phase (compared with the WTP for items presented in control trials) was reflected in increased activity in the left IFG (pars opercularis and triangularis), left caudate, and the left MTG. Interestingly, in line with our results, previous studies have reported the involvement of the left IFG in attentional orienting to eye gaze cues (Bayless et al., 2013; Atkinson et al., 2018), during reward responsiveness (Rademacher et al., 2010; Liu et al., 2011), as well as during the processing of the subjective value of rewards in decision-making tasks (Massar et al., 2015; Castrellon et al., 2019). Particularly, a study by Castrellon et al. (2019) found a positive correlation between mesolimbic dopamine D2-like receptors in the ventral striatum and the activity of the left IFG during subjective value computations. Similarly, we found increased activity in the left striatum and the left IFG in the postevaluation phase compared with the pre-evaluation phase for food items previously presented in congruent trials (vs control trials) during the gaze-cuing phase. Evidence for value coding not only in the ventral striatum but also in the caudate region has been found in other previous studies (Delgado et al., 2004; Tricomi and Lempert, 2015; Schultz, 2016). Particularly, it has been proposed that, similar to the left ventral striatum, the left dorsal striatum is involved in value coding, whereas the right dorsal striatum is involved in probability coding (Tricomi and Lempert, 2015). In our study, although the ventral striatum showed no significant activation, the dorsal part (caudate) reflected value processing. However, no significant functional connectivity was found between the left caudate and the left IFG. It may be possible that the different task used in our study results in different neural correlates. Indeed, in the study by Castrellon et al. (2019), the subjective value of rewards was measured through a delay discounting task, a paradigm particularly used to measure impulsive decision-making (Kable and Glimcher, 2007; Terenzi et al., 2019), whereas in our study, participants' subjective value of rewards was measured through WTP ratings before and after a gaze-cuing task. It should be noted that we used predefined ROIs (including the OFC and striatum) based on previous studies investigating subjective value. However, findings in these ROIs (despite being abundant in the literature) may be affected by the signal-to-noise ratio; thus, caution is needed when interpreting the functioning of these brain areas in computing subjective value (Poldrack, 2007; Elliott et al., 2020).
Interestingly, the results emerging in our study show a left-hemispheric dominance reflecting the increased subjective value of food items attended to by others. This left lateralization has also been reported in other studies investigating approach motivation (Pizzagalli et al., 2005) and consumer choices (Ohme et al., 2009, 2010; Mengotti et al., 2018). However, other studies have shown mixed results (e.g., stronger right-lateralized brain areas supporting reward-related behaviors), thus suggesting that additional research is still needed to understand a possible hemispheric asymmetry in the calculation of the subjective value of rewards (Plassmann et al., 2007; Tricomi and Lempert, 2015; Ramsøy et al., 2018).
Figure 7. Results from the multiple regression analysis. a, b, BOLD activity in the left MFG (a) and right MFG (b) at the time of the food presentation in the pretest phase significantly predicted WTP ratings. Specifically, across participants, higher activity in these brain regions was associated with greater WTP for unknown food items. c, d, Scatter plots are for the purpose of data visualization only.

Figure 8. Results from the seed-based ROI caudate gPPI functional connectivity analysis. a, Caudate-angular connectivity change during congruent versus control trials significantly predicted WTP changes (postevaluations vs pre-evaluations; cluster size, 176 voxels, t value = -2.82, P FWE = 0.024). b, Specifically, the correlation between caudate-angular connectivity and WTP changes was negative. In other words, across participants, the smaller the connectivity change, the greater the WTP change observed. Scatter plot is for the purpose of data visualization only.

Another neural signature of value changes because of the gaze-cuing training (postevaluations greater than pre-evaluations) found in this study is the increased activity in the left MTG. Previous studies have reported the involvement of this brain region in different functions such as semantic memory
processing and visual perception (including spatial attention and gaze processing; Lockhofen et al., 2014;Ren et al., 2020). It might be possible that this area played a role in retrieving spatial memories associated with food items previously presented in different conditions during the gaze-cuing phase, thus influencing the calculation of the subjective value of the reward.
A further interesting neural signature of value change observed in this study is the functional caudate-angular connectivity and its modulation by WTP changes. In particular, across participants, the caudate-angular gyrus connectivity decreased with increasing WTP changes. Further, when correlating the individual activation change in the left caudate with the WTP change, there was no significant correlation. This result suggests that neither the caudate nor the angular gyrus alone, but rather their functional coupling, is involved in the gaze-cue-induced value change. As mentioned earlier, the caudate has been reported to be involved in the processing of the subjective value of rewards (Delgado et al., 2004; Schultz, 2016), whereas the angular gyrus has a role in spatial attention and orientation toward salient stimuli (Seghier, 2013). Thus, it is likely that in the postevaluation phase (compared with the pre-evaluation phase) the connectivity between these two brain areas may be reduced because the gaze-cuing training facilitated the processing of the value of food items looked at by others. This is in line with the hypothesis that the gaze-following response is the result of an evolutionary adaptation, particularly during foraging, as it allows one to orient attention to discoveries made by others (Tomasello and Carpenter, 2007; Zuberbühler, 2008; Shepherd, 2010). However, it should be noted that the caudate-angular gyrus connectivity might indicate that the activity patterns in these brain areas were opposite to each other, and not a mere result of facilitation because of the gaze-cue training. Indeed, it has been suggested that although greater functional connectivity might mediate the integration of information between different brain areas, smaller connectivity might dissociate neuronal processes mediating different goals (Fox et al., 2005; Wagner et al., 2020). Thus, the negative interaction between the caudate and the angular gyrus (which was modulated by the increased change in WTP for looked-at food items) might suggest increased value processes during the test phase, potentially segregating a brain region encoding value for rewards (such as the caudate) from activations centered around the angular gyrus that might code for spatial attention. Future research is needed to confirm this finding. Further, our results suggest that the caudate activity reflects not only the change of WTP but also the value representation itself (Delgado et al., 2004; Tricomi and Lempert, 2015; Schultz, 2016). Another brain region that emerged in our study as involved in value representation is the MFG. It may be possible that this area was particularly involved in the processing of the logos of the snacks (Bruce et al., 2013), as the food items were presented in their packaging. Because we used unfamiliar food items in our task (Korean snacks), future research is needed to replicate our findings using other packaging information.
Interestingly, regarding the neural correlates of the gaze-cuing training per se, we found also during this phase an increased activity in the a priori caudate ROI for food items presented in congruent trials (compared with that for items presented in control trials). This result suggests that the left caudate was already sensitive in processing the value of gazed-at food items during the gaze-cuing training.
However, we did not find significant activations in MTG, IFG, or any other brain region during this phase.
In conclusion, the present study shows that gaze-cuing training can modify the value of food, as shown both by a shift in preference and by orchestrated brain activity and striatal-angular connectivity changes in healthy humans. Our results not only suggest a novel potential intervention strategy by providing insights into how social cues affect decision processes but also provide evidence that gaze cuing, by influencing food value, can be applied as a powerful tool to actively promote healthy food choices. | 2022-07-08T06:15:53.993Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "c309a39a23520832ad6ae2ce72dc6f7b91e16532",
"oa_license": "CCBY",
"oa_url": "https://psyarxiv.com/t6z8e/download",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "19ca9f1a6b7fdb6da5296fd96f1fb74693e3e224",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220045063 | pes2o/s2orc | v3-fos-license | The influence of malocclusion, sucking habits and dental caries in the masticatory function of preschool children.
The aim of this study was to evaluate the association of malocclusion, nutritive and non-nutritive sucking habits and dental caries in the masticatory function of preschool children. A cross-sectional study was conducted with a sample of 384 children aged 3-5 years. A single examiner calibrated for oral clinical examinations performed all the evaluations (kappa > 0.82). Presence of malocclusion was recorded using Foster and Hamilton criteria. The number of masticatory units and of posterior teeth cavitated by dental caries was also recorded. The parents answered a questionnaire in the form of an interview, addressing questions about the child's nutritive and non-nutritive sucking habits. The masticatory function was evaluated using Optocal test material, and was based on the median particle size in the masticatory performance, on the swallowing threshold, and on the number of masticatory cycles during the swallowing threshold. Data analysis involved simple and multiple linear regression analyses, and the confidence level adopted was 95%. The sample consisted of 206 children in the malocclusion group and 178 in the non-malocclusion group. In the multiple regression analysis, the masticatory performance was associated with age (p = 0.025), bottle feeding (p = 0.004), presence of malocclusion (p = 0.048) and number of cavitated posterior teeth (p = 0.030). The swallowing threshold was associated with age (p = 0.025), bottle feeding (p = 0.001) and posterior malocclusion (p = 0.017). The number of masticatory cycles during the swallowing threshold was associated with the number of cavitated posterior teeth (p = 0.001). In conclusion, posterior malocclusion, bottle feeding and dental caries may interfere in the masticatory function of preschool children.
Introduction
The shredding of foods during mastication aids in the enzymatic action of the digestive system and facilitates the absorption of nutrients, which is fundamental for a child's growth and development. 1 The masticatory function is frequently evaluated by analyzing the masticatory performance and the swallowing threshold. 2,3 The masticatory performance measures how much the food was crushed after a standardized number of masticatory cycles. 4 The swallowing threshold enables evaluating the number of masticatory cycles required for swallowing, and the size of the particles to be swallowed. 3,5 Studies have shown that a higher body mass index, dental caries, a higher frequency of pasty food ingestion, a lower number of masticatory units, 3,5 lower mandibular movement 6 and malocclusion 7 are associated with poor masticatory function. Comparing these factors, the literature shows controversy among the results related to malocclusion. A study carried out with preschool children observed a better food shredding ability in the group of children with no malocclusion, compared with those with open bite and posterior crossbite. 7 More recently, another study 5 also conducted with a sample of preschoolers did not find this association. In schoolchildren, a greater number of studies have indicated the existence of an association between malocclusion and masticatory function. 8,9,10,11 This association is explainable given the lower inter-occlusal contact in individuals with malocclusion. Despite the frequent association between malocclusion and masticatory function, this oral condition does not seem to affect the number of chewing cycles. 10 The prevalence of malocclusion in preschoolers may reach 87.0%, 12 although it varies according to the parameters used for its diagnosis, and other studies have reported lower rates. 13,14 A high prevalence of malocclusion in preschoolers may be associated with the presence of non-nutritive sucking habits. 15 Therefore, the presence of these habits may also interfere in the masticatory function. In addition to non-nutritive sucking habits, it is important to consider the use of a feeding bottle. Prolonged use of a feeding bottle has been associated with excessive milk intake. 16,17 This higher consumption of milk may contribute to the child's lower predilection for more consistent foods, such as fruits and meat. 18 This is particularly important, because the consumption of less consistent food is associated with poorer masticatory performance. 5 Thus, the objective of this study was to evaluate the association of malocclusion, nutritive and non-nutritive sucking habits and dental caries with the masticatory function of preschool children.

Declaration of Interests: The authors certify that they have no commercial or associative interest that represents a conflict of interest in connection with the manuscript.
Study design and sample size
This was a cross-sectional study carried out with a sample of three- to five-year-old children enrolled in daycare centers and schools in the city of Diamantina, Brazil, who were called to attend the pediatric dentistry clinic of the UFVJM. The sample size was calculated based on the results of a pilot study of 30 children similar to those composing the main study sample. Sample calculations were performed for each dependent variable in relation to malocclusion and sucking habit, and the calculation that provided the largest sample size was chosen. This calculation used a standard deviation of ±1.262 for the median particle size in the masticatory performance among children with malocclusion, and ±1.646 for that among children with no malocclusion. An average difference of 0.44 mm in the median particle size between children with and without malocclusion was considered clinically relevant. Considering a statistical power of 80% and a standard error of 5%, the minimum sample size was 175 children per group (children with and without malocclusion). Thirty-four children were added to each group to offset losses. The pilot study did not lead to any changes in the initially proposed methodology.
The sample was recruited by convenience at six preschools in the city. Children with systemic or neurological disorders, and those who used drugs that could affect muscle activity (antidepressants, muscle relaxants or sedatives) were excluded. Children who used any type of orthodontic appliance were also excluded.
Clinical data collection
The oral clinical examination was performed by a single dentist who had undergone a training and calibration exercise for all the evaluated oral clinical conditions. The interexaminer (compared with an expert) and intraexaminer kappa concordance coefficients were greater than 0.82 for all oral conditions evaluated. The training was conducted with images, and the calibration was done with children from the institution's pediatric dentistry clinic, who were the same children as those of the pilot study. During the examination, the child remained lying on a portable reclining stretcher. Each preschool child's teeth were brushed before the oral evaluation.
The presence of malocclusion was defined according to the criteria proposed by Foster and Hamilton, 19 and all the evaluations were performed with the teeth in occlusion. The presence of anterior or posterior cross bite, anterior open bite and/or overjet equal to or greater than 3 mm were the clinical parameters adopted to determine the presence of malocclusion. These evaluations were performed using a millimeter probe. The children were divided into anterior and posterior malocclusion groups; a child who had both conditions was assigned to both groups.
The number of masticatory units was determined by occlusal pairs (posterior opposing teeth in occlusion). Therefore, a child with eight occlusal molars had four occlusal units 5 . Caries lesions were evaluated using the International Caries Detection and Assessment System (ICDAS) criteria. 20 The presence of cavitated caries lesions was considered when ICDAS codes 3, 5 and 6 were recorded. The data analysis took into account the number of cavitated posterior teeth.
Collection of non-clinical data
Those responsible for the children answered a questionnaire with questions regarding the child's characteristics, such as age, sex, and both nutritive and non-nutritive sucking habits. The habits were recorded in a questionnaire applied in the form of an interview with the person responsible for the child, who was asked whether or not he/she was currently bottle-fed, about past and present history of digital sucking habit and about the use of a pacifier. Responses for finger and pacifier suction were given for current and non-current use. The exams for oral clinical assessments were performed with a head lamp (PETZL, Tikka XP, Crolles, France), mouth mirrors (PRISMA, São Paulo, Brazil), WHO millimeter probes (Golgran, São Paulo, Brazil) and dental gauze to dry the teeth.
Masticatory function
The masticatory function was evaluated by observing the masticatory performance and the swallowing threshold. The material chosen for the chewing test was Optocal. 4,5 It was manipulated and inserted into molds to form cubes measuring 5.6 mm on each edge. The cubes were then placed in an electric oven at 60°C for 16 h to ensure complete polymerization. Portions of 17 cubes, measuring approximately 3 cm³ in total and weighing 3.2 g, were separated and stored in plastic containers until the tests were performed. 21
The swallowing threshold was determined by the mean particle size (ST X 50 ) expelled when the children felt the desire to swallow, and by the number of masticatory cycles performed up to that point (ST cycles). The children were instructed to chew 17 Optocal cubes and raise their hand when they felt like swallowing. The examiner counted the masticatory cycles visually, 22 and the children raised their hand when they were ready for the collector to help expel the particles.
The following steps were the same as those performed in evaluating masticatory performance. The samples from each collector were deposited on a paper filter, disinfected with 70% alcohol spray and dried at room temperature for three days. The particles were then weighed and placed in the first of a set of nine sieves (Bertel Ltda, Caieiras, Brazil) with a mesh size decreasing from 5.6 mm to 0.60 mm. The sieves were coupled to a machine (Bertel Ltda, Caieiras, Brazil) that vibrated each sample for 20 minutes. The particles retained in each sieve were removed and weighed using an analytical balance with an accuracy of 0.001 g (AD500, Marte, São Paulo, Brazil). The accumulated weight of the particles in each sieve was determined, and the median particle size (X 50 ) was calculated for each child using the Rosin-Rammler equation 23 obtained with the Statistical Package for the Social Sciences (SPSS ® 22.0). The X 50 was calculated for the masticatory performance and swallowing threshold of each child.
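For illustration, the Rosin-Rammler cumulative distribution, Qw(x) = 100 × [1 − 2^−(x/X50)^b], can be fitted to cumulative sieve data to estimate X50; the sketch below uses invented sieve apertures and cumulative percentages, not the study data.

```python
# Illustrative Rosin-Rammler fit to estimate the median particle size X50.
import numpy as np
from scipy.optimize import curve_fit

apertures = np.array([0.60, 0.85, 1.18, 1.70, 2.36, 3.35, 4.00, 4.75, 5.60])  # mm, placeholder
cum_pct = np.array([3, 7, 14, 26, 44, 65, 78, 90, 98.0])  # % undersize, placeholder

def rosin_rammler(x, x50, b):
    """Cumulative weight (%) of particles smaller than aperture x."""
    return 100.0 * (1.0 - 2.0 ** (-((x / x50) ** b)))

(x50, b), _ = curve_fit(rosin_rammler, apertures, cum_pct, p0=(3.0, 2.0))
print(f"X50 = {x50:.2f} mm, b = {b:.2f}")
```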
Statistical analysis
SPSS® 22.0 software was used to perform all the analyses. A descriptive analysis was performed. Data normality was tested using the Kolmogorov-Smirnov test. Data with a non-normal distribution were log10-transformed to approximate normality for use in linear regression. Simple and multiple linear regression analyses were performed to determine the strength and direction of the associations. The explanatory variables for the multiple linear regression model were selected using the backward method to determine which independent variables remained associated with the masticatory function. Values were considered significant when p < 0.05.
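The following is a minimal, hypothetical sketch of this analysis strategy (log10 transformation of the outcome followed by multiple linear regression); the variable names and values are illustrative and do not reproduce the study data or the SPSS backward-selection procedure.

```python
# Illustrative multiple linear regression on a log10-transformed outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "x50_mp":         [5.1, 6.3, 4.2, 7.0, 3.8, 5.6, 4.9, 6.8],  # median particle size (mm), placeholder
    "age":            [3, 4, 5, 3, 5, 4, 5, 3],
    "bottle_feeding": [1, 1, 0, 1, 0, 0, 0, 1],
    "post_malocc":    [0, 1, 0, 1, 0, 0, 0, 1],
    "cavitated_post": [0, 2, 0, 3, 1, 0, 1, 2],
})
df["log_x50"] = np.log10(df["x50_mp"])  # log10 transform of the non-normal outcome

model = smf.ols("log_x50 ~ age + bottle_feeding + post_malocc + cavitated_post",
                data=df).fit()
print(model.summary())
```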
Results
A total of 384 (90.8%) children participated up to the end of the study, 206 of whom had malocclusion and 178 did not. The main reason for the losses was the child's lack of cooperation during the evaluations. A total of 58.3% of the children had malocclusion and 41.7% did not. Among those with malocclusion, 7.6% had malocclusion affecting their posterior teeth. The mean age was 4.19 years. In relation to sucking habits, 16.9% were being bottle-fed, 10.4% had the habit of digital sucking either in the present or the past, and 34.1% used pacifiers in the present or the past. The number of masticatory units ranged from 1 to 4 (mean 3.92 ± 0.36). The prevalence of dental caries in posterior teeth was 37%, and the mean number of decayed posterior teeth was 1.06 (± 1.84). The masticatory function was evaluated with a mean particle size of 5.06 ± 1.94 mm in the masticatory performance, a swallowing threshold of 4.25 ± 2.10 mm, and a mean number of masticatory cycles performed until the swallowing threshold of 30.73 (± 14.66). Table 1 shows the characterization of the sample according to each parameter of the masticatory function evaluation.
Simple linear regression analysis showed that a larger median particle size in the masticatory performance and in the swallowing threshold was associated with a lower age (Table 2).
In the final multiple regression model, the median particle size in the masticatory performance and the swallowing threshold were associated with age (MP X 50 : B -0.294, Beta -0.116, 95%CI -0. Table 3).
Discussion
The present study demonstrated that younger children with a history of bottle-feeding and subsequent malocclusion had greater difficulty breaking down the test food into smaller particles, according to their masticatory performance and swallowing threshold. In addition, children with a higher number of cavitated posterior teeth performed a greater number of masticatory cycles before attaining the swallowing threshold, and had worse masticatory performance.
In the present sample, younger children failed to break down the test food into smaller particles, based on their masticatory performance up to the swallowing threshold, compared to the older ones. This is a common finding in other studies 5,24 and seems to be associated with older children having larger masticatory muscles, and with chewing being a function that develops and matures over time. 25 Posterior malocclusion, represented in this study by posterior crossbite, was responsible for worse trituration of the test material during the masticatory performance test up to the swallowing threshold. This finding is similar to that reported by Gavião et al., 7 who investigated Brazilian children in the same age range as that of this study. Inadequate contact of the teeth during mastication decreases the available area for trituration of foods, which should be chewed with adequate fitting of the cusps. 26,27 For this reason, the food is not ground efficiently, resulting in larger particles. 27 According to Henrikson et al., 8 30% of the change in masticatory efficiency can be explained by inadequate occlusal contact and accentuated prominence. Hence, future studies should investigate whether the correction of posterior crossbite has an impact on the improvement of masticatory performance and swallowing threshold.
Conversely, Soares et al. 5 found no such association. What might explain the divergence in results is the calculation of the sample size. In the study mentioned, the sample calculation was performed to detect the difference in masticatory performance among overweight / obese, low weight and normal weight children. In this respect, they may have found no association, because the sample size was too small to enable such an association.
To date, no evidence has been identified that addresses the association between sucking habits and the median particle size in determining masticatory performance and swallowing threshold in preschoolers. Children who were bottle-fed up to the time of data collection had a worse masticatory function. A Chinese study 18 conducted with a sample of 649 children aged 18 to 48 months showed that those who adopted bottle feeding for milk intake up to 24 months of age consumed a smaller amount of meat. Moreover, children who were bottle-fed up to 48 months of age consumed less fruit. Thus, bottle feeding was associated with a lower consumption of consistent foods. The preference for less consistent foods may lead to less exercising of the chewing muscles, resulting in a worse masticatory function. 5 In the present investigation, after making the statistical adjustment, only the number of cavitated teeth remained associated with the number of masticatory cycles at the swallowing threshold. Thus, the greater the number of cavitated teeth, the more chewing cycles a child performed until he or she felt the urge to swallow. Cavitated caries lesions contribute to reducing the occlusal contact area, and can also lead to pain, resulting in worse masticatory performance. 28 However, in this study, children with cavitated lesions were able to perform more masticatory cycles to obtain better trituration of food, thus offsetting a worse result in masticatory performance. In addition, since the occlusal surface was cavitated, the other surfaces may have also been affected.
The association between number of masticatory units and masticatory performance of preschoolers has been reported in the literature. 5 When there are no masticatory units, there are also no surfaces available for trituration of the food, thus damaging both masticatory performance and swallowing threshold 29 . In this study, the results did not confirm this association, because there was a low frequency of tooth loss. The mean occlusal units of the study sample were 3.98, considering a maximum value of 4.00.
This study may have some limitations, such as the lack of food consistency and prolonged breastfeeding data. We believe that the interference of using or not using a bottle is related to consuming a liquid and pasty diet. Unfortunately, we have not yet investigated breastfeeding as a protective factor for malocclusion, or prolonged bottle use as a risk factor. This was a limitation of the study; therefore, we cannot clarify if there is indeed an association. Currently, our team has been collecting data to minimize these limitations. In addition, the results have limited external validity, since the sample was recruited by convenience. Despite these limitations, our intent was to demonstrate the oral conditions that may interfere with masticatory function, and consequent growth and development. It is also important to highlight how evaluation of chewing can be done objectively to reduce the risk of bias, compared with subjective evaluation using a self-administered questionnaire. Longitudinal studies are encouraged to observe the effects of breastfeeding duration on masticatory function.
In conclusion, posterior malocclusion, bottle feeding and dental caries interfered with the masticatory function of the preschool children evaluated in this study. Posterior malocclusion was associated with poor masticatory performance and a worse swallowing threshold, as was bottle feeding. Children with a higher number of cavitated caries lesions in posterior teeth performed a greater number of masticatory cycles until they felt comfortable to swallow. | 2020-06-25T09:08:00.039Z | 2020-06-19T00:00:00.000 | {
"year": 2020,
"sha1": "44f6dc211a52099d2dd716a01d78a0a0784f48da",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/bor/v34/1807-3107-bor-34-e059.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "c92fb0b24c44bdbae7412dd71b55af5d6d91e8c8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21149953 | pes2o/s2orc | v3-fos-license | Half of rifampicin-resistant Mycobacterium tuberculosis complex isolated from tuberculosis patients in Sub-Saharan Africa have concomitant resistance to pyrazinamide
Background Besides inclusion in 1st line regimens against tuberculosis (TB), pyrazinamide (PZA) is used in 2nd line anti-TB regimens, including in the short regimen for multidrug-resistant TB (MDR-TB) patients. Guidelines and expert opinions are contradictory about the inclusion of PZA in case of resistance. Moreover, drug susceptibility testing (DST) for PZA is not often applied in routine testing, and the prevalence of resistance is unknown in several regions, including most African countries. Methods Six hundred and twenty-three culture isolates from rifampicin-resistant (RR) patients were collected in twelve Sub-Saharan African countries. Among those isolates, 71% were from patients included in the study on the Union short-course regimen for MDR-TB in Benin, Burkina Faso, Burundi, Cameroon, Central African Republic, the Democratic Republic of the Congo, Ivory Coast, Niger, and Rwanda, and the rest (29%) were consecutive isolates systematically stored from 2014–2015 in Mali, Rwanda, Senegal, and Togo. In addition to testing performed according to national guidelines, the isolates were tested for PZA resistance through pncA gene sequencing. Results Over half of these RR-TB isolates (54%) showed a mutation in the pncA gene, with significant heterogeneity between countries. Isolates with fluoroquinolone resistance (but not those with injectable resistance or XDR) were more likely to have concurrent PZA resistance. The pattern of mutations in the pncA gene was quite diverse, although some isolates with an identical pattern of mutations in pncA and other drug-related genes were isolated from the same reference center, suggesting possible transmission of these strains. Conclusion Similar to findings in other regions, more than half of the patients having RR-TB in West and Central Africa present concomitant resistance to PZA. Further investigations are needed to understand the relation between resistance to PZA and resistance to fluoroquinolones, and whether continued use of PZA in the face of PZA resistance provides clinical benefit to the patients.
Introduction
Although efficient chemotherapy has been available since 1965, tuberculosis (TB) remains one of the most threatening infectious diseases [1]. TB control and management is challenged by the ongoing spread of drug-resistant TB isolates. Each year, an estimated 480,000 new cases of multi-drug resistant (MDR) TB occur worldwide, with more than 10% of these cases being extensively drug-resistant (XDR) TB [1]. Since May 2016, the World Health Organization (WHO) recommends a 9-month regimen developed by the International Union against Tuberculosis and Lung disease after successful experiences in Bangladesh [2,3]. The short regimen consists of a minimum of 4 months of kanamycin (KM), clofazimine (CFZ), moxifloxacin (MFX), ethambutol (EMB), high-dose isoniazid (INH), pyrazinamide (PZA) and prothionamide (PTH) followed by 5 months CFZ, MFX, EMB and PZA [4].
PZA is an indispensable drug for the treatment of both drug susceptible and MDR-TB cases. This sterilizing drug plays a key role in reducing TB relapse rates and shortening the standard course of 1 st line TB treatment from 9-12 months to 6 months [5]. In addition, PZA is the only 1 st line anti-TB drug most likely to be maintained in all new regimens, including those aiming at reducing the treatment duration for rifampicin susceptible-, MDR-and XDR-TB [6]. However, several studies have shown a clear association between resistance to PZA and to rifampicin (RMP), with the proportion of PZA resistance among RMP resistant (RR) greater than 40% in different settings [7][8][9].
Although resistance to PZA in RR-TB has been associated with poor treatment outcome in some settings, the drug susceptibility test (DST) is rarely done [10,11]. Phenotypic detection of PZA resistance is difficult and often unreliable, as the drug is active only at pH 5.5 acidity, which affects in vitro growth of Mycobacterium tuberculosis, causing both false-susceptible and false-resistant results [12,13]. PZA is a pro-drug that is converted to pyrazinoic acid by the pyrazinamidase/nicotinamidase enzyme encoded by the pncA gene of M. tuberculosis. A mutation in pncA can confer PZA resistance by decreasing the enzyme activity, either by impeding the function (mutation in the coding sequence) or by reducing protein production (mutation in the promoter region) [14]. A variety of different pncA mutations are found in 72-97% of phenotypically resistant clinical isolates [15]. Studies have reported that, within an MDR-TB population pncA sequencing has a good diagnostic accuracy of 89.5-98.8% for PZA resistance [16][17][18][19]. Hence, pncA gene sequencing has been proposed as a quick and reproducible method for PZA resistance detection from either microscopy-positive clinical samples or bacterial isolates [9,11]. Sequencing of pncA is an acceptable and reliable approach to detect PZA resistance particularly when considering that conventional phenotypic PZA susceptibility testing is difficult and prone to false resistance [20,21]. Given the increased availability of sequencing technology, genotypic-based testing for PZA resistance will thus likely become the method of choice for PZA DST.
Unfortunately, PZA resistance has not been widely investigated in most African countries, which bear most of the TB and HIV burden globally. Knowing the prevalence of PZA resistance and the variety of pncA mutations would help to inform an optimized diagnostic and treatment strategy in Africa.
In this multi-center study, we analyzed the pncA sequences of RR isolates from 12 Sub-Saharan African countries. Among the RR isolates considered for this study, 448 (72%) were from the MDR-TB clinical study coordinated by the Union, conducted from 2012 to 2015 in nine countries, which had pncA sequenced by the respective TB supranational reference laboratories (SRLs) [3], and 175 (28%) isolates were collected from consecutive RR patients from 2014 to 2015 in four countries as part of routine drug resistance surveillance and/or for research purposes, with sequencing analysis performed in the respective country.
Study population
Samples were collected in twelve countries in Sub-Saharan Africa, through two different studies: one prospective, on patients starting the short-course regimen for MDR-TB [3], and one retrospective, on RR isolates stored from routine surveillance in 2014 and 2015 (Fig 1).
Sampling coverage
RR-TB patients identified over the course of the routine drug resistance surveillance conducted in Rwanda, Senegal, and Togo were representative of the whole country. For Rwanda and Togo, they included both new TB cases and patients with a previous history of TB (retreatment cases), while for Senegal, only retreatment patients were sampled.
Samples from the Center for TB and AIDS Research in Mali within the University Clinical Research Center (SEREFO/UCRC) were not representative of the global MDR-TB population in Mali.
For the Union short course study, consecutive patients with RR were systematically invited to participate. Children, pregnant women, patients having received second-line TB drugs for more than one month, patients with medical and/or social contra-indication, or refusing to give informed consent, were excluded. Samples were sent to the respective SRLs in Cotonou, Milan or Antwerp for laboratory testing not available on site.
Resistance testing
Sputum samples were cultured in the respective national reference laboratories using the Mycobacterial growth indicator tube (MGIT, Middlebrook 7H9) (BD Biosciences, Erembodegem, Belgium) and/or in-house prepared Löwenstein-Jensen slants. Positive cultures were identified as M. tuberculosis complex based on MPT64 (BD Biosciences, or SD Bioline, Gyeonggi-do, Republic of Korea) or the Capilia™ TB-Neo Assay (Tauns Laboratories, Japan). As confirmed RR-TB patients were enrolled, they all had 1st line DST done as part of screening, according to national regulations, using GeneXpert (Cepheid, Sunnyvale, CA), MGIT, LJ, and/or MTBDRplus (Hain Lifescience, Nehren, Germany), and most isolates were tested for 2nd line DST using LJ, MTBDRsl (Hain Lifescience, Nehren, Germany), and/or target gene sequencing. The critical concentrations used to determine the phenotypic susceptibility to the 1st and 2nd line drugs tested are captured in the S2 File. HIV status was also determined per the national HIV testing algorithm. Spoligotyping was performed on samples from the same country with the same pncA mutation, based on previously described procedures [22].
Sanger sequencing was performed on DNA from cultured isolates (primer list in S2 File). DNA from the Union study isolates was amplified at the respective SRLs (Institute of Tropical Medicine, Antwerp, Belgium, and San Raffaele Scientific Institute, Milan, Italy), and sequencing was outsourced respectively to BaseClear (Leiden, The Netherlands) and Eurofins Genomics (Ebersberg, Germany), with analysis at the SRLs by comparing the obtained sequences to the reference pncA sequence of M. tuberculosis H37Rv (GenBank accession no. NC_000962.3) using BioEdit software (Ibis Biosciences, Abbott company, Carlsbad, CA) or CLC Sequence Viewer (CLCbio, Qiagen, Redwood City, CA), including at least 41 nucleotides from the promoter region. DNA from retrospective isolates was amplified in in-country reference laboratories, and sequencing was outsourced to Macrogen Europe (Amsterdam, the Netherlands). After organizing several trainings on the generation and analysis of sequences, the pncA sequences were analyzed at the in-country reference laboratories using MEGA6 [23]. The full procedures are available in the S2 File.
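As a simplified, hypothetical illustration of the comparison step (the actual analyses used BioEdit, CLC Sequence Viewer, or MEGA6 on Sanger consensus sequences), the sketch below lists nucleotide substitutions between a pre-aligned sample sequence and a reference fragment; the sequences shown are toy placeholders, not real pncA sequence, and indels and mixed peaks are ignored.

```python
# Toy example of calling substitutions against a reference; not a validated pipeline.
from typing import List, Tuple

def call_substitutions(reference: str, sample: str) -> List[Tuple[int, str, str]]:
    """Return (position, reference base, sample base) for each mismatch."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be pre-aligned and of equal length")
    return [(i + 1, r, s)
            for i, (r, s) in enumerate(zip(reference.upper(), sample.upper()))
            if r != s and s != "N"]

reference_pnca = "ATGCGGGCGTTGATCATCGTC"   # placeholder fragment, not real H37Rv sequence
sample_pnca    = "ATGCGGGCATTGATCATCGTC"   # placeholder patient consensus

for pos, ref_base, alt_base in call_substitutions(reference_pnca, sample_pnca):
    print(f"{ref_base}{pos}{alt_base}")     # prints G9A in this toy example
```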
Samples received in Milan were also analyzed by whole genome sequencing (WGS). For WGS, genomic DNA was extracted from cultured isolates using the QIAamp DNA Mini kit (Qiagen, Hilden Germany) according to manufacturer's instructions. Prior to submission for WGS, DNA was quality assessed and quantified using the Qubit dsDNA BR assay (Life Technologies, Paisley, UK). Paired-end libraries of 100 bp read length were prepared using the Nextera XT DNA Sample Preparation kit (Illumina Inc., San Diego, CA, USA) and sequenced on an Illumina HiSeq 2500 platform according to the manufacturer's instructions.
Downstream analysis was performed using an available online analysis pipeline [24], including quality control checks. A mean read coverage depth of at least 30× was considered acceptable. The reads obtained were then aligned to the M. tuberculosis H37Rv reference genome, and total variant calling was performed with the pipeline of the PhyResSE web-tool. Isolates showing a coverage of at least 20× were considered for SNP analysis.
Data management and analysis
Data from the Union study were merged with retrospective data into a single Excel sheet database. All samples with poor quality pncA sequences (non-analyzable) were excluded from analysis. Descriptive statistics and calculation of different proportions were done for aggregated data in different categories and sub-categories. The Pearson's chi-square and Fisher's exact tests were used to test for associations of PZA resistance with potential risk factors. Data analysis was performed using STATA version 14.2 software (College Station, TX: STATA Corp.), and a p-value <0.05 was considered significant.
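Although the study used STATA, the same association tests can be illustrated in Python; the 2×2 counts below are invented placeholders, not the study data.

```python
# Illustrative chi-square and Fisher's exact tests on a made-up 2x2 table of
# PZA resistance by fluoroquinolone (FQ) resistance.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

#                 PZA-resistant  PZA-susceptible
table = np.array([[24,            4],     # FQ-resistant   (placeholder counts)
                  [275,          270]])   # FQ-susceptible (placeholder counts)

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_chi2:.4f}; Fisher exact p = {p_fisher:.4f}")
```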
Study population and sampling coverage
Among 623 isolates considered for this study, 50 (8%) were excluded from analysis because of non-analyzable (poor-quality) pncA sequences. Of the 573 (92%) included for analysis, 408 (71%) originated from the Union study, of which 342 (84%) were isolated from previously treated TB patients. For the remaining 165 isolates collected from routine surveillance of drug resistance, the treatment history of the patient was unknown for 56 (34%), and 72 of the 109 (66%) isolates with documented treatment history were collected from new TB patients, with the majority (42 of 72, 58%) of new TB patients originating from Rwanda.
Most samples were tested for INH, with majority of them being resistant (proven MDR-TB) (Fig 1). However, as some were INH-sensitive (proven RR-TB), and some samples had no data for INH, we considered our sampling as RR-TB rather than MDR-TB.
Considering the resistance profile to 2nd line anti-TB drugs, 8 (1.4%) isolates were XDR, and 36 (6.3%) were pre-XDR, with resistance to either fluoroquinolones (FQs) (n = 28) or second-line injectables (SLI) (n = 8). However, FQ and SLI susceptibility was not known for 141 (25%) isolates (Table 1). There was no difference in sequencing success between new and previously treated patients, nor between the Union study and the retrospective collection (p > 0.05).
The male to female sex ratio in our sequenced population was 1.8, and the median age was 33 years (interquartile range 26–42). The HIV status was known for 521 (91%) patients, with 111 (21.3%) patients being HIV-TB co-infected. The rate of HIV co-infection among participants differed between countries (p < 0.001) (Table 2).

Footnotes to Table 1: Number of samples stored versus numbers diagnosed per the WHO report for the same period for the whole country. 3 Number of isolates with analyzable sequences as a proportion of total samples stored. 4 The proportion of pre-XDR or XDR in these columns corresponds to the ratio of pre-XDR or XDR to the total isolates included in this analysis; it therefore does not reflect the general prevalence of pre-XDR or XDR TB in the respective countries. 5 FQ was not known. 6 FQ was not known for all isolates (partial). https://doi.org/10.1371/journal.pone.0187211.t001
Resistance testing
Among the 573 isolates successfully sequenced, 311 (54%) presented mutations in the pncA gene, of which 248 (80%) were previously reported in the literature [5,9,15,25-32]. Of the reported mutations, 12 (4.8%) were clearly defined as not associated with PZA resistance. Excluding those "non-resistance conferring" mutations from the resistance prevalence estimates, 299 (52%) isolates were PZA resistant. The prevalence of PZA resistance was similar in the Union study and among the retrospective isolates (p = 0.347), yet differed significantly between countries (p < 0.001, Table 3). Among these RR patients, the prevalence of PZA resistance was similar in new versus previously treated patients (p = 0.167) and in HIV-infected versus uninfected patients (p = 0.946). However, the prevalence was higher among isolates showing resistance to FQ (p < 0.001) but not among those resistant to SLI (p = 0.174) or to both FQ and SLI (p = 0.399) (Table 4). Some mutations were found in more than one patient from the same country (defined as the same NRL or research center). Considering also the resistance profile for other genes, we observed 40 clusters, with 2 to 14 samples in each cluster (Table 5). In five countries, more than 25% of the retreatment cases were included in different clusters. Two large cross-border clusters were also identified in neighboring countries: a group of 21 isolates in Rwanda and Burundi, and a group of 13 isolates in Rwanda and DRC. Spoligotyping of 11/13 isolates included in the two larger national clusters showed that most isolates in each cluster were identical (one isolate in each group had a different spoligotype); the cluster in Burundi belonged to the LAM11 family, and the one in Cameroon to M. africanum (L5).
Discussion
Our findings show that, in West and Central Africa, about half of RR-TB patients face concurrent PZA resistance. With this work, we fill a geographically large knowledge gap, as data from this high-burden TB region were previously very limited. While the average rate of PZA resistance was 52% (95% CI 48-56%) in RR-TB patients in this region, this rate varied from 21% in Togo to 80% in Burkina Faso, albeit with wide confidence intervals, and indeed differed significantly between countries (p < 0.001). This level of PZA resistance, as well as the variation between countries, is in line with other regions [8,9]. Most patients had been previously treated (retreatment patients), although previous treatment was not associated with higher rates of PZA resistance. In Rwanda, the proportion of new TB patients was higher than in other settings, consistent with the national policy of testing all TB patients with GeneXpert. Similarly, HIV co-infection was not associated with PZA resistance. In contrast, PZA resistance was higher in isolates with concurrent resistance to FQ without SLI resistance (pre-XDR). Resistance to SLI had no significant influence, although numbers were low; this concurs with other studies [33,34]. A large proportion of the mutations observed had previously been reported, most reported to be associated with resistance and a few as not associated with resistance. Still, 3.8% (95% CI 2.0-6.6%) were previously described mutations with an unclear role in the development of PZA resistance. As the large majority of mutations reported in the literature are associated with resistance, and discordance may partially be due to non-reproducible phenotypic testing, we considered those as associated with resistance. Similarly, we considered all new (non-synonymous) mutations as associated with resistance. In addition, 31% of those new mutations cause a frameshift, which increases the likelihood that they confer resistance.
The exact impact of PZA resistance on patient outcome is unclear. As the prevalence of PZA resistance is high in RR-TB patients, it is important to better understand whether any remaining activity warrants continuing PZA in the treatment regimen. A formal comparison in which patients are randomized to have PZA resistance results 'ignored' (and continue to receive PZA as part of their regimen) versus receiving a non-PZA regimen would resolve this question, yet may not be considered a priority for clinical study. In addition, laboratory testing should be strengthened if clinicians are to take PZA resistance results into account when selecting a regimen.
The association with FQ resistance also requires further study. FQs are a core class of drugs for RR-TB treatment, and in the context of FQ resistance, additional resistance to other drugs may develop [30]. We hypothesize that PZA resistance likely accumulates in RR/MDR-TB patients who were not recognized as resistant and received multiple rounds of first-line therapy containing PZA (including Category 2, with streptomycin). The association with FQ resistance suggests that PZA may protect against FQ resistance in PZA-susceptible strains, although conversely, FQ resistance may also predispose to acquiring PZA resistance. With the scaling up of earlier/universal DST for rifampicin, already implemented in Rwanda, we expect that patients with RR/MDR-TB will be diagnosed before PZA resistance develops, and that the prevalence of PZA resistance among RR/MDR-TB will decline over time.
In addition to resistance testing itself, the high heterogeneity of pncA mutations also permits the use of pncA diversity to identify circulating MDR strains that are possibly actively transmitted, as the chance that the same mutation appears due to independent evolutionary events in different strains from the same region is low. In phylogenetic terms, pncA suffers less convergent evolution under drug pressure than other genes, such as rpoB and katG. Therefore, it has been proposed to add the pncA sequence to spoligotyping to identify clusters of transmission [35]. In our study, we observed several clusters of isolates presenting the same pncA mutation, and confirmed that the vast majority also shared the same spoligotype pattern. Those clusters likely reflect ongoing transmission, rather than reactivation (in new cases) or relapse (in retreatment cases), in agreement with reports that most MDR-TB is transmitted rather than acquired [36]. The overall low clustering rate in our study may reflect a low sampling fraction in each of the countries, or may be in line with the observations of den Hertog et al., suggesting that pncA mutations may confer a fitness loss preventing strains from wide dissemination [37]. Interestingly, one of the strains included in such a cluster was from lineage 5, known as M. africanum type 1, which is known to be less prone to tuberculosis disease progression [38]. Implementing pncA-based DST for PZA may thus also assist national programs in recognizing whether predominant MDR-TB clusters reflect ongoing transmission in the communities, justifying targeted active case finding and other interventions.
Besides the results collected for this analysis, the participating laboratories implemented the use of Sanger sequencing to identify mutations in TB-related genes. Staff from several NRLs and research laboratories were trained on sample preparation for (outsourced) sequencing and on data analysis in 2013-2015 by the Institute of Tropical Medicine (ITM), Antwerp, Belgium, in collaboration with the ITM TB network, WANETAM Plus, and the OFID Pasteur network. Even though sequencing facilities are expensive and scarcely available, we showed that a sequencing approach is feasible for NRLs in Africa when the sequencing itself is outsourced, allowing staff to focus on the preparation of samples by PCR and on the bioinformatic analysis. As a resource for laboratory managers who would like to implement this approach in their own setting, our detailed procedures are provided in S2 File. Furthermore, preliminary results at ITM indicate that the PCR can start directly from DNA extracted from smear-positive sputum samples.
We acknowledge that pncA sequencing may not identify all existing resistant strains and that, as a consequence, the overall level of PZA resistance might be slightly underestimated. The analysis of additional genomic regions such as rpsA, panD, hadC, fas1, and the efflux pumps could improve the prediction of PZA resistance. However, as documented by several studies, further work is needed to better understand the role of these targets and their real contribution to PZA resistance [39-42]. Methods including several genes in a single experiment based on next-generation sequencing, such as whole genome sequencing or multiple targeted sequencing, starting from culture or sputum, may be a more cost-effective option in the future for laboratories implementing the required skills, quality control, and material [43-46]. Whole genome sequencing will also allow the analysis of panD and other minor genes described in rare resistance to the active form of PZA [47-49].
In conclusion, this report on pncA mutations in West and Central Africa shows that the prevalence of mutations is similar to other regions in the world: half of RR-TB patients are also resistant to PZA. Further investigation will be needed to understand the relation between pncA mutations associated with PZA resistance and FQ resistance, as well as its potential impact on the treatment outcome, in order to optimize treatment for those patients.
"year": 2017,
"sha1": "443f373c8e8cbcc98ee3efe98c208c2445b8a0da",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0187211&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "443f373c8e8cbcc98ee3efe98c208c2445b8a0da",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Estimation of PM10 Health Impacts on Humans within Urban Areas of Makkah City, KSA
1 The Custodian of the Two Holy Mosques Institute for Hajj and Umrah Research, Umm Al-Qura University, Makkah, Saudi Arabia. 2 Air Pollution Department, National Research Center, Cairo, Egypt. *Corresponding author: ateffathy2006@yahoo.com; Mob.: 00201151143456; ORCID: 0000-0002-2930-0168. Abstract: The current study aimed to: i) monitor the levels of PM10 at Shebika, Haram, Masfala, Azizia, Awali, and Mina in Makkah city, KSA, during the period of 01 Shawwal 1436 AH – 27 Rabi Al-Awwal 1437 AH, using LVS instruments; and ii) assess the health risks (non-cancer and cancer risks) for humans (children and adults) exposed to PM10 in the ambient air of Makkah city. The results showed that the highest PM10 levels were found at the Haram site, while the lowest levels were found at the Awali site. These levels were lower than the daily limit set for PM10 by the PME (340 µg/m³). Vehicle emissions and construction sources may be the main sources of PM10 in Makkah city. The human health risk assessment showed that the daily exposure doses of PM10 were ranked in the order D_ing > D_dermal > D_inh for both children and adults in Makkah city. Ingestion of PM10 particles was the main exposure pathway for both children and adults. The HI and cancer risk values were within the safe level, indicating that the non-carcinogenic and carcinogenic risks for humans exposed to PM10 in Makkah city were negligible.
Introduction
Respirable particulate matter (PM10) consists of particles with a diameter ≤ 10 µm. PM10 is a complex mixture of dust, salt, acids, organic matter, and metals, which varies in levels, composition, and sources (Pope and Dockery, 2006). Humans exposed to PM10 may be affected by definite (cancer and non-cancer) risks (Hnizdo and Vallyathan, 2003; Davies, 2012). The health impacts on children and adults exposed to PM10 in ambient air have been reported in several studies (ATS, 1997; Batterman et al., 2011; Thabethe et al., 2014). PM10 in ambient air easily enters the human body via three primary pathways: inhalation (through the mouth and nose, potentially reaching the alveoli in the lung), ingestion (through the mouth and nose), and dermal absorption of particles adhered to exposed skin (US EPA, 1989; 2004; Darquenne et al., 2000; Ferreira and De Miguel, 2005; Ahmed and Ishiga, 2006; Zheng et al., 2010a, 2010b; Lu et al., 2014; Xu et al., 2015). The levels and health risks of PM10 for exposed humans have been evaluated in different previous studies, such as Apeagyei et al., 2011; Tang et al., 2013; and Kexin et al., 2015. However, most previous studies focused on PM10 in capital cities or mega-cities, which are characterized by traffic density and overpopulation; such studies are lacking for Makkah city and the other cities of the Kingdom of Saudi Arabia (KSA). The major sources of PM10 in the ambient air of Makkah city include vehicle emissions, construction, paved roads, unpaved roads, power plant emissions, and windblown dust from open lands. Given these air pollution sources, it was deemed necessary to estimate the likely human health risks posed by PM10 to the residents of Makkah city, an activity never done before for this geographic region, for future planning and management of air pollution in the area. The main objectives of this study were to: i) monitor the levels of PM10 mainly emitted at urban sites in Makkah city, KSA; and ii) evaluate the health risks (non-cancer and cancer risks) associated with PM10 exposure for children and adults in Makkah city.
Study area and sampling sites description
In the current study, PM10 samples were collected from ambient air at 6 sites located in Makkah city, KSA (Fig. 1). Makkah city is located 70 km inland from Jeddah in a narrow valley at a height of 277 m above sea level, in a corridor between mountains. This mountainous location has defined the contemporary expansion of the city. The total area of Makkah stands at over 1,200 km². Its resident population in 2014 was roughly 1,919,909 people (≈2 million), while more than 6,000,000 visitors (pilgrims) arrive every year during the Hajj and Umrah seasons (Statistical Yearbook, 2014). Brief descriptions of the sampling sites are given in Table 1.
Aziziah: Lower south squares of the Holy Mosque; likewise a busy location with a high traffic density during the sampling period.
Awali: New area; a quiet residential area known for villas and some facilities such as petrol stations and supermarkets; near Taif Road and the mountains.
Mina: Desert area about 5 km east of Makkah; the area is known for its important role for pilgrims, many of whom stay there on a temporary basis during Hajj.
Sampling Method
LVS (Low Volume Sampler) instruments were used to collect respirable particulate matter (PM10) at the 6 sites. Samples were collected weekly during the period from 1 Shawwal 1436 AH to 27 Rabi al-Awwal 1437 AH, yielding a total of 30 samples; the sampling time was 24 h. The LVS (Beco R300), manufactured by the German company Beko, was calibrated before use. The LVS used nitrate cellulose filters with a pore size of 0.45 microns for dust of up to 10 micrometers (Fig. 2). The filtration method was used for estimating the total concentration of PM10: the filter paper is weighed in the laboratory before sampling and then transported carefully to the sampling holder. After sampling, the loaded filter is carefully transferred back to the laboratory, where it is weighed to constant weight. The difference in weight before and after sampling equals the weight of PM10 collected. PM10 concentrations can then be calculated from the sampled air volume and the weight of PM10 collected, expressed as µg/m³ (JIS, 1992).
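As a small illustration of this gravimetric calculation, the Python sketch below converts the filter weight gain and the sampled air volume into a concentration. The flow rate is an assumed placeholder, since the LVS flow rate is not restated in the text.

```python
def pm10_concentration_ug_m3(filter_mass_before_g: float,
                             filter_mass_after_g: float,
                             flow_rate_m3_per_h: float,
                             sampling_hours: float = 24.0) -> float:
    """PM10 mass gained by the filter divided by the sampled air volume."""
    mass_ug = (filter_mass_after_g - filter_mass_before_g) * 1e6  # g -> ug
    volume_m3 = flow_rate_m3_per_h * sampling_hours
    return mass_ug / volume_m3

# Example: 8.2 mg of dust collected over 24 h at an assumed flow of 1.0 m3/h.
print(pm10_concentration_ug_m3(1.2000, 1.2082, flow_rate_m3_per_h=1.0))
# -> ~341.7 ug/m3, i.e., just above the PME daily limit of 340 ug/m3
```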
Risk Assessment
Hazard quotients (HQs) for non-carcinogenic effects were applied to each exposure pathway in the analysis to evaluate the non-carcinogenic risks due to PM10 in the ambient air of Makkah city, KSA. The daily exposure dose calculated for each exposure pathway was divided by the reference dose of PM10 (RfD = 1.1×10⁻² mg/kg·day; Zou et al., 2009) to yield a hazard quotient according to Eq. 4 (US EPA, 1989, 1996):
HQ = D / RfD ……………………. Eq. 4
The hazard index (HI) is the sum of the HQs (Eq. 5) and was used to represent the total potential non-carcinogenic risk of PM10 in the ambient air of Makkah city, KSA. If HI < 1, there is no significant risk of non-carcinogenic effects; if HI > 1, there is a chance that non-carcinogenic effects may occur (US EPA, 1989; US EPA, 2001; Ferreira and Miguel, 2005; Lim et al., 2008; Zheng et al., 2010a,b; Xu et al., 2015). In the case of carcinogenic risks, the dose was multiplied by the slope factor of PM10 (SF = 2×10⁻⁶ (mg/kg·day)⁻¹; Vallero, 2014) to produce a level of cancer risk according to Eq. 6:
R = ADD × SF ……………………. Eq. 6
Carcinogenic risk is the probability of an individual developing any type of cancer from lifetime exposure to carcinogenic hazards. The EPA considers cancer risks between 10⁻⁶ (i.e., 1 in 1,000,000 people) and 10⁻⁴ (i.e., 1 in 10,000 people) to be generally acceptable (US EPA, 1991b). Cancer risks higher than 10⁻⁴ might not be considered sufficiently protective and may warrant remedial action (Lim et al., 2008; US EPA, 2009a).
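A minimal sketch of Eqs. 4-6 follows. The RfD and SF values are those cited above (Zou et al., 2009; Vallero, 2014), while the example doses are invented placeholders rather than the study's results.

```python
RFD = 1.1e-2   # reference dose of PM10, mg/(kg*day)
SF = 2e-6      # slope factor of PM10, (mg/(kg*day))^-1

def hazard_index(doses: dict) -> float:
    """HI = sum of the pathway hazard quotients HQ = D / RfD (Eqs. 4 and 5)."""
    return sum(d / RFD for d in doses.values())

def cancer_risk(doses: dict) -> float:
    """R = ADD * SF (Eq. 6), summed over the exposure pathways."""
    return sum(d * SF for d in doses.values())

# Hypothetical daily doses in mg/(kg*day) for one receptor group.
doses = {"ingestion": 4e-6, "dermal": 1e-7, "inhalation": 3e-10}

hi = hazard_index(doses)
r = cancer_risk(doses)
print(f"HI = {hi:.1e} ->", "no significant non-cancer risk" if hi < 1 else "possible risk")
print(f"R  = {r:.1e} ->", "acceptable" if r < 1e-4 else "warrants remedial action")
```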
Results and Discussion
PM10 concentrations
The daily average respirable particulate matter (PM10) concentrations in the ambient air of the 6 sampling sites in Makkah city were measured during the period from 1 Shawwal 1436 AH to 27 Rabi al-Awwal 1437 AH (Fig. 3), using LVS instruments to collect the samples. The PM10 concentrations were generally highest at the Haram site (343 µg/m³) and lowest at the Awali site (64 µg/m³). These concentrations were lower than the PME daily limit of 340 µg/m³ on most days, except at the Haram site (PME, 2001). This means that even if the population of Makkah was exposed to these levels of PM10, negative health impacts may be unlikely, as concentrations were below the PME daily average limit, although some individuals may still be sensitive to relatively low PM10 concentrations (WHO, 2006). The current study attributes the obtained PM10 concentrations to the great efforts exerted by the Government of the Kingdom of Saudi Arabia, represented by the executive authorities in the city of Makkah Al Mukarramah (such as the Holy City municipality, the Holy City Secretariat, and the General Presidency of the Holy Mosque), in providing clean air and an environment free from harmful air pollutants for the care of pilgrims. Moreover, sprayers located in the squares help to deposit and reduce the dust suspended in the air. Furthermore, the hotels surrounding the areas act as windbreaks against airborne dust and help to reduce its concentrations in the squares. In addition, rainfall over the city of Makkah during the measurement period helped to purify the ambient air and deposit dust. In Makkah, the major factor responsible for the high emission of PM10 is probably the large number of visitors to the Holy Mosque during the Hajj season (Zul-Qaadah to Zul-Hijjah), which leads to higher traffic flow on the roads around the Holy Mosque (elevated vehicle emission levels). Table 3 compares the PM10 levels in the current study with levels found in other cities around the world. In the current study, the daily exposure doses of PM10 (Table 5) were ranked in the order D_ing > D_dermal > D_inh for both children and adults, for non-cancer risk and cancer risk, at all sampling sites.
Non-carcinogenic risk assessment
The hazard quotient (HQ) values of the different exposure pathways (D_ing, D_inh, and D_dermal) and the hazard index (HI) for both children and adults at the sampling sites in Makkah city are given in Table 6. Among the three exposure pathways, the HQ_ing values were the highest and contributed the most to the HIs for both children and adults at all sites, indicating that ingestion of PM10 appears to be the most threatening exposure pathway for human health in Makkah city (Fig. 4). The HQs for all studied sites were ranked in the order HQ_ing > HQ_dermal > HQ_inh for both children and adults. The results also showed that inhalation of PM10 had the lowest contribution to the health risks for children and adults, indicating that the non-cancer risks posed by inhalation of PM10 might be negligible compared with ingestion and dermal contact. Similar results were obtained in previous studies (Ferreira and De Miguel, 2005; Zheng et al., 2010a; OSHA, 2013). Additionally, children were found to experience higher health risks through ingestion compared with adults: the HQ_ing values for children were 9.3 times higher than those for adults. This result may be partially attributed to the special behavior patterns of children, particularly frequent hand-to-mouth contact. Similar results were obtained by Kexin et al. (2015), who reported HQ_ing values for children 9.33 times higher than those for adults. The HIs for all studied sites were ranked in the order HI_ing > HI_dermal > HI_inh for both children and adults (Table 6).
The integrated HI values in Makkah city were 8E-04 for children and 9E-05 for adults, indicating that children are likely to experience significantly higher non-cancer risks. The HI values for all sampling sites in this study were within the safe level (HI < 1), indicating that there was no significant non-carcinogenic risk to children and adults from exposure to the PM10 levels in Makkah city. Similar results were obtained in previous studies (EPA, 1989; US EPA, 2001; Ferreira and Miguel, 2005; Lim et al., 2008; Zheng et al., 2010a,b; Xu et al., 2015).
Carcinogenic risk assessment
The cancer risks from the different exposure pathways (D_ing, D_inh, and D_dermal) for both children and adults at the sampling sites in Makkah city are presented in Table 7. The R_ing values were the highest and contributed the most to the cancer risk for both children and adults at all sites, indicating that ingestion of PM10 appears to be the dominant exposure pathway for human cancer risk in Makkah city (Fig. 5). The cancer risks for all sampling sites were ranked in the order R_ing > R_dermal > R_inh for both children and adults. The results also showed that inhalation of PM10 had the lowest contribution to the health risks for children and adults, indicating that the cancer risks posed by inhalation of PM10 might be negligible compared with ingestion and dermal contact. Similar results were obtained in previous studies (Ferreira and De Miguel, 2005; Zheng et al., 2010a). The cancer risk values in Makkah city were 1E-12 (i.e., 1 in 1,000,000,000,000 people) for children and 6E-13 (i.e., 6 in 10,000,000,000,000 people) for adults, indicating that both children and adults are likely to experience negligible cancer risks. The cancer risk values for all sampling sites in this study were within the safe level of 10⁻⁶ (i.e., 1 in 1,000,000 people) to 10⁻⁴ (i.e., 1 in 10,000 people) (US EPA, 1991b), indicating that the cancer risk values for Makkah city in this study were within the acceptable range.
Conclusions
A total of 30 respirable particulate matter (PM10) samples were collected from 6 sites (Shebika, Haram, Masfala, Azizia, Awali, and Mina) in Makkah city, KSA, during the period of 01 Shawwal 1436 AH – 27 Rabi Al-Awwal 1437 AH, using LVS instruments. The PM10 concentrations were analyzed, and the human health risks of PM10 were assessed using a health risk assessment model. The results showed that: i) The maximum PM10 concentration was found at the Haram site (343 µg/m³) and the minimum at the Awali site (64 µg/m³); these concentrations were lower than the PME daily limit of 340 µg/m³, except at the Haram site. ii) Vehicle emissions and construction sources may contribute most to the PM10 levels in Makkah city. iii) The health risk analysis showed that children and adults were at nearly equal risk from exposure to the same levels of PM10 for the same duration. The daily exposure doses of PM10 were ranked in the order D_ing > D_dermal > D_inh for both children and adults, for non-cancer risk and cancer risk, at all sampling sites. Ingestion was the dominant exposure pathway for both children and adults. Inhalation of PM10 had the lowest contribution to the health risks for children and adults, indicating that inhalation of PM10 might be negligible compared with ingestion and dermal contact. The HQ_ing values for children were 9.3 times higher than those for adults. The HI values for all sampling sites were within the safe level (HI < 1). The cancer risk values for all sampling sites were within the safe level of 10⁻⁶ (i.e., 1 in 1,000,000 people) to 10⁻⁴ (i.e., 1 in 10,000 people); the cancer risk values for PM10 in Makkah city were within the acceptable range, implying negligible carcinogenic risk. More studies should be conducted to assess indoor exposure to air pollution, focusing on the more vulnerable groups such as infants, students, women, the elderly, and those suffering from other respiratory and cardiovascular diseases.
Figure 1: Sampling sites in Makkah city, Kingdom of Saudi Arabia (KSA). Table 1: Sampling sites description (site name and brief description).
Fig. 2: Low Volume Sampler (LVS) and PM10 holder.
Health Risk Assessment: The Daily Exposure Dose (D)
In the current study, the health risk assessment model developed by the Environmental Protection Agency of the United States (US EPA) was used to evaluate the health risks of PM10 in Makkah city, KSA. People in Makkah city (local residents and pilgrims) were divided into adults and children. Humans are exposed to PM10 in ambient air via three primary pathways: i) inhalation of suspended particles through the mouth and nose (D_inh); ii) ingestion of dust particles through the mouth (D_ing); and iii) dermal absorption of PM10 particles adhered to exposed skin (D_dermal). The daily exposure dose (D) of PM10 was calculated separately for each exposure pathway according to equations 1, 2, and 3 (US EPA, 1989; 2004). Exposure was expressed in terms of daily dose (mg/kg·day). The exposure factors for these models are shown in Table 2.
Figure 4: Non-carcinogenic risk distribution of the different exposure pathways for children and adults in Makkah.
Figure 5: Cancer risk distribution of the different exposure pathways for children and adults in Makkah.
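The paper's Table 2 (exposure factors) is not reproduced here, so the sketch below uses commonly published US EPA-style default factors for a child receptor purely for illustration; only the structure of the three pathway equations (Eqs. 1-3) is the point. How the study mapped ambient PM10 concentrations onto the dust concentration C is detailed in its methods and is not restated here.

```python
def daily_doses(c_mg_per_kg, ing_r=200, inh_r=7.6, sa=2800, af=0.2,
                abs_f=0.001, pef=1.36e9, ef=350, ed=6, bw=15, at=6 * 365):
    """Return (D_ing, D_inh, D_dermal) in mg/(kg*day); all factor values
    are illustrative defaults, not the study's Table 2 entries."""
    base = ef * ed / (bw * at)                            # shared exposure term
    d_ing = c_mg_per_kg * ing_r * base * 1e-6             # Eq. 1: ingestion
    d_inh = c_mg_per_kg * (inh_r / pef) * base            # Eq. 2: inhalation
    d_derm = c_mg_per_kg * sa * af * abs_f * base * 1e-6  # Eq. 3: dermal contact
    return d_ing, d_inh, d_derm

d_ing, d_inh, d_derm = daily_doses(c_mg_per_kg=300)
ranking = sorted({"D_ing": d_ing, "D_inh": d_inh, "D_dermal": d_derm}.items(),
                 key=lambda kv: -kv[1])
print(ranking)  # typically yields the order D_ing > D_dermal > D_inh
```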
Decentralized Online Scheduling of Malleable NP-hard Jobs
Abstract. In this work, we address an online job scheduling problem in a large distributed computing environment. Each job has a priority and a resource demand, takes an unknown amount of time, and is malleable, i.e., the number of allotted workers can fluctuate during its execution. We subdivide the problem into (a) determining a fair amount of resources for each job and (b) assigning each job to a corresponding number of processing elements. Our approach is fully decentralized, uses lightweight communication, and arranges each job as a binary tree of workers which can grow and shrink as necessary. Using the NP-complete problem of propositional satisfiability (SAT) as a case study, we experimentally show on up to 128 machines (6144 cores) that our approach leads to near-optimal utilization, imposes minimal computational overhead, and performs fair scheduling of incoming jobs within a few milliseconds.
Introduction
A parallel task is called malleable if it can handle a fluctuating number of workers during its execution. In the field of distributed computing, malleability has long been recognized as a powerful paradigm which opens up vast possibilities for fair and flexible scheduling and load balancing [13,17]. While most previous research on malleable job scheduling has steered towards iterative data-driven applications, we want to shed light on malleability in a very different context, namely for NP-hard tasks with unknown processing times. For instance, the problem of propositional satisfiability (SAT) is of high practical relevance and an important building block for many applications including automated planning [26], formal verification [18], and cryptography [20]. We consider malleable scheduling of such tasks highly promising: On the one hand, the description of a job can be relatively small even for very difficult problems, and the successful approach of employing many combinatorial search strategies in parallel can be made malleable without redistribution of data [27]. On the other hand, the limited scalability of these parallel algorithms calls for careful distribution of computational resources. We believe that a cloud-like on-demand system for resolving NP-hard problems has the potential to drastically improve efficiency and productivity for many organizations and environments. Using malleable job scheduling, we can schedule new jobs within a few milliseconds, resolve trivial jobs in a fraction of second, and rapidly resize more difficult jobs to a fair share of all resources -as far as the job can make efficient use of these resources.
To meet these objectives, we propose a fully decentralized scheduling approach which guarantees fast, fair, and bottleneck-free scheduling of resources without any knowledge on processing times. In previous work [27], we briefly outlined initial algorithms for this purpose while focusing on our award-winning scalable SAT solving engine which we embedded into our system. In this work, we shed more light on our earlier scheduling algorithms and proceed to propose significant improvements both in theory and in practice.
We address two subproblems. The first problem is to let m workers compute a fair number of workers v j for each active job j, accounting for its priority and maximum demand, which result in optimal system utilization. In previous work [27] we outlined this problem and employed a black box algorithm to solve it. The second problem is to assign v j workers to each job j while keeping the assignment as stable as possible over time. Previously [27], we proposed to arrange each job j as a binary tree of workers which grows and shrinks depending on v j , and we described and implemented a worker assignment strategy which routes request messages randomly through the system. When aiming for optimal utilization, this protocol leads to high worst-case scheduling latencies.
In this work, we describe fully distributed and bottleneck-free algorithms for both of the above problems. Our algorithms have O(log m) span and are designed to consistently achieve optimal utilization. Furthermore, we introduce new measures to preferably reuse existing (suspended) workers for a certain job rather than initializing new workers. We then present our scheduling platform Mallob, which features simplified yet highly practical implementations of our approaches. Experiments on up to 128 nodes (6144 cores) show that our system leads to near-optimal utilization and schedules jobs with a fair share of resources within tens of milliseconds. We consider our theoretical as well as practical results to be promising contributions towards processing malleable NP-hard tasks in a more scalable and resource-efficient manner.
Preliminaries
We now establish important preliminaries and discuss work related to ours.
Malleable Job Scheduling
We use the following definitions [10]: A rigid task requires a fixed number of workers. A moldable task can be scaled to a number of workers at the time of its scheduling but then remains rigid. Finally, a malleable task is able to adapt to a fluctuating number of workers during its execution. Malleability can be a highly desirable property of tasks because it allows to balance tasks continuously to warrant fair and optimal utilization of the system at hand [17]. For instance, if an easy job arrives in a fully utilized system, malleable scheduling allows to shrink an active job in order to schedule the new job immediately, significantly decreasing its response time. Due to the appeal of malleable job scheduling, there has been ongoing research to exploit malleability, from shared-memory systems [13] to HPC environments [6,9], even to improve energy efficiency [25].
The effort required to transform a moldable (or rigid) algorithm into a malleable algorithm depends on the application at hand. For iterative data-driven applications, redistribution of data is necessary if a task is expanded or shrunk [9]. In contrast, we demonstrated in previous work [27] for the use case of propositional satisfiability (SAT) that basic malleability is simple to achieve if the parallel algorithm is composed of many independent search strategies: The abrupt suspension and/or termination of individual workers can imply the loss of progress, but preserves completeness. Moreover, if workers periodically exchange knowledge, the progress made on a worker can benefit the job even if the worker is removed. For these reasons, we have not yet considered the full migration of application processes as is done in adaptive middlewares [9,16] but instead hold the application itself responsible to react to workers being added or removed.
Most prior approaches rely on known processing times of jobs and on an accurate model for their execution time relative to the degree of parallelism [5,24] whereas we do not rely on such knowledge. Furthermore, most approaches employ a centralized scheduler, which implies a potential bottleneck and a single point of failure. Our approach is fully decentralized and uses a small part of each processes' CPU time to perform distributed scheduling, which also opens up the possibility to add more general fault-tolerance to our work in the future. For instance, this may include continuing to schedule and process jobs correctly even in case of network-partitioning faults [2], i.e., failures where sub-networks in the distributed environment are disconnected from each another. Other important aspects of fault-tolerance include mitigation of simple node failures (i.e., a machine suddenly goes out of service) and of Byzantine failures [7] (i.e., a machine exhibits arbitrary behavior, potentially due to a malicious attack).
Scalable SAT Solving
The propositional satisfiability (SAT) problem poses the question whether a given propositional formula F = ⋀_{i=1}^{k} ⋁_{j=1}^{c_i} l_{i,j} is satisfiable, i.e., whether there is an assignment to all Boolean variables in F such that F evaluates to true. SAT is the archetypical NP-complete problem [8] and, as such, a notoriously difficult problem to solve. SAT solving is a crucial building block for a plethora of applications such as automated planning [26], formal verification [18], and cryptography [20]. State-of-the-art SAT solvers are highly optimized: The most popular algorithm, named Conflict-Driven Clause Learning (CDCL), performs depth-first search on the space of possible assignments, backtracks and restarts its search frequently, and derives redundant conflict clauses when encountering a dead end in its search [19]. As these clauses prune search space and can help to derive unsatisfiability, remembering important derived clauses is crucial for modern SAT solvers' performance [3].
The empirically best performing approach to parallel SAT solving is a socalled portfolio of different solver configurations [14] which all work on the original problem and periodically exchange learned clauses. In previous work, we presented a highly competitive portfolio solver with clause sharing [27] and demonstrated that careful periodic clause sharing can lead to respectable speedups for thousands of cores. The malleable environment of this solver is the system which we present here. Other recent works on decentralized SAT solving [15,21] rely on a different parallelization which generates many independent subproblems and tends to be outperformed by parallel portfolios for most practical inputs [11].
Problem Statement
We consider a homogeneous computing environment with a number of interconnected machines on which a total of m processing elements, or PEs in short, are distributed. Each PE has a rank x ∈ {0, . . . , m − 1} and runs exclusively on c ≥ 1 cores of its local machine. PEs can only communicate via message passing.
Jobs are introduced over an interface connecting to some of the PEs. Each job j has a job description, a priority p_j ∈ ℝ⁺, a demand d_j ∈ ℕ⁺, and a budget b_j (in terms of wallclock time or CPU time). If a PE participates in processing a job j, it runs an execution environment of j named a worker. A job's demand d_j indicates the maximum number of parallel workers it can currently employ: d_j is initialized to 1 and can then be adjusted by the job after an initial worker has been scheduled. A job's priority p_j may be set, e.g., depending on who submitted j and on how important they deem j relative to an average job of theirs. In a simple setting where all jobs are equally important, assume p_j = 1 ∀j. A job is cancelled if it spends its budget b_j before finishing. We assume for the active jobs J in the system that the number n = |J| of active jobs is no higher than m and that each PE employs at most one worker at any given time. However, a PE can preempt its current worker, run a worker of another job, and possibly resume the former worker at a later point.
Let T_j be the set of active workers of j ∈ J. We call v_j := |T_j| the volume of j. Our aim is to continuously assign each j ∈ J to a set T_j of PEs subject to: (C1) (Optimal utilization) Either all job demands are fully met or all m PEs are working on a job: Σ_{j∈J} v_j = min(m, Σ_{j∈J} d_j). (C2) (Individual job constraints) Each job must have at least one worker and is limited to d_j workers: 1 ≤ v_j ≤ d_j for all j ∈ J. (C3) (Fair volumes) Among jobs whose volume is neither raised to one nor capped at their demand by C2, resources are distributed proportionally to job priorities. Due to rounding, in C3 we allow job volumes to deviate by a single unit (see ε ≤ 1) from a fair distribution as long as the job of higher priority is favored.
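As a sketch, the checker below tests a volume assignment against C1-C3. Our encoding of C3 is an interpretation of the prose (proportionality up to a slack of ε ≤ 1 among jobs that are neither raised nor capped), not a formula taken verbatim from the text.

```python
def satisfies_constraints(v, d, p, m, eps=1.0):
    """v, d, p: dicts mapping job id to volume, demand, and priority."""
    jobs = v.keys()
    # C1: all demands met, or all m PEs are busy
    c1 = sum(v.values()) == min(m, sum(d.values()))
    # C2: 1 <= v_j <= d_j for every job
    c2 = all(1 <= v[j] <= d[j] for j in jobs)
    # C3 (interpreted): jobs that are neither raised to 1 nor capped at d_j
    # receive volumes proportional to their priorities, up to a slack of eps
    free = [j for j in jobs if 1 < v[j] < d[j]]
    c3 = all(abs(v[a] / p[a] - v[b] / p[b]) * min(p[a], p[b]) <= eps
             for a in free for b in free)
    return c1 and c2 and c3

v = {1: 4, 2: 2, 3: 1}; d = {1: 8, 2: 8, 3: 1}; p = {1: 2.0, 2: 1.0, 3: 1.0}
print(satisfies_constraints(v, d, p, m=7))  # -> True
```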
Approach
We subdivide the problem at hand into two subproblems: First, find fair volumes v_j for all currently active jobs j ∈ J subject to C1-C3. Secondly, identify pairwise disjoint sets T_j with |T_j| = v_j for each j ∈ J. In this section, we present fully decentralized and highly scalable algorithms for both subproblems. In Section 4.1 we describe how our practical implementation differs from these algorithms.
To assess our algorithms, we consider two important measures from parallel processing. Given a distributed algorithm, consider the dependency graph which is induced by the necessary communication among all PEs. The span (or depth) of the algorithm is the length of a critical path through this graph. The local work is the complexity of local computations summed up over all PEs.
Calculation of Fair Volumes
Given jobs J with individual priorities and demands, we want to find a fair volume v_j for each job j such that constraints C1-C3 are met. Volumes are recomputed periodically, taking into account new jobs, departing jobs, and changed demands. In the following, assume that each job has a single worker which represents this (and only this) job. We elaborate on these representants in Section 3.2.
We defined our problem such that n = |J| ≤ m. Similarly, we assume Σ_{j∈J} d_j > m since otherwise we can trivially set v_j = d_j for all jobs j. Assuming real-valued job volumes for now, we can observe that for any parameter α ≥ 0, constraints C2-C3 are fulfilled if we set v_j = v_j(α) := max(1, min(d_j, α p_j)). By appropriately choosing α, we can also meet the utilization constraint C1: Consider the function ξ(α) := m − Σ_{j∈J} v_j(α), which expresses the unused resources for a particular value of α. Function ξ is a continuous, monotonically decreasing, and piece-wise linear function (see Fig. 1). Moreover, ξ(0) = m − n ≥ 0 and ξ(max_{j∈J} d_j/p_j) = m − Σ_{j∈J} d_j < 0. Hence ξ(α) = 0 has a solution α₀ which represents the desired choice of α that exploits all resources, i.e., it also fulfills constraint C1.
Fig. 1. Volume calculation example with four jobs and m = 7. Five of the eight points where ξ(α) is evaluated are depicted; three more (d₃/p₃, d₁/p₁, and d₂/p₂) are omitted.
Once α₀ is found, we need to round each v_j(α₀) to an integer. Due to C1 and C3, we propose to round down all volumes and then increment the volume of the k := m − Σ_j ⌊v_j(α₀)⌋ jobs of highest priority: We identify J′ := {j ∈ J : ⌊v_j(α₀)⌋ < d_j}, sort J′ by job priority, and select the first k jobs. We now outline a fully distributed algorithm which finds α₀ in logarithmic span. We exploit that ξ′, the gradient of ξ, changes at no more than 2n values of α, namely when αp_j = 1 or αp_j = d_j for some j ∈ J. Since we have m ≥ n PEs available, we can try these O(n) values of ξ(α) in parallel. We then find the two points with the smallest positive value and the largest negative value using a parallel reduction operation. Lastly, we interpolate ξ between these points to find α₀.
The parallel evaluation of ξ is still nontrivial since a naive implementation would incur quadratic work: O(n) for each value of α. We now explain how to accelerate the evaluation of ξ. For this, we rewrite ξ(α) = m − Σ_{j∈J} v_j(α) as ξ(α) = m − R − αP, where R := |{j ∈ J : αp_j < 1}| + Σ_{j∈J: αp_j > d_j} d_j and P := Σ_{j∈J: 1 ≤ αp_j ≤ d_j} p_j. Intuitively, R sums up all resources which are assigned due to raising a job volume to 1 (if αp_j < 1) and due to capping a job volume at d_j (if αp_j > d_j); and αP sums up all resources assigned as αp_j to the remaining jobs, whose volumes are neither raised nor capped. This new representation only features two unknown variables, R and P, which can be computed efficiently. At α = 0, we have R = n and P = 0 since all job volumes are raised to one. If we then successively increase α, we pass 2n events where R and P are modified, namely whenever αp_j = 1 or αp_j = d_j for some job j. Since each such event modifies R and P by a fixed amount, we can use a single prefix sum calculation to obtain all intermediate values of R and P.
Each event e = (α_e, r_e, p_e) occurs at point α_e and adds r_e to R and p_e to P. Each job j causes two events: e_j = (1/p_j, −1, p_j) for the point αp_j = 1 where v_j stops being raised to 1, and e′_j = (d_j/p_j, d_j, −p_j) for the point αp_j = d_j where v_j begins to be capped at d_j. We sort all events by α_e and then compute a prefix sum over r_e and p_e: (R_e, P_e) = (Σ_{f⪯e} r_f, Σ_{f⪯e} p_f), where "⪯" denotes the ordering of events after sorting. Here, R_e and P_e are the accumulated event contributions, so that n + R_e equals the R from above. We can now compute ξ(α_e) = m − (n + R_e) − α_e P_e at each event e (the value of n can be obtained with a parallel reduction).
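The following sequential Python sketch illustrates the event-based evaluation of ξ and the final interpolation step; the job list at the bottom happens to use four jobs and m = 7 like Fig. 1, but its priorities and demands are our own assumptions, not those of the figure.

```python
def find_alpha0(jobs, m):
    """jobs: list of (p_j, d_j) pairs; returns alpha_0 with xi(alpha_0) = 0.
    Assumes the sum of demands exceeds m (otherwise v_j = d_j is trivial)."""
    n = len(jobs)
    events = []
    for p, d in jobs:
        events.append((1.0 / p, -1.0, p))     # alpha*p_j = 1: v_j stops being raised
        events.append((d / p, float(d), -p))  # alpha*p_j = d_j: v_j becomes capped
    events.sort()
    R, P = 0.0, 0.0                           # exclusive running sums; xi(0) = m - n
    for alpha_e, r_e, p_e in events:
        xi_e = m - (n + R + r_e) - alpha_e * (P + p_e)  # xi at the event point
        if xi_e <= 0:
            # root lies on the current segment, where xi(a) = m - (n+R) - a*P
            return (m - (n + R)) / P if P > 0 else alpha_e
        R += r_e
        P += p_e
    return events[-1][0]  # unreachable under the stated assumption

jobs = [(1.0, 4), (2.0, 3), (0.5, 5), (1.0, 1)]  # (priority, demand), made up
alpha0 = find_alpha0(jobs, m=7)
print(alpha0, [max(1.0, min(d, alpha0 * p)) for p, d in jobs])
# -> 2.0 [2.0, 3, 1.0, 1.0]; the real-valued volumes sum to exactly m = 7
```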
Overall, our algorithm has O(log m) span and takes O(m log m) work: Sorting O(n) elements in parallel on m ≥ n PEs is possible in logarithmic time, as is computing reductions and prefix sums. Selecting the k jobs to receive additional volume after rounding down all volumes can be reduced to sorting as well.
Assignment of Jobs to PEs
We now describe how the fair volumes computed as in the previous section translate to an actual assignment of jobs to PEs.
Basic Approach. We begin with our basic approach as introduced in [27].
For each job j, we address the k current workers in T_j as w_j^0, w_j^1, …, w_j^{k−1}. These workers can be scattered throughout the system, i.e., their job-internal indices 0, …, k−1 within T_j are not to be confused with their ranks. The k workers form a communication structure in the shape of a binary tree (Fig. 2): worker w_j^0 is the root of this tree and represents j for the calculation of its volume (Section 3.1), and workers w_j^{2i+1} and w_j^{2i+2} are the left and right children of w_j^i. Jobs are made malleable by letting T_j grow and shrink dynamically. Specifically, we enforce that T_j consists of exactly k = v_j workers. If v_j is updated, all workers w_j^i for which i ≥ v_j are suspended and the corresponding PEs turn idle. Likewise, workers without a left (right) child for which 2i + 1 < v_j (2i + 2 < v_j) attempt to find a child worker w_j^{2i+1} (w_j^{2i+2}). New workers are found via request messages: A request message r = (j, i, x) holds the index i of the requested worker w_j^i as well as the rank x of the requesting worker. If a new job is introduced at some PE, then this PE emits a request for the root node w_j^0 of T_j. All requests for w_j^i, i > 0, are emitted by the designated parent node w_j^{⌊(i−1)/2⌋} of the desired worker. In [27], we proposed that each request performs a random walk through a regular graph of all PEs and is resolved as soon as it hits an idle PE. While this strategy resolves most requests quickly, some requests can require a large number of hops. If we assume a fully connected graph of PEs and a small share ε of workers is idle, then each hop of a request corresponds to a Bernoulli process with success probability ε, and a request takes an expected 1/ε hops until an idle PE is hit. Consequently, to improve worst-case latencies, a small ratio of workers should be kept idle [27]. By contrast, our following algorithm with logarithmic span does not depend on suboptimal utilization.
Matching Requests and Idle PEs. In a first phase, our improved algorithm (see Fig. 3) computes two prefix sums with one collective operation: the number q_i of requests emitted by PEs of rank < i, and the number o_i of idle PEs of rank < i. We also compute the total sums, q_m and o_m, and communicate them to all PEs. The q_i and o_i provide an implicit global numbering of all requests and all idle PEs. In a second phase, the i-th request and the i-th idle token are both sent to rank i. In the third and final phase, each PE which received both a request and an idle token sends the request to the idle PE referenced by the token.
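A sequential emulation of the three phases may look as follows; `requests` and `idle` are invented inputs, and the real algorithm computes the prefix sums collectively and transports requests and tokens via messages.

```python
def match_requests(requests, idle):
    """requests[i]: number of requests emitted by PE i; idle[i]: PE i is idle."""
    # Phase 1: exclusive prefix sums q_i provide a global request numbering.
    q = [0]
    for r in requests:
        q.append(q[-1] + r)
    # The analogous numbering of idle PEs is their order of appearance:
    idle_ranks = [x for x, s in enumerate(idle) if s]
    # Phases 2+3 collapsed: request number k and idle token number k meet at
    # rank k, which forwards the request to the k-th idle PE.
    matches = []
    for x, r in enumerate(requests):
        for k in range(q[x], q[x] + r):
            if k < len(idle_ranks):
                matches.append((x, idle_ranks[k]))
    return matches

print(match_requests(requests=[2, 0, 1, 0], idle=[False, True, False, True]))
# -> [(0, 1), (0, 3)]; the third request stays unmatched and is retried later
```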
If the request for a worker w_j^i is only emitted by its designated parent w_j^{⌊(i−1)/2⌋}, then our algorithm so far may need to be repeated O(log m) times: repetition l activates a worker which then emits requests for repetition l + 1. Instead, we can let a worker emit requests not only for its direct children, but for all transitive children it deserves. Each worker w_j^i can compute the number k of desired transitive children from v_j and i (see the sketch below). The worker then contributes k to q_i. In the second phase, the k requests can be distributed communication-efficiently to a range of ranks {x, …, x + k − 1}: w_j^i sends requests for workers w_j^{2i+1} and w_j^{2i+2} to ranks x and x + 1, which send requests for the corresponding child workers to ranks x + 2 through x + 5, and so on, until worker index v_j − 1 is reached. To enable this distribution, we append to each request the values x, v_j, and the rank of the PE where the respective parent worker will be initialized. As such, each child knows its parent within T_j (Fig. 3) for job-internal communication.
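As referenced above, a worker can count its desired transitive children with a simple traversal of the implicit tree indices. The sketch below is a straightforward O(k) loop; the O(log m) computation mentioned in the analysis further down can be obtained by counting whole tree levels in closed form instead of visiting each index.

```python
def num_desired_descendants(i: int, v: int) -> int:
    """Number of worker indices in the binary-heap subtree below i
    that are smaller than the job volume v."""
    count, layer = 0, [i]
    while layer:
        nxt = []
        for k in layer:
            for child in (2 * k + 1, 2 * k + 2):
                if child < v:
                    count += 1
                    nxt.append(child)
        layer = nxt
    return count

# The root of a job with volume 6 requests five transitive children:
print(num_desired_descendants(0, 6))  # -> 5
print(num_desired_descendants(2, 6))  # -> 1 (only index 5 lies below index 2)
```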
We now outline how our algorithm can be executed in a fully asynchronous manner. We compute the prefix sums within an in-order binary tree of PEs [22, Chapter 13.3], that is, all children in the left subtree of rank i have a rank < i and all children in the right subtree have a rank > i. This prefix sum computation can be made sparse and asynchronous: Only non-zero contributions to a prefix sum are sent upwards explicitly, and there is a minimum delay in between sending contributions to a parent. Furthermore, we extend our prefix sums to also include inclusive prefix sums q′_i, o′_i which denote the number of requests (tokens) at PEs of rank ≤ i. As such, every PE can see from the difference q′_i − q_i (o′_i − o_i) how many of its local requests (tokens) took part in the prefix sum. Last but not least, the number of tokens and the number of requests may not always match: a PE which receives either a request or an idle token (but not both) knows of this imbalance due to the total sums q_m, o_m. The unmatched message is sent back to its origin and can re-participate in the next iteration.
Our matching algorithm has O(log m) span and takes O(m) local work. The maximum local work of any given PE is in O(log m) (to compute the above k), which is amortized by other PEs because at most m requests are emitted.
Reuse of Suspended Workers
Each PE remembers up to C most recently used workers (for a small constant C) and deletes older workers. Therefore, if a worker w i j is suspended, it may be resumed at a later time. Our algorithms so far may choose different PEs and hence create new workers whenever T j shrinks and then re-grows. We now outline how we can increase the reuse of suspended workers.
In our previous approach [27], each worker remembers a limited number of ranks of its past (direct) children. A worker which desires a child queries them for reactivation one after the other until success or until all past children have been queried unsuccessfully, at which point a normal job request is emitted.
We make two improvements to this strategy. First, we remember past workers in a distributed fashion. More precisely, whenever a worker joins or leaves T_j, we distribute information along T_j to maintain the following invariant: Each current leaf w_j^i in T_j remembers the past workers which were located in a subtree below index i. As such, past workers can be remembered and reused even if T_j shrinks by multiple layers and re-grows differently.
Secondly, we adjust our scheduling to actively prioritize the reuse of existing workers over the initialization of new workers. In our implementation, each idle PE can infer from its local volume calculation (Section 4.1) which of its local suspended workers w_j^i are eligible for reuse, i.e., v_j > i in the current volume assignment. If a PE has such a worker w_j^i, the PE will reject any job requests until it has received a message regarding w_j^i. This message is either a query to resume w_j^i or a notification that w_j^i will not be reused. On the opposite side, a worker which desires a child begins to query past children according to a "most recently used" strategy. If a query succeeds, all remaining past children are notified that they will not be reused. If all queries failed, a normal job request is emitted.
The Mallob System
In the following, we outline the design and implementation of our platform named Mallob, short for Malleable Load Balancer. Mallob is a C++ application using the Message Passing Interface (MPI) [12]. Each PE can be configured to accept jobs and return responses, e.g., over the local file system or via an internal API. The application-specific worker running on each PE is defined via an interface with a small set of methods. These methods define the worker's behavior if it is started, suspended, resumed, or terminated, and allow it to send and receive application-specific messages at will. Note that we outlined some of Mallob's earlier features in previous work [27] with a focus on our malleable SAT engine.
Implementation of Algorithms
Our system features practical and simplified implementations solving the volume assignment problem and the request matching problem. We now explain how and why these implementations differ from the algorithms provided in Section 3.
Volume Assignment. Our implementation computes job volumes similarly to the algorithm outlined in Section 3.1. However, each PE computes the desired root α₀ of ξ locally. All events in the system (job arrivals, departures, and changes in demands) are aggregated and broadcast periodically such that each PE can maintain a local image of all active jobs' demands and priorities [27]. The local search for α₀ is then done via bisection over the domain of ξ. This approach requires more local work than our fully distributed algorithm and features a broadcast of worst-case message length O(n). However, it only requires a single all-reduction. At the scale of our current implementation (n < 10³ and m < 10⁴), we expect that our simplified approach performs better than our asymptotically superior algorithm, which features several stages of collective operations. When targeting much larger configurations in the future, it may be beneficial to implement and employ our fully distributed algorithm instead.
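As a sketch of this simplified scheme, the bisection below searches for α₀ over a local image of all jobs; the iteration count and the job list are arbitrary choices for illustration, and we again assume that the sum of demands exceeds m.

```python
def xi(alpha, jobs, m):
    """Unused resources for scaling factor alpha; jobs are (p_j, d_j) pairs."""
    return m - sum(max(1.0, min(d, alpha * p)) for p, d in jobs)

def find_alpha0_bisection(jobs, m, iters=60):
    lo, hi = 0.0, max(d / p for p, d in jobs)  # xi(lo) >= 0 and xi(hi) < 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if xi(mid, jobs, m) >= 0:
            lo = mid   # still unused resources: increase alpha
        else:
            hi = mid   # over-assigned: decrease alpha
    return lo

jobs = [(1.0, 4), (2.0, 3), (0.5, 5), (1.0, 1)]  # made-up (priority, demand)
print(round(find_alpha0_bisection(jobs, m=7), 6))  # -> 2.0
```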
Request Matching. We did not yet implement asynchronous prefix sums as described in Section 3.2. Instead, we route requests directly along a communication tree R of PEs. Each PE keeps track of the idle count, i.e., the number of idle PEs, in each of its subtrees in R. This count is updated transitively whenever the idle status of a child changes. Emitted requests are routed upwards through R until hitting an idle PE or until a hit PE has a subtree with a non-zero idle count, at which point the request is routed down towards the idle PE. If a large number of requests (close to n) are emitted, the traffic along the root of R may constitute a bottleneck. However, we found that individual volume updates in the system typically result in a much smaller number of requests, hence we did not observe such a bottleneck in practice. We intend to include our bottleneck-free algorithm (Section 3.2) in a future version of our system.
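The following sketch emulates this routing logic sequentially on an explicit tree; in the real system, the walk happens via asynchronous messages and the idle counts are updated as PEs change status. The tree layout and the idle PE below are invented for the example.

```python
def route_request(x, parent, children, idle, idle_count):
    """Walk upwards from PE x until some subtree contains an idle PE,
    then walk downwards to that idle PE. idle_count[c] is the number of
    idle PEs in the subtree rooted at c (including c itself)."""
    node = x
    while not idle[node] and not any(idle_count[c] > 0 for c in children[node]):
        if parent[node] is None:
            return None              # no idle PE reachable: defer the request
        node = parent[node]          # route upwards
    while not idle[node]:            # descend towards an idle PE
        node = next(c for c in children[node] if idle_count[c] > 0)
    return node

# Seven PEs arranged as a complete binary tree; only PE 5 is idle.
parent   = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
idle = {x: x == 5 for x in range(7)}
idle_count = {x: 0 for x in range(7)}
for x in (5, 2, 0):                  # idle count propagated to all ancestors
    idle_count[x] += 1
print(route_request(3, parent, children, idle, idle_count))  # -> 5
```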
Engineering
For good practical performance of our system, careful engineering was necessary. For instance, our system exclusively features asynchronous communication, i.e., a PE will never block for an I/O event when sending or receiving messages. As a result, our protocols are designed without explicit synchronization (barriers or similar). We only let the main thread of a PE issue MPI calls, which is the most widely supported mode of operation for multithreaded MPI programs.
As we aim for scheduling latencies in the range of milliseconds, each PE must frequently check its message queue and react to messages. For instance, if the main thread of a PE allocates space for a large job description, this can cause a prohibitively long period where no messages are processed. For this reason, we use a separate thread pool for all tasks which involve a risk of taking a long time. Furthermore, we split large messages into batches of smaller messages, e.g., when transferring large job descriptions to new workers.
Evaluation
We now present our experimental evaluation. All experiments have been conducted on the supercomputer SuperMUC-NG. If not specified otherwise, we used 128 compute nodes, each with an Intel Skylake Xeon Platinum 8174 processor clocked at 2.7 GHz with 48 physical cores (96 hardware threads) and 96 GB of main memory. SuperMUC-NG is running Linux (SLES) with kernel version 4.12 at the time of running our experiments. We compiled Mallob with GCC 9 and with Intel MPI 2019. We launch twelve PEs per machine, assign eight hardware threads to each PE, and let a worker on a PE use four parallel worker threads. Our system can use the four remaining hardware threads on each PE in order to keep disturbance of the actual computation at a minimum. Our software and experimental data are available at https://github.com/domschrei/mallob.
Uniform Jobs
In a first set of experiments, we analyze the base performance of our system by introducing a stream of jobs in such a way that exactly n_par jobs are in the system at any time. We limit each job j to a CPU time budget B inversely proportional to n_par. Each job corresponds to a difficult SAT formula which cannot be solved within the given budget. As such, we emulate jobs of fixed size.
We chose m and the values of n_par in such a way that m/n_par ∈ ℕ for all runs. We compare our runs against a hypothetical rigid scheduler which functions as follows: Exactly m/n_par PEs are allotted for each job, starting with the first n_par jobs at t = 0. At periodic points in time, all jobs finish and each set of PEs instantly receives the next job. This leads to perfect utilization and maximizes throughput. We neglect any kind of overhead for this scheduler.
For a modest number of parallel jobs n_par in the system (n_par ≤ 192), our scheduler reaches 99% of the optimal rigid scheduler's throughput (Table 1). This efficiency decreases to 97.6% for the largest n_par, where v_j = 2 for each job. As the CPU time of each job is calculated in terms of its assigned volume and as the allocation of workers takes some time, each job uses slightly less CPU time than advertised: Dividing the time for which each job's workers have been active by its advertised CPU time, we obtained a work efficiency of η ≥ 99%. Lastly, we measured the CPU utilization of all worker threads as reported by the operating system, which averages at 98% or more. In terms of overall work efficiency η × u, we observed an optimum of 98% at n_par = 192, a point where neither n_par nor the size of individual job trees is close to m.
Table 1. Scheduling uniform jobs on 1536 PEs (6144 cores) compared to a hypothetical optimal rigid scheduler. From left to right: maximum number n_par of parallel jobs; maximum measured throughput θ; optimal throughput θ_opt (in jobs per second); throughput efficiency θ/θ_opt; work efficiency η; mean measured CPU utilization u of worker threads.
Impact of Priorities
In the following we evaluate the impact of job priorities. We use 32 nodes (1536 cores, 384 PEs) and introduce nine streams of jobs, each stream with a different job priority p ∈ [0.01, 1] (see Fig. 4 right) and with a wallclock limit of 300 s per job. As such, the system processes nine jobs with nine different priorities at a time. Each stream is a permutation of 80 diverse SAT instances [27].
As expected, we observed a proportional relationship between priority and assigned volume, with small variations due to rounding (Fig. 4). By contrast, response times appear to decrease exponentially towards a certain lower bound, which is in line with the NP-hardness of SAT and the diminishing returns of parallel SAT solving [27]. The modest margin by which average response times decrease is due to the difficulty of the chosen SAT benchmarks, many of which cannot be solved within the imposed time limit at either scale.
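As a rough illustration of this proportionality, the simplified sketch below splits a fixed number of workers across jobs in proportion to their priorities, with rounding; the system's real volume calculation additionally respects per-job demands and fair shares, so this is only meant to convey the trend visible in Fig. 4.

```python
# Simplified sketch: assign each active job a volume roughly proportional to
# its priority p_j, subject to rounding. Not the system's actual algorithm.
def proportional_volumes(priorities, total_workers):
    total_p = sum(priorities)
    raw = [total_workers * p / total_p for p in priorities]
    vols = [max(1, int(r)) for r in raw]  # every active job keeps >= 1 worker
    # hand out the rounding remainder to the largest fractional parts
    remainder = total_workers - sum(vols)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - int(raw[i]), reverse=True)
    for i in order[:max(0, remainder)]:
        vols[i] += 1
    return vols

if __name__ == "__main__":
    # nine priorities as in the experiment above, spread over [0.01, 1]
    print(proportional_volumes([0.01, 0.1, 0.2, 0.5, 1.0], total_workers=384))
```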
Realistic Job Arrivals
In the next set of experiments, we analyze the properties of our system in a more realistic scenario. Four PEs introduce batches of jobs with Poisson-distributed arrivals (inter-arrival time of 1/λ ∈ {2.5 s, 5 s, 10 s}) and between one and eight jobs per batch. As such, we simulate users which arrive at independent times and submit a number of jobs at once. We also sample a priority p_j ∈ [0.01, 1], a maximum demand d_j ∈ {1, . . . , 1536}, and a wallclock limit b_j ∈ [1, 600] s for each job. We ran this experiment with our current request matching (Section 4.1) and with each request message performing up to h random hops (as in [27]) until our request matching is employed, for varying values of h. In addition, we ran the experiment with three different suspended worker reuse strategies: no deliberate reuse at all, the basic approach from [27], and our current approach. Fig. 5 (left) shows the number of active jobs in the system over time for our default configuration (our reuse strategy and immediate matching of requests). For all tested inter-arrival times, considerable changes in the system load can be observed during a job's average life time, which justify the employment of a malleable scheduling strategy. Fig. 5 (right) illustrates for 1/λ = 5 s that system utilization is at around 99.8% on average and almost always above 99.5%. We also measured the ratio of time for which each PE has been idle: The median PE was busy 99.08% of all time for the least frequent job arrivals (1/λ = 10 s), 99.77% for 1/λ = 5 s, and 99.85% for 1/λ = 2.5 s. Also note that Σ_j d_j < m for the first seconds of each run, hence not all PEs can be utilized immediately.
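The workload just described can be reproduced approximately with the following sketch; the parameter ranges are taken from the text, while the concrete sampling distributions inside those ranges (uniform) are an assumption.

```python
import random

# Sketch of the synthetic workload described above: batches of jobs arrive
# with exponentially distributed inter-arrival times (a Poisson process), and
# each job samples a priority, a maximum demand, and a wallclock limit.
def generate_job_stream(mean_interarrival_s=5.0, horizon_s=3600.0, seed=0):
    rng = random.Random(seed)
    t, jobs = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_interarrival_s)
        if t > horizon_s:
            break
        for _ in range(rng.randint(1, 8)):  # 1..8 jobs per batch
            jobs.append({
                "arrival_s": t,
                "priority": rng.uniform(0.01, 1.0),
                "max_demand": rng.randint(1, 1536),
                "wallclock_limit_s": rng.uniform(1.0, 600.0),
            })
    return jobs

if __name__ == "__main__":
    stream = generate_job_stream()
    print(len(stream), "jobs generated")
```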
In the following, we focus on the experiment with 1/λ = 5 s. The latency of our volume calculation, i.e., the latency until a PE received an updated volume for an updated job, reached a median of 1 ms and a maximum of 34 ms for our default configuration. For the scheduling of an arriving job, Fig. 6 (left) shows that the lowest latencies were achieved by our request matching (h = 0). For increasing values of h, the variance of latencies increases and high latencies (≥ 50 ms) become more and more likely. Note that jobs normally enter a fully utilized system, and have d_j = 1. Therefore, the triggered balancing calculation may render only a single PE idle, which heavily disfavors performing a random walk. Regarding the latency of expanding a job tree by another layer, Fig. 6 (right) indicates that requests performing random walks have a high chance to succeed quickly but can otherwise result in high latencies (> 10 ms).
To compare strategies for reusing suspended workers, we divided the number of created workers for a job j by its maximum assigned volume v_j. This Worker Creation Ratio (WCR) is ideally 1 and becomes larger the more often a worker is suspended and then re-created at a different PE. We computed the WCR for each job and in total: As Tab. 2 shows, our approach reduces a WCR of 2.14 down to 1.8 (-15.9%). Context switches (i.e., how many times a PE changed its affiliation) and average response times are improved marginally compared to the naive approach. Last but not least, we counted on how many distinct PEs each worker w_j^i has been created: Our strategy initializes 89% of all workers only once, and 94% of workers have been created at most five times. We conclude that most jobs only feature a small number of workers which are rescheduled frequently.
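For reference, the WCR metric can be computed as in the sketch below; the per-job input format (worker creation count and maximum assigned volume) is assumed here, and the toy numbers are not the measured values reported in Tab. 2.

```python
from statistics import median

# Sketch of the Worker Creation Ratio (WCR): per job, the number of worker
# creations divided by the job's maximum assigned volume; plus a total ratio.
def wcr_per_job(creations: int, max_volume: int) -> float:
    return creations / max_volume

def wcr_summary(jobs):
    ratios = [wcr_per_job(c, v) for c, v in jobs]
    total = sum(c for c, _ in jobs) / sum(v for _, v in jobs)
    return {"median": median(ratios), "max": max(ratios), "total": total}

if __name__ == "__main__":
    # (worker creations, max volume) per job; toy numbers for illustration
    print(wcr_summary([(12, 10), (7, 7), (30, 12)]))
```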
Conclusion
We have presented a decentralized and highly scalable approach to online job scheduling of malleable NP-hard jobs with unknown processing times. We split our problem into two subproblems, namely the computation of fair job volumes and the assignment of jobs to PEs, and proposed scalable distributed algorithms with O(log m) span for both of them. We presented a practical implementation and experimentally showed that it schedules incoming jobs within tens of milliseconds, distributes resources proportional to each job's priority, and leads to near-optimal utilization of resources.

Table 2. Comparison of worker reuse strategies in terms of worker creation ratio (WCR, per job: median and maximum, and in total), context switches (CS, median per PE and mean), the number of processed jobs within 1 h (Pr.), their mean response time (RT), and the fraction of workers created on at most {1, 2, 5, 10, 25} distinct PEs.
For future work, we intend to add engines for applications beyond SAT into our system. Furthermore, we want to generalize our approach to heterogeneous computing environments and add fault tolerance to our distributed algorithms. | 2022-08-04T13:06:35.811Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "31d482e60e9ce0a15df0742a30bd0ad5a45c0378",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-031-12597-3_8.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "32a006ea479e819d929c0fc76b8ca51275d5c870",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
190875731 | pes2o/s2orc | v3-fos-license | Sino-nasal T/NK lymphomas (Medio-Facial Malignant Granulomas)
T/NK lymphoma is a rare pathological entity characterized by a necrotic process starting in the nasal cavity, extending to the facial bones and destroying them. The World Health Organization's classification has helped to standardize the different reviews of this pathology. As with many other diseases, T/NK lymphoma is related to EBV infection. Its prognosis remains poor, even though current treatments allow long remissions. Evolution is often difficult to predict, which calls for other prognostic markers such as cytogenetics. The aim of our work was to describe the epidemiological, clinical and therapeutic characteristics of T/NK lymphoma through a retrospective study of 18 cases collected in our department over a period of 17 years (2000-2016).
Results
The mean age of our patients was 51 years old (35-81 years). A frequency peak was noted in the age group 40 -50 years old. A male predominance was, also, noted with a sex ratio of five (15 males / 3 females). The mean diagnosis delay was 6 months (1-18 months). A history of hypertension was found in two patients and diabetes in three patients. Nasal obstruction was the main complaint reported by 15 patients: unilateral in 11 cases and bilateral in four cases. Other symptoms were purulent rhinorrhea: 11 cases; spontaneous epistaxis: 11 cases; cacosmia: five cases; otalgia, sensation of ear fullness and unilateral hearing impairment: four patients. Ophthalmologic complaints were reduced visual acuity in two cases and blindness in one case. Headaches, impaired general condition and prolonged fever were reported in eight, seven and four cases respectively. Physical examination found facial deformities in 11 patients: mandibular and jugal swelling in one and four cases respectively, hemi-face edema in four cases and nasal pyramid deformity in four cases. Unilateral exophthalmia was also found in four cases associated to chemosis in one case.
All patients have had a nasal endoscopy, which objectified a tumor located in the nasal cavity in 17 cases and in the cavum in one case. Examination of the oral cavity showed necrotic ulceration of the hard palate mucosa in three cases performing an oro-nasal communication in one case. Neck examination found adenopathies in five cases: unilateral in two cases and bilateral in three cases. They were hard and fixed in three cases. They were located in the lymph node groups II, IV and V in three cases, group II in one case and groups II and V in one case. Neurological examination revealed a paralysis of cranial nerve pairs II, III, IV and V in two patients, paralysis of left VI in one patient and of V cranial nerve in one case. A facial bone CT scan was performed for all patients ( Figure 1). It objectified a sino-nasal tumoral process and evoked the diagnosis of TNK lymphoma in two cases. This tumor was located in nasal cavity in 17 cases with extension to ethmoid in seven cases, maxillary sinus in six cases, sphenoid sinus in one case, cavum in two cases, oral cavity in two cases and to oropharynx in three cases. It was located in the cavum in one case. An orbital extension through an orbital lamina break in was found in four cases. A cerebral CT scan, performed in two cases, showed intracranial extension with mass effect on frontal lobe. Extension to soft tissues was observed in six cases including buccal mucosa in four cases, pre-mandibular and frontal soft tissues in one case for each one. A total body CT scan was performed in nine cases as part of distant extension assessment. It was normal in five cases. It revealed the presence of mediastinal lymphadenopathies in four cases. All patients have had a tumoral biopsy. Histologic description showed a chorion occupied by lymphomatous proliferation made of medium to large cells with clear cytoplasmic. In places, the tumor cells were arranged in perivascular sheaths (angiocentric) destroying the vascular wall (angio-destructive) with hemorrhagic suffusions and tumor necrosis. Immuno-histochemical study showed in all cases an intense and diffuse cytoplasmic positivity of tumor cells to anti-CD3 antibody and their negativity to anti-CD20 antibodies and to cytokeratin.
The CD56 antibody, performed for all patients, was positive while CD30 (achieved in 12 cases) was positive in only seven cases. Clinical and para-clinical assessment allowed to classify the tumor as follows (according to Ann Arbor classification): stage IE in 12 cases (67%) and stage IIE in four cases (22%) and IV in two cases (11%). Two patients were lost to sight and five died before any therapeutic approach. Treatment was based on exclusive radiotherapy for four patients classified IE stage. Combined treatment (radiotherapy and chemotherapy) was performed in five cases classified as diffuse IE stage and in one patient with stage IV disease. Chemotherapy was performed in four patients classified stage IIE in three cases and stage IE diffuse in one case. The mean follow-up of our patients was 8 months (2-42 months). Only two patients were in clinical remission. The evolution was marked by the death of 12 patients and four patients were lost during follow-up. Seven patients underwent an imaging control (CT scan): four were in clinical remission and the scanner was without abnormalities. Two of these patients were, later, lost to follow-up.
Discussion
Sino-nasal TNK lymphomas are rare. They are more common in Asia, Mexico and South America. In Tunisia, their frequency is unknown [1]. TNK lymphoma accounts for 45% of primary nasal lymphomas [2]. It associates ulcerative-necrotic lesions preferentially starting in the nasal cavities and sinuses (70%). It can also develop in the Waldeyer's ring (38%), oral cavity (14%), larynx, hypopharynx (10%) and even in the mandible or cheek [3]. T / NK lymphoma can be seen at any age but mainly affects subjects in the fourth and fifth decade [4]. The average age of our patients was 51 years old. It often affects male subjects [2,5]. In our study, the sex ratio was five. T / NK lymphoma has an unknown pathogenesis. However, it is strongly associated with EBV infection [1,6]. This EBV infection is associated with a poor prognosis with frequent local relapses, possibility of extension outside lymph nodes territories and appearance of macrophage activation syndrome [5]. Detection of EBV in almost all tumor cells allows for tumor proliferation of cytotoxic phenotype to make the diagnosis in cases of unusual clinical presentation [7].
Some studies have linked these lymphomas to overexpression of the P53 protein, associated or not with P53 gene mutation (often induced by the presence of EBV) [8,9]. The mean diagnosis delay is often prolonged because of the chronicity of lesions and absence of specific revelation mode [10]. The majority of patients presented with localized lesions but often with invasion of nearby structures such as: face sinuses, palate and nasal cavities [11]. Patients often present with unilateral nasal obstruction, which is initially intermittent, and then permanent associated with fetid rhinorrhea. Smell disorders can also be present [1]. In 20 to 40%
of cases, clinical presentation may be distorted by a generalized granulomatosis affecting skin, subcutaneous tissues, eyes, gut, lungs and nervous system [12]. A large number of biopsy specimens are necessary in order to diagnose nasal T/NK lymphoma. In fact, this tumor is often the site of necrotic and hemorrhagic rearrangements, and biopsies may only cover areas of remodeling [13]. Histological examination shows layers of atypical cells of variable size. Mitoses are frequent. What characterizes nasal T/NK lymphoma is the presence of vascular lesions with tumor cells arranged in perivascular cuffs (angiocentrism), with penetration of these cells into the vascular wall and lumen, sometimes forming vascular thrombi (angiodestructive lesions).
Areas of necrosis and fibrosis are observed, with pseudoepitheliomatous hyperplasia of the nasal mucosa [14,15]. Immunophenotypic study reveals the expression of cellular markers of T lymphocytes and of NK lymphocytes, hence the name of this lymphoma: TNK lymphoma. Typical immune-phenotype of this lymphoma is CD2+, CD56+ (which is the NK-specific marker), expression of intracytoplasmic anti-CD3 antibody with surface CD3 negativity [16]. EBV can be found in tumor cells in the vast majority of cases as confirmed by several immuno-labeling and molecular biology studies [17]. According to Rodriguez, therapeutic innovations must be tried, including immunotherapy, which targets the expression of EBV; EBV infection is associated with 90-100% of TNK lymphoma cases inducing a poor prognosis [18]. As all sino-nasal tumors, CT scan remains the first complementary exploration [1]. It allows to precise tumor localization, extension to around structures and other informations such as bone lysis. It is also essential for evaluation of therapeutic response and followup thereafter [19]. MRI allows evaluating tumor relation with cerebro-meningeal structures. In fact, it can differentiate between inflammation, soft tissues edema and tumor infiltration. Tumor appears iso-intense compared to muscles in T1 sequence and moderately hyper-intense in T2 sequence.
After gadolinium injection enhancement is also moderate and heterogeneous [1]. Actually, positron emission tomography (PET)-scan is an important diagnostic tool in TNK lymphomas. A meta-analysis conducted in 2014 within 135 patients with TNK lymphoma found a sensibility and specificity of 95% of PETscan in the diagnosis and staging of TNK lymphomas [20]. Once diagnosis is made, an extension assessment is necessary before any treatment and includes a clinical examination of lymph nodes areas, hepatomegaly and splenomegaly research, chest X-ray, thoracoabdominal CT scan, osteo-medullary biopsy, digestive fibroscopy and possibly a lumbar puncture in case of intracranial extension [21]. The treatment is not yet subject of consensus and is based on the same principles of treatment of any aggressive lymphoma.
This treatment is based on radiotherapy and chemotherapy. Their use, separately or in combination, has been subject of several studies. Indications depend on the stage of the disease and therefore on the extension assessment. For localized stages (stages I and II), external radiotherapy with a minimum dose of about 52 Gy (conventional fractionation) is recommended. It gives a complete remission in 40 to 80% of cases and an overall survival at five years between 40 and 59% [22]. For advanced stages greater than IIE and stage B, radio-chemotherapy combination could significantly improve survival [23]. According to Mikhaeels and Spittle, intensive chemotherapy is necessary even in localized stages given the aggressiveness of this type of lymphoma [24]. This tumor has a poor prognosis: five-year overall survival varies between 10 and 45% [25].
Conclusion
T/NK lymphoma has benefited from advances in diagnostic and therapeutic means. Its prognosis, however, remains poor. The current challenge is to standardize therapeutic protocols in order to optimize survival rates and improve management. Future perspectives under discussion include immunotherapy and targeted molecular therapy directed at the EBV infection that is frequently observed in this disease.
Conflicts of Interests
No | 2019-06-14T13:46:36.725Z | 2019-05-15T00:00:00.000 | {
"year": 2019,
"sha1": "adab31ab6add8811a5af5029db5fc4842fcd79e1",
"oa_license": "CCBY",
"oa_url": "https://biomedgrid.com/pdf/AJBSR.MS.ID.000627.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e9b17b227abc72911cb8e0327451add0a484a1af",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252023612 | pes2o/s2orc | v3-fos-license | The Utilization of Lactation Rooms by Health Workers in Medan City
Exclusive breastfeeding for infants with working mothers still becomes a significant problem. The provision of on-site lactation rooms has not substantially impacted the utilization of lactation rooms. This study analyzes the factors that influence the utilization of breastfeeding rooms by health workers in all public health centers in Medan City. This study is a quantitative study using a cross-sectional design. The study involved 78 health workers who breastfed children under two years old. The determination of the sample used the total sampling technique. The results show knowledge (0.000), attitudes (0.002), practice (0.008), support from health workers (0.000), the availability of lactation rooms (0.000), and formula feeding (0.000) affect the utilization of breastfeeding rooms. The regression test results show that knowledge is the most influential factor in the utilization of the lactation room. Respondents with good knowledge have 9.477 times more opportunities to use the lactation room than respondents with poor knowledge. It can be concluded that the use of lactation rooms is influenced by factors such as knowledge, attitudes, practice, support from health workers, availability of lactation rooms, and formula feeding. It is recommended for local governments to provide adequate and comfortable facilities at each institution's offices to increase the utilization of breastfeeding rooms. The provision of a comfortable lactation
INTRODUCTION
Breastfeeding the baby exclusively until sixmonth old is recommended by WHO. Although a lot of literature references recommend the benefits that babies get if they are exclusively breastfed, the prevalence and duration of exclusive breastfeeding are lower than the recommendations for breastfeeding for the first six months in many countries. 1 Rate of Exclusive Breastfeeding (EBF) up to 6 months in most lowand middle-income countries is far below the standard. 2,3 In Indonesia, there is only 1 in 2 infants under six months are exclusively breastfed, and only slightly more than 5% of children are still breastfed at 23 months. This means that almost half of all Indonesian children do not receive the nutrition they need during the first two years. More than 40% of infants are introduced to complementary foods too early, i.e., before they reach the age of 6 months and the food which provided does not meet the nutritional needs of infants. 4 Indonesian Basic Health Research (Riset Kesehatan Dasar) results in 2018 show the proportion of exclusive breastfeeding patterns for infants aged 0-5 months is 37.3%. This percentage is still far from the target set by WHO which is 50%. The proportion of breastfeeding patterns for infants aged 0-5 months in 2018, in North Sumatra province reached 48%, those who gave exclusive breastfeeding was in the last 24 hours only consuming breast milk and not consuming food or beverage during the period. 5 Most women spend a lot of time working after the first year of birth, so they are distant from their children. 6,7 Friendly Lactation rooms in workplaces are indispensable for working mothers. This particular room allows female employees to work productively but does not forget the activity of pumping their breast milk to be given to their babies at home. Thus, breastfeeding is not interrupted even though the mother goes to work every day. The most common reasons why mothers stop breastfeeding are work demands, limited time and distance. 8 Several studies have shown that providing a lactation room contributes to improved performance, reduced absenteeism and commitment. 3,9 Many factors are related to the practice of breastfeeding can be viewed from various perspectives and classified according to individual and social roles. Studies have shown that high intensity of work, uncomfortable and hygienic lactation rooms in the workplace, and social conditions cause low breastfeeding practices for working mothers. 10,11 Knowledge and attitudes also contribute to improve breastfeeding practices, but they need support from their husbands and coworkers as well as adequate working conditions for mothers. 12 The formula feeding practice can also inhibit exclusive breastfeeding for working women. 6 The discomfort and embarrassment in expressing breast milk at work lead many women to give formula milk or stop breastfeeding altogether. 13 In this study, the researchers focused on the population group of women working in health care institutions such as health center and hospital because there are still few study results for this population. Previous studies in Indonesia have primarily focused on women working in the industrial, agricultural, and informal sectors. [14][15][16][17] In Indonesia, the provision of lactation room (rooms for breastfeeding babies, expressing breast milk storing expressed breast milk, and breastfeeding counseling) is regulated in Minister of Health Regulation No. 
15 of 2013 concerning Procedures to Provide Special Facilities for Breastfeeding and Expressing Mother's Milk. The provision of this lactation room protects mothers in providing exclusive breastfeeding and fulfilling children's rights to get exclusive breastfeeding.
Among the 41 primary healthcares in Medan City, 9 primary healthcares do not provide a lactation room for their employees. The long-lasting COVID-19 pandemic has caused the lactation room to change its function into a storage area for goods for preventing COVID-19. Even in some primary healthcares, there is no longer a lactation room. This condition makes employees feel disturbed and uncomfortable to practice breastfeeding. This study intends to examine matters related to the usage of lactation rooms in primary healthcare. The study results will contribute to a better understanding for employers and supervisors about barriers to working mothers to breastfeed so that they can implement a policy of providing a friendly and comfortable lactation room.
MATERIAL AND METHOD
This research is quantitative research using a cross-sectional design. It analyzes the factors that influence the utilization of lactation rooms by health workers in all primary healthcare services in Medan City. This research was conducted from November 2020 to January 2021. This study involved 78 health workers with children under two years old including those who breastfeed or express breast milk at the office. The determination of the sample used the total sampling technique.
Data were collected using a validated questionnaire. The researchers also used a checklist during data collection. The data collected comprised respondents' characteristics, knowledge, attitudes, practice, support from health workers, availability of a lactation room, use of formula milk, and utilization of the lactation room. In measuring the variables of knowledge, mother's practice, and support from health workers, subjects were given a questionnaire consisting of 5 statements for each variable. Respondents' answer scores were categorized into three categories: good (76-100), moderate (56-75), and poor (0-55). In the attitude questionnaire, respondents were given five statements. If the respondent's answer score is >50, it is categorized as positive, while a score of ≤50 is categorized as negative.
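A minimal sketch of these scoring rules is given below; the cut-offs are those stated in the text, while the conversion of raw questionnaire answers to a 0-100 score is assumed to have been done beforehand.

```python
# Helper functions mirroring the scoring rules described above.
def categorize_score(score: float) -> str:
    """Knowledge, practice and health-worker support: good / moderate / poor."""
    if 76 <= score <= 100:
        return "good"
    if 56 <= score <= 75:
        return "moderate"
    return "poor"  # 0-55

def categorize_attitude(score: float) -> str:
    """Attitude: positive if the score exceeds 50, otherwise negative."""
    return "positive" if score > 50 else "negative"

if __name__ == "__main__":
    print(categorize_score(80), categorize_score(60), categorize_attitude(45))
```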
The researchers would use the Chi-square statistical test to analyze the collected data, provided that there was no expected value less than 5. If the Chi-square test conditions are not met, an alternative test is used, namely the Fisher's Exact Test. The tested variables are said to have a significant relation if the p-value is less than 0.05. Furthermore, the researchers conducted a logistic regression test with predictive modeling. The interaction test was carried out on variables that were suspected to have interaction in substance. If the p-value is less than 0.05, there is an interaction between the independent variables and vice versa. If there is interaction, then the final modeling used is multivariate model with interaction. If there is no interaction, the final model used is a multivariate model without interaction.
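The analysis pipeline described above can be illustrated with the following sketch, which uses Python (scipy and statsmodels) rather than the software actually used by the authors; the contingency table and covariates are made-up placeholders, not the study data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
import statsmodels.api as sm

# Illustrative bivariable test and logistic regression on placeholder data.
table = np.array([[30, 10],   # good knowledge: used room / did not
                  [19, 19]])  # poor knowledge: used room / did not

chi2, p_chi2, dof, expected = chi2_contingency(table)
# Fall back to Fisher's exact test if any expected cell count is below 5
p_value = fisher_exact(table)[1] if (expected < 5).any() else p_chi2
print("bivariable p-value:", round(p_value, 4))

# Logistic regression: outcome = lactation-room use, predictor = knowledge
X = sm.add_constant(np.array([1] * 40 + [0] * 38))          # 1 = good knowledge
y = np.array([1] * 30 + [0] * 10 + [1] * 19 + [0] * 19)     # 1 = used the room
fit = sm.Logit(y, X).fit(disp=0)
print("odds ratio:", round(float(np.exp(fit.params[1])), 3))
```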
RESULTS
The calculation of the frequency distribution among 78 respondents showed that the majority have diploma education (70.51%), and aged between 20 and 45 years (66.67%). Most respondents have sufficient knowledge (44.87%) and have a positive attitude (70.51%). In the practice parameters, the majority are in a good category (39.74%), while most respondents stated that the support they received is good (44.87%). Respondents said that the availability of a lactation room is still inadequate (88.46%), and respondents also stated that they gave formula milk to their babies. Finally, 62.82% said that they used the lactation room. Table 2 shows that the results of the Chisquare test on knowledge with the use of a lactation room shows a significant effect (p=0.000). Likewise, the test results between attitudes and the use of the lactation room show a significant relation (0.002). The statistical test on the parameters of the mother's practice with the use of the lactation room obtains a p-value = 0.008, meaning that there is an effect of the mother's practice with the use of the lactation room. Likewise, the support obtained by the mother also correlates significantly with the utilization of the lactation room (0.000).
The results in Table 2, shows statistical tests on the variable availability of lactation room and utilization of lactation room show a positive correlation (0.003), meaning that there is an influence between the two variables. Respondents who stated the availability of a lactation room have 3.450 times more chance to use a lactation room. Formula feeding also affects the utilization of the lactation room (0.000).
A logistic regression test was performed as a follow-up to the Chi-square test. All variables were eligible to be analyzed using logistic regression. Among the six independent variables tested, there are still variables with a significant value above 0.25 which is the availability of lactation room and formula feeding. Only variables with a value of < 0.25 can be included in stage two of logistic regression test (knowledge, attitudes, practice of mothers, and support of health workers).
The results on the second stage of the logistic regression test show that respondents with good knowledge have 9,477 times more opportunities to use the lactation room than respondents with poor knowledge. Because B is positive, knowledge positively influences health of workers. utilization of the breastfeeding corner (B=2.273). Then, respondents who have a positive attitude are 5.934 times more likely to use the lactation room than those who have negative attitude. Because B is positive, the attitude positively influences the utilization of the lactation room (B = 1.781) ( Table 3). Respondents who have good practice have 6.324 times more opportunities to use the lactation room than respondents who have poor practice. Because B is positive, the mother's practice positively affects the utilization of the lactation room (B=2.273). Furthermore, respondents who received good support from health workers are 9.705 times more likely to use the breastfeeding corner than respondents who did not receive support from health workers. Because B has a positive value, the support of health workers positively influences the utilization of the lactation room (B=2.273). It can be concluded that knowledge is the most dominant variable influencing the utilization of the lactation room. This can be seen from the logistic regression equation which shows the regression coefficient value of 2.969.
DISCUSSION
This study provides holistic information about the aspects affecting breastfeeding health workers in utilizing the lactation rooms. The lactation room is beneficial for breastfeeding mothers who are working. The existence of a lactation room will help mothers to express breast milk so that even though they have to work, mothers can still provide breast milk to their babies. Expressed breast milk can be stored temporarily in the refrigerator or freezer. Lactation rooms in the workplace can provide comfort and privacy for mothers, reducing stress and impacting the quantity of breast milk expressed.
Referring to KAP theory in this study, three variables (Knowledge, Attitudes, and Practice) affect the utilization of lactation rooms. The better mother's KAP score, the more likely she uses the breastfeeding room at work. The literature shows that many women mistakenly think they cannot breastfeed if they plan to return to work after giving birth. They also do not know that breastfeeding can be done in the workplace. 18 Studies in Ghana and Jakarta reported that working mothers with good knowledge are more likely to breastfeed their babies exclusively. 19,20 The study by Jara-Palacios et al. showed that primigravida mothers are more at risk of exclusive breastfeeding for less than six months due to lack of experience and knowledge about the benefits of exclusive breastfeeding. 21 Information held by mothers about the benefits of exclusive breastfeeding and government's regulations on breastfeeding practices in the workplace encourage mothers to provide exclusive breastfeeding. 22 Furthermore, a study in Surakarta reported that the knowledge, experience, and motivation of working mothers affect the mother's perception on the availability of lactation corner facilities. 23 A strong attitude is owned by working mothers to continue to breastfeed well when working by utilizing the lactation room. 12 Studies in Kenya show that working mothers have a good attitude towards achieving exclusive breastfeeding through breastfeeding. 24 Practice is an activity carried out by mothers through the decision making for the successful implementation of exclusive breastfeeding. A person's practice of behaving is the main determinant of the individual's behavior. Mothers who want to give breast milk will tend to use the lactation room. 25,26 In line with the previous studies, support from health workers also affects the utilization of lactation rooms. Professional health workers can be a supporting factor for mothers in breastfeeding. Female health workers are part of working mothers who usually marry and have children naturally. Thus, breastfeeding is an integral part of the process. However, the fact is poor because health workers themselves have a lower percentage of success in breastfeeding their babies. 27 The support of health workers is necessary for the physical and psychological aspects of the mother during breastfeeding. The better support of health workers for breastfeeding mothers, the better mothers will give breast milk to their children. 28 The support of health workers is expected to be able in realizing the process of mutual development of love and affection between mothers and their children. Therefore, mothers can exclusively breastfeed with the support of other health workers by using the lactation room. The lack of encouragement from health workers makes people not get information or encouragement about the benefits of breastfeeding. 29 The wrong explanation comes from the health workers who recommend replacing breast milk with formula milk. 30 Health workers who assist mothers in childbirth and provide postnatal guidance to mothers affect maternal compliance in exclusive breastfeeding. 31 The results of studies obtained from the field indicate that there is an influence of the availability of lactation rooms and the use of lactation rooms. The literature shows that the unavailability of lactation facilities in the workplace is associated with a decrease in breastfeeding initiation of mothers who return to work. 
32 The lactation room is one of the government's programs to increase exclusive breastfeeding and support high maternal mobility. Article 30 of the Indonesian Government Regulation Number 33 of 2012 states that workplace administrators and organizers of public facilities must provide special facilities for breastfeeding and expressing breast milk following the company's capacity. Public facilities are required to provide lactation booths in health service facilities, hotels and inns, recreation areas, land transportation terminals, train stations, airports, seaports, shopping centers, sports buildings, refugee shelter locations, and other public facilities.
An ideal lactation room is equipped with breastfeeding and expressing milk facilities used for breastfeeding babies, expressing breast milk, storing expressed breast milk, and breastfeeding counseling. Every workplace and public places should provide facilities and infrastructure for lactation room following minimum standards and as needed. The purpose of providing a lactation room is to protect mothers in exclusive breastfeeding, fulfill children's rights to exclusive breastfeeding, and increase the role and support of families, communities, and government for exclusive breastfeeding.
Mothers who work outside the home need support from their workplace. It is stated in the Regulation of the Minister of Health of the Republic of Indonesia Number 15 of 2013 concerning the Provision of Special Facilities for Breastfeeding and Expressing Breast Milk: the workplace provides opportunities for mothers to work indoor and outdoor to breastfeed and express breast milk at work. 33 The provision of opportunities for mothers who work indoor and outdoor, as referred to in the Permenkes above provides a lactation room according to standards that meet health requirements. Still, under the same regulation of the Minister of Health, the lactation room in every office must have a person in charge who can act as a breastfeeding counselor. The person in charge is appointed by the workplace administrator. In the case of a lactation room that does not have a breastfeeding counselor, the workplace administrator can work closely with health service facilities or coordinate with the provincial, district, or city health offices to provide breastfeeding counseling training. The type and number of health and non-health workers as trained personnel in breastfeeding are adjusted to the needs and types of services provided in the lactation room.
In this study, there is an effect between formula milk and the utilization of the lactation room. Formula feeding can inhibit exclusive breastfeeding. Giving formula milk to newborns shows a lack of knowledge of mothers about exclusive breastfeeding and the dangers of giving formula milk to babies. Sadly, formula feeding is given when the baby is born. The main reason is that the milk has not come out, and the baby still has trouble at suckling. They are worried that the baby will cry if left alone. Working mothers have reasons to give formula milk instead of breast milk. 34,35 The place of giving birth influences exclusive breastfeeding for babies because it is the starting point for mothers to choose whether to continue to exclusively breastfeed their babies or formula feeding given by health or non-health workers. Even though there is an international code of ethics regarding breast milk substitutes (formula milk), the marketing of formula milk is getting more intense. It seriously disrupts the success of exclusive breastfeeding programs. Perpetrators of violations of the international code of ethics are now shifting from baby food companies to health workers and health care facilities. Hospitals or maternity hospitals distribute formula milk products as gifts for mothers after giving birth. In addition, some health workers subtly encourage mothers not to provide breast milk but formula milk to their babies. [36][37][38] Formula feeding as prelacteal is adjusted in private practice of midwives and maternity homes. The main reason is that milk has not come out, and the baby still has difficulty at suckling, so the baby will cry if left alone. Usually, the midwife will advise on formula feeding first. In fact, formula milk is made by midwives or nurses themselves. They even provide a bottle sterilizer. This will negatively influence the mother's beliefs. The mother will think that formula milk is the most effective medicine to stop the baby's crying. The mother's lack of confidence in producing a lot of breast milk encourages mothers to give bottle-feeding. Children who do not use bottle with pacifier are more likely to be exclusively breastfed. 39,40
CONCLUSION AND RECOMMENDATION
This study concludes that knowledge (0.000), attitude (0.002), practice (0.008), support from health workers (0.000), availability of lactation room (0.000), and formula feeding (0.000) affect the utilization of lactation room. The regression test results show that knowledge is the most influential factor in the utilization of the lactation room. Respondents with good knowledge have 9.477 times more opportunities to use the lactation room than respondents with poor knowledge. To increase the utilization of lactation rooms, local governments should provide adequate and comfortable facilities at each institution's offices. The provision of a comfortable lactation room has implications for the mother's willingness to use the lactation room. It becomes difficult to realize without the support of colleagues. | 2022-09-03T15:18:05.893Z | 2022-03-31T00:00:00.000 | {
"year": 2022,
"sha1": "29dfcb630e3a9655c682ba1a01691dc324673db1",
"oa_license": "CCBYNCSA",
"oa_url": "https://journal.unhas.ac.id/index.php/mkmi/article/download/18770/8594",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "846681e8c1c338978e56c6e20b9d4af9fc7e6b9c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
73433005 | pes2o/s2orc | v3-fos-license | Clinical Outcomes of Off-Label Dosing of Direct Oral Anticoagulant Therapy Among Japanese Patients With Atrial Fibrillation Identified From the SAKURA AF Registry
Atrial fibrillation (AF) is the most common arrhythmia in elderly individuals, currently affecting approximately 0.6% of the Japanese population, and the prevalence of AF is expected to continue to rise in Japan, affecting an estimated 10 million people by 2030.1 AF is a strong risk factor for stroke and death. Randomized clinical trials (RCTs) have shown the benefit of direct oral anticoagulant (DOAC) therapy in reducing the risk of stroke and bleeding complications among patients with AF, compared with use of vitamin K antagonists.2–5 Review of real-world registries in Japan has indicated the incidence rate of stroke with DOAC therapy to be similar to that with warfarin but with a lower risk for major bleeding events for DOACs than for warfarin.6
Major bleeding was defined as a reduction in the Hb level of at least 2 g/dL, transfusion of at least 2 units of blood, or symptomatic bleeding in a critical area or organ, and was specified as the safety endpoint.2,11 The net clinical events (composite of stroke or SE, major bleeding, or death from any cause) was also taken as a study endpoint.
Definitions
For the administration of DOACs, "appropriate standard-dose" and "appropriate low-dose" were defined as administration according to a standard or low-dose regimen, respectively. The following low-dose regimens were considered to be appropriate: dabigatran, 110 mg (b.i.d.), for patients with a CrCl of 30-50 mL/min, age ≥70 years and a prior history of bleeding;2,11 rivaroxaban, 10 mg (o.d.), for patients with a CrCl of 15-50 mL/min;3,12 apixaban, 2.5 mg (b.i.d.), for patients with any 2 of the following characteristics: age ≥80 years, body weight <60 kg and serum Cr level ≥1.5 mg/dL;4,13 and edoxaban, 30 mg (o.d.), for patients with a CrCl of 15-50 mL/min or a body weight <60 kg.5 Under-dosing (off-label low-dose) therapy was defined as administration of a low dose of DOAC despite the standard dosage criteria being met. Over-dosing (off-label standard-dose) therapy was defined as administration of a standard dose of DOAC despite the low-dose regimen criteria being met. Dabigatran was considered to be contraindicated for patients with a CrCl <30 mL/min,2,11 with the other DOACs contraindicated for patients with a CrCl <15 mL/min.3-5,11-13 Of note, although under-dosing of dabigatran (to 110 mg, b.i.d.) was selected based on the rule previously described (namely, CrCl 30-50 mL/min, age ≥70 years, prior bleeding), rather than based on a standard dose, almost all physicians followed the defined rule for dabigatran under-dosing.
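To illustrate the classification logic, the sketch below encodes the dose-appropriateness rules for two of the four DOACs as stated above; it is a simplified illustration (contraindication checks and the dabigatran and edoxaban rules are omitted) and is not intended to guide dosing.

```python
# Simplified sketch of the dose-appropriateness classification described above,
# restricted to rivaroxaban and apixaban (Japanese labelling as given in the text).
def low_dose_criteria_met(drug, crcl, age, weight, creatinine):
    if drug == "rivaroxaban":              # 10 mg o.d. if CrCl 15-50 mL/min
        return 15 <= crcl <= 50
    if drug == "apixaban":                 # 2.5 mg b.i.d. if >=2 of 3 criteria
        criteria = [age >= 80, weight < 60, creatinine >= 1.5]
        return sum(criteria) >= 2
    raise ValueError("only rivaroxaban/apixaban handled in this sketch")

def classify(drug, prescribed_low_dose, **patient):
    low_indicated = low_dose_criteria_met(drug, **patient)
    if prescribed_low_dose:
        return "appropriate low-dose" if low_indicated else "under-dose (off-label)"
    return "over-dose (off-label)" if low_indicated else "appropriate standard-dose"

if __name__ == "__main__":
    print(classify("apixaban", prescribed_low_dose=True,
                   crcl=45, age=82, weight=55, creatinine=1.2))
```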
Statistical Analysis
Continuous variables are reported as the mean ± standard deviation (SD), with categorical variables reported as the percentage and number of patients. Differences in continuous variables between the 4 DOAC groups (over-dose vs. appropriate standard-dose vs. appropriate low-dose vs. under-dose) were evaluated using a one-way analysis of variance (ANOVA), with between-group differences in categorical variables evaluated using a chi-squared test. Kaplan-Meier curves of the cumulative incidence of stroke or SE, major bleeding, all-cause death, and composite net clinical events were constructed and compared between the 4 DOAC dose groups using a log-rank test. Considering the potential effects of antiplatelet use on clinical outcomes in the off-label under-dose group, we compared clinical outcomes between the off-label under-dose vs. standard-dose groups using the Fisher exact test. The results of Cox proportional hazards modeling for clinical outcomes in the under-dose, appropriate low-dose and over-dose groups vs. the appropriate standard-dose groups are expressed as hazard ratios (HRs) and 95% confidence intervals (CIs). The models were adjusted by propensity score, calculated using sex, age, body weight, persistent AF, new use of DOAC, hypertension, diabetes mellitus, heart failure, history of stroke/TIA, vascular disease, history of AF ablation, CrCl, and antiplatelet drug use. All statistical analyses were performed using SPSS Statistics 24 (IBM Corp., Armonk, NY, USA), with a P-value <0.05 considered significant.
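As an illustration of the propensity-score-adjusted Cox modeling described above, the following sketch uses Python (scikit-learn and lifelines) on synthetic data; the covariates, sample size and variable names are placeholders, not the SAKURA AF data or the authors' SPSS workflow.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

# Synthetic placeholder data: exposure (under-dose), two covariates, and a
# right-censored time-to-event outcome.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "under_dose": rng.integers(0, 2, n),
    "age": rng.normal(72, 9, n),
    "crcl": rng.normal(55, 20, n),
    "time_months": rng.exponential(30, n),
    "event": rng.integers(0, 2, n),
})

# Step 1: propensity score = P(under_dose | covariates)
ps_model = LogisticRegression(max_iter=1000).fit(df[["age", "crcl"]], df["under_dose"])
df["propensity"] = ps_model.predict_proba(df[["age", "crcl"]])[:, 1]

# Step 2: Cox proportional hazards model adjusted by the propensity score
cph = CoxPHFitter()
cph.fit(df[["time_months", "event", "under_dose", "propensity"]],
        duration_col="time_months", event_col="event")
print(cph.hazard_ratios_["under_dose"])  # HR for under-dosing vs. reference
```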
As such, in clinical settings in Japan, physicians do inappropriately reduce the dose of any DOAC, despite knowledge of the criteria for a standard-dose regimen, in an attempt to lower the risk of bleeding. Off-label reduced doses are most commonly prescribed for patients who are older, as well as those with low body weight and low renal function.6,7 These individuals are also at risk for inappropriate over-dosing of DOAC if the standard dose is not appropriately reduced to account for the effects of older age, low body weight and low renal function.

Both off-label low and standard doses (namely under- and over-dosing, respectively) are clinically problematic, as both could, theoretically, increase the risk for stroke and major bleeding events. As such, clinical data regarding the outcomes of under- and over-dosing among DOAC users in Japan are needed. To address this issue, we established a multicenter registry, the SAKURA AF Registry (UMIN Clinical Trials Registry: UMIN000014420) (Supplementary Appendix), to support prospective observational research into the current status of anticoagulation therapy in Japan. In a previous study, we reported on our identification of over- and under-dosing among DOAC users in the registry.6 Our aim in this study was to explore the clinical implications of off-label dosing of DOAC therapy among Japanese patients with AF.
Study Population
The study design, data collection processes and baseline characteristics of the study population have been reported. 8, 9 Patient eligibility for enrollment into the registry was as follows: a diagnosis of non-valvular AF, based on 12-lead ECG, 24-hour Holter ECG recording or event-activated ECG recording; age ≥20 years; and treatment (either just initiated or already in place) using any anticoagulant drug for stroke prophylaxis. Recruitment began in September 2013 and ended in December 2015. The registry includes 1,578 patients who were treated with warfarin at the time of enrollment, and 1,690 treated with any of the 4 available DOACs. A total of 3,268 patients were enrolled into the registry by 63 institutions (2 cardiovascular centers, 20 affiliated hospitals or community hospitals and 41 private clinics) in the Tokyo area. Analysis of the Registry data were approved by our institutional review board (IRB) and the individual hospital IRBs. All enrollees provided written informed consent for participation in the Registry.
Data Collection and Outcomes
A web-based registration system was created for the SAKURA AF Registry and is currently being used to collect relevant patient clinical data (comorbidities, medication use and laboratory test results) and to obtain follow-up information, including: PT-INR of warfarin users; creatinine clearance (CrCl) rate; and hemoglobin (Hb) concentration. These follow-up data are collected through a central registry office, twice a year (in March and September), for up to 4 years after enrollment. The time in therapeutic range was calculated using the Rosendaal method.10 The primary endpoint of our study was an event of stroke (ischemic stroke, hemorrhagic stroke, or transient ischemic attack [TIA]) or systemic embolism (SE). Cardiovascular death or death from any cause was also recorded.
Compared with the mean age of the standard-dose group (66.9±9.0 years), patients were older in the appropriate low-dose (79.2±6.0 years), under-dose (71.2±8.2 years) and over-dose (76.0±5.0 years) groups (P<0.001). In addition, the proportion of women was higher in the appropriate low-dose (38.8%), under-dose (29.0%) and over-dose (30.3%) groups than in the standard-dose (21.6%) group (P<0.001). With regard to clinical features, compared with the standard-dose group, body weight (P<0.001) and CrCl (P<0.001) were lower in the appropriate low-dose, under-dose and over-dose groups, and the CHADS2 and CHA2DS2-VASc scores were higher (P<0.001). The consumption of alcohol was lower in each of these 3 dosing groups, compared with the standard-dose group (P<0.001). There were no differences in other clinical features, including antiplatelet use, among the 4 groups.
Baseline Characteristics
Of the 3,268 patients enrolled in the SAKURA AF Registry, 31 were lost to follow-up. Follow-up data at the 1- and 2-year endpoints after enrollment into the Registry were available for 3,157 (97.5%) and 2,952 (91.2%), respectively, of the 3,237 remaining enrollees. Among these 3,237 patients, 48.8% (n=1,561) were warfarin users and 51.2% (n=1,676) DOAC users. Of these 3,237 patients, the use of DOACs was contraindicated in 6 patients and the dose was not reported in another 12 patients. The data from these patients were excluded, with our analysis being ultimately based on the data of 1,658 patients. DOACs were used at the appropriate standard-dose in 746 (45.0%) of the patients and at an appropriate low-dose in 477 (28.7%), with 66 (4.0%) being treated with an over-dose and 369 (22.2%) with an under-dose. Baseline characteristics of the study population are summarized in Table 1.
Discussion
The major findings of our study were as follows. First, over-dosing DOAC therapy was identified in 4.0% of our study group, with under-dosing of 22.2%. Patients in the off-label over-dose, under-dose and appropriate lowerdose groups tended to be older and at higher risk for a among the 4 groups.
Clinical Outcomes
At a median follow-up of 39.3 months (range, 28.5-43.6 months), after enrollment, a thromboembolic event was identified in 69 patients (4.2%) and a major bleeding event in 57 (3.4%); 89 (5.4%) patients died ( Table 2). The Kaplan-Meier curves for stroke or SE, major bleeding, all-cause death, and for the net clinical events (defined as the composite of stroke or SE, major bleeding or death from any cause) are shown in Figures 1 and 2 for the 4 different DOAC dose groups. The incidence of a stroke/SE event was higher in the over-dose and appropriate low-dose groups, compared with either the appropriate standard-dose or under-dose groups (P=0.010 log-rank test). The incidence of a major bleeding event was higher in the over-dose group than either the appropriate standard-or low-dose groups, but lower in the under-dose group (P=0.042, log-rank test). The incidence of all-cause death and composite net clinical events was higher in the over-dose and appropriate low-dose groups than in the appropriate standard-and under-dose groups (P<0.001, log-rank test for both). In the off-label under-dose group, we did not identify differences between patients using or not using concomitant antiplatelet drugs, with regard to the incidence of stroke in Asians, compared with non-Asians. 16-21 As we previously reported, patients in the under-dose group were older than those in the appropriate standard-dose group, with no history of alcohol abuse and only moderate renal impairment. Age >75 years and impaired renal function are known risk factors for stroke and bleeding. 22,23 It is, therefore, not surprising that physicians would favor under-dosing for older Asian patients with some renal dysfunction in order to lower the risk of a major bleeding event. The RELY and Engage AF trials reported on the effectiveness of low-dose dabigatran and edoxaban therapy. 2,5 By contrast, the ROCKET AF and ARISTOTLE trials were under-powered, with too few patients in the low-dose DOAC group to establish superiority, equivalence or non-inferiority of under-dosing, compared with appropriate standard dose. 3,4 Based on current evidence, underdosing of dabigatran and edoxaban is a possible option for DOAC therapy to lower the risk of bleeding in Japanese patients with AF. With regard to over-dosing of DOACs, patients in this group were older, more likely to be female, were prescribed dabigatran more often, had a higher CHA2DS2-VASc score, and lower renal function than patients in the appropriate standard-dose group. These findings are consistent with those of a previous study, using real-world data in the USA, reporting that patients who were receiving overdosing of DOACs were older, more likely to be women and had a higher CHA2DS2-VASc score. 14 It is possible that the over-dosing in these patients is not "intended", but rather occurs over time with a decrease in body weight and/or serum creatinine concentration, which might be overlooked in clinical practice. stroke than those in the appropriate standard-dose group. After multivariate adjustment, the risk for stroke/SE, death and the composite of net clinical events was equivalent between the appropriate standard-dose and the under-dose groups. However, the risk for a major bleeding event tended to be lower in the under-dose than in the appropriate standard-dose group. The composite of net clinical events in the over-dose group was higher than in the standarddose group. 
The incidence rate of stroke/SE, major bleeding event and composite event was equivalent between the appropriate low-and standard-dose groups, although the mortality rate was higher in the appropriate low-than standard-dose group.
Prevalence and Baseline Patients' Characteristics of Off-Label DOAC Therapy
Of the overall rate of off-label under-dosing of 22.2%, the distribution among the 4 DOACs used was as follows: dabigatran, 20.0%; rivaroxaban, 24.7%; apixaban, 19.7%; and edoxaban, 27.6%. The distribution of cases of overdosing was as follows: dabigatran, 7.7%; rivaroxaban, 3.4%; apixaban, 0.7%; and edoxaban, 6.7%. Previous real-world data from the ORBIT-AF2 registry revealed an incidence rate of under-dosing of DOAC therapy of only 9.4% (compared with our rate of 22.2%), with a rate of 3.4% of over-dosing. 14 In a real-world registry in Europe, the rate of inappropriate under-dose DOAC therapy was 18%. 15 In contrast, in Japan the Fushimi AF registry reported a rate of under-dosing of 29% for dabigatran, 21% for rivaroxaban and 26% for apixaban. 7 It is possible that the higher rates of under-dosing of DOACs in Japan, compared with other real life registries, reflect the higher risk for major bleeding with warfarin anticoagulant therapy and bleeding and the longevity of Japanese individuals compared with Europeans and other Westerners must be considered. However, the lower incidence rate of the composite of net clinical events in the under-dose DOAC group may be partially mediated by antiplatelet therapy, with these drugs lowering the risk of stroke and the occurrence of any hemorrhagic events. In fact, in an RCT and an observational study, anticoagulant therapy was shown to lower the risk of stroke 4 or increase the risk of bleeding. 7,11, 19 In our study sample, the use of antiplatelet therapy was equivalent between the under-dose and appropriate standard-dose groups, with no association observed between antiplatelet therapy and clinical adverse events in the under-dose group. Therefore, under-dosing of DOACs may not increase the risk of stroke but does reduce the risk of bleeding among Japanese patients with AF. As such, under-dosing provides a safe treatment for Japanese patients who are at greater risk for bleeding associated with anticoagulant therapy. It is important to note, however, that prescription of a standard-dose to patients in the under-dose group could lead to an increased incidence of a bleeding event, but decreased risk of a stroke. Evaluation of this possibility is difficult because patients in the standarddose group in the SAKURA AF Registry were younger than those in the under-dose group. As such, the risk for stroke/SE would naturally be lower in the standard-than in the under-dose group.
Off-label over-dosing of DOACs was independently associated with an increased incidence rate of net clinical events, with the appropriate low-dose being independently associated with an increase in all-cause death events compared with appropriate standard-dose users. Compared with the standard-dose group, the off-label over-dose and appropriate low-dose groups had a higher proportion of women, with patients being older, having a lower body weight and higher risk score for stroke and bleeding. These patient characteristics are well known to be strong risks for adverse clinical events, and more specifically, death. 2-7, 9 The effects of these factors may not have been completely controlled for in our analysis, even after multivariate adjustment. Therefore, the association between off-label DOAC over-dose or appropriate low-dose and a higher incidence rate of the composite net clinical events and death might reflect this inherent patient bias, rather than the specific dose used. Nonetheless, as over-dosing of DOACs is related to poor outcomes, including an increased incidence rate of stroke/SE, major bleeding event and all-cause death, 14 as per our findings, patients on an overdose of DOAC should be carefully and intensively followed up, and physicians should consider all risk factors prior to intentionally prescribing over-dosing of a DOAC.
Clinical Outcomes Among the 4 DOAC Dose Groups
Patients in the off-label under-dose DOAC group were significantly older and had a higher risk of stroke than patients in the appropriate standard-dose group. Based on patients' characteristics, we hypothesized that the stroke/SE rates would be higher in the under-dose than in the appropriate standard-dose group. However, the incidence rate of stroke/SE was comparable between these groups, with lower dosing of DOACs reducing the incidence rate of major bleeding compared with the appropriate standard-dose group. Prior to our study, there were only a few reports on the outcomes of off-label under-dose DOAC users in Japan. A single-center, retrospective study reported inappropriate low-dosing, especially when unintended (related to increased body weight or decreased serum creatinine concentration), to be a risk factor for recurrent ischemic stroke. 24 In agreement with our results, another study did not find under-dosing of DOACs to be associated with more severe stroke or poorer clinical outcomes compared with the recommended dose. 25 These 2 Japanese studies, however, had small sample groups. Therefore, from a statistical standpoint, we deem that our registry data provide a more accurate representation of patients with AF in clinical settings in Japan.
Our findings, however, do differ from those reported in 2 previously reported large-scale RCTs of DOAC therapy, namely, the RE-LY and ENGAGE AF TIMI-48 studies, 2,5,11 and a previously reported observational study. 14 Both of the RCTs reported a higher stroke/SE rate but a lower rate of major bleeding events in the low-dose compared with standard-dose DOAC groups, when patient characteristics were controlled for. 2,5,11,21 Analysis of the data in the ORBIT-AF2 registry in the USA did not reveal a statistical association between under-dosing of DOAC therapy and an increase in stroke/SE events or a decrease in major bleeding events, although the mortality rate tended to be higher among patients treated with an under-dose compared with a standard-dose. 14 Several possible explanations could account for the observed differences between our results and those of the previous studies. Foremost, we must consider differences in the patients' background characteristics. Specifically, the previous studies drew their study samples from Western countries, with the average body weight of 80 kg being greater than the average body weight of our study sample. Body weight is an important factor to consider, as the risk of stroke/SE and bleeding during DOAC therapy is related to the dose of DOAC relative to a patient's body weight. The dose regimen of DOACs in Japan is the same as that used in the RCTs, and generally reported in other countries, with the exception of rivaroxaban, for which the standard dose is 15 mg and the reduced dose 10 mg, based on the known pharmacokinetics of rivaroxaban in Japanese adults. 26 Therefore, the reduced dose might be sufficient for the prevention of stroke/SE among Japanese patients, who have a relatively lower body weight compared with Europeans and other Westerners. The nationwide coverage of health benefits by public health insurance in Japan may also be an important factor to consider because this universal coverage would increase the availability (and use) of antihypertensive and/or lipid-lowering drugs. 27 In fact, the composite of net clinical events has been reported to be lower among Japanese patients with AF 6,7,9 than among patients enrolled in the 2 previously published RCTs 2,5,11 and the observational study in the USA. 14 Certainly, the lower burden of vascular disease among Japanese patients may also have contributed to these lower event rates.
Study Limitations
The present study has several limitations that should be acknowledged. First is the unavoidable possibility of a selection bias because this was a prospective observational study, despite our use of Cox proportional hazards models to minimize the influence of patients' background factors. As previously described, there is a likelihood of multiple residual confounders, which were not fully adjusted for in the multivariate models and may have biased our results. Second, the registry included patients only from selected institutions located in the capital city of Tokyo or its suburbs. Therefore, the data, and our findings, may not be reflective of all areas of Japan. We note, however, that patient selection and regional enrollment biases are limitations of all prospective observational studies. Third, patients using p-glycoprotein inhibitors would have been considered as under-dosing candidates. However, this information is not specifically available in the SAKURA AF Registry and therefore could not be accounted for in our results. Fourth, the dose of rivaroxaban used in Japan differs from that used in other countries, with 15 mg being the standard dose in Japan, 20 mg being used as a high dose and 10 mg as a low dose. Therefore, clinical outcomes between the 4 different dose groups for rivaroxaban are specific to Japan. Nonetheless, our data on the effectiveness and safety of the Japan-specific dose may provide clinical insight into understanding AF treatment in some Western and Asian patients who have a body type similar to that of Japanese people. Fifth, we defined 'off-label low-dosing' of DOACs using the criteria for a low-dose regimen. As such, off-label low-dosing for dabigatran and edoxaban will be different from the off-label low-dosing for rivaroxaban and apixaban, with the effectiveness and safety of low doses of dabigatran and edoxaban having been well established, even in patients on low-dose regimens, in clinical trials. 2,5 Finally, our evaluation of the effects of the 4 different doses of DOACs should be considered with caution because of the relatively low overall rate of clinical events. Therefore, future studies are needed to evaluate dose-associated effects of the 4 DOACs on the rate of clinical adverse events.
Conclusions
Although patients administered off-label under-dose DOAC therapy were older and at a higher risk of stroke than patients in the appropriate standard-dose group, the rate of stroke was equivalent between the 2 groups, while the rate of major bleeding events tended to be lower in the under- than in the standard-dose group. Further studies are needed to clarify the safety and effectiveness of off-label under-dose DOAC therapy in Japanese patients. Over-dose DOAC users were at a significantly higher risk for all clinical events, with appropriate low-dose users being at a significantly higher risk of death, than standard-dose users, requiring careful follow-up of these patients. | 2019-03-08T14:09:49.311Z | 2019-03-25T00:00:00.000 | {
"year": 2019,
"sha1": "08381df42e3ad89cb1d778367bbe6f5a70e2b818",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/circj/83/4/83_CJ-18-0991/_pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "06df474363f6368a563ee4b3db65dea4dafff698",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54871256 | pes2o/s2orc | v3-fos-license | Measurement-Based Analysis of Transmit Antenna Selection for In-Cabin Distributed MIMO System
Aircraft cabins seem to be the last isolated environment where wireless access is still not available. In this paper, we consider a distributed multiple-input multiple-output (D-MIMO) system based on measurements in an aircraft cabin. The channel response matrices of the in-cabin D-MIMO system were collected using a wideband channel sounder. Channel capacities with optimum transmit antenna selection (TAS) were calculated from the measured data at different receiver positions. The optimum capacity results were then compared with those without selection at different transmit SNRs. It is shown that TAS can yield an obvious capacity gain, especially in the front and back of the cabin. The capacity gain shows a decreasing trend as the transmit SNR increases. The optimal transmit antenna subset is closely related to the transmit SNR: as the SNR increases, more transmit antennas are chosen for higher performance. The subset of transmit antennas near the receiver is a reasonable choice in practical applications of D-MIMO systems.
Introduction
With the rapid development of wireless communications, subscribers require more convenient access service at any time. Aircraft seems to be the last remaining frontier where wireless access is still not available [1,2]. Passengers want to use mobile phones and laptops to meet their business and entertainment requirements in flight. To realize in-cabin wireless access, one basic task is to investigate the propagation model in the aircraft. Some measurements have been carried out in cabin scenarios to analyze the coverage and capacity of wireless systems [2-6]. MIMO (multiple-input multiple-output) technology, which can improve capacity gain and spectral efficiency, has been widely studied. However, only a little research work has been conducted on the in-cabin application of MIMO techniques. In [5,6], the capacity gain has been demonstrated for in-cabin centralized MIMO systems. In this paper, we focus on distributed MIMO (D-MIMO) channel characteristics. Compared with a traditional centralized MIMO system, a D-MIMO system can provide higher energy efficiency and fairer coverage [7,8]. Especially in the cabin scenario, a D-MIMO system could offer passengers equal wireless service with low power consumption. To the best of our knowledge, there is still no measurement-based research work on the in-cabin D-MIMO channel.
In practice, an access point with more antennas has to employ more radio frequency (RF) units. The cost of RF equipment and the complexity of signal processing then increase as well, which limits MIMO system applications. To solve this problem, antenna selection technology has been proposed. In MIMO systems, antenna selection means choosing an optimum subset of antennas for communication. It can be classified into transmit (Tx) and receive (Rx) antenna selection. The latter can reduce the number of receive antennas and the signal processing complexity. Meanwhile, some receive energy will be lost, and the channel rank will not be improved. Thus, receive antenna selection cannot increase the capacity of the total MIMO system [9].
In this paper, we emphasize the impact of transmit antenna selection (TAS) on D-MIMO system capacity in the cabin scenario. For the D-MIMO channel, because the transmit antennas are placed at a distance from one another, different links experience diverse shadowing. TAS can therefore be useful by exploiting the macrodiversity gain. In order to validate the effect of TAS based on measured results, measurement campaigns were carried out in a cabin to collect MIMO channel impulse response (CIR) matrices. Based on the collected data, the capacities under different antenna selection schemes with fixed total transmit power are analyzed and compared. The results show that, besides reducing complexity and cost, TAS yields a capacity gain in the cabin scenario. The relationship between the signal-to-noise ratio (SNR) and the antenna selection scheme is also discussed. We then present a simple near-optimum selection method for practical application.
The rest of this paper is organized as follows. In Section 2, the channel sounder and measurement setup are introduced. The D-MIMO channel capacity with TAS is characterized in Section 3. In Section 4, the measurement results are shown to evaluate the TAS performance. Finally, our conclusions are presented in Section 5.
Experimental Setup
2.1. THU Channel Sounder. The Tsinghua University (THU) MIMO channel sounder [10] was used to collect raw measured data. It worked at a center frequency of 3.52 GHz with 40 MHz bandwidth, supporting both centralized and distributed MIMO channel measurements. During the in-cabin measurements, the transmitter employed a signal generator to periodically output a linear frequency modulated (LFM) sequence. A microwave switch was used to connect the signal generator with seven transmit antenna ports. Tx antennas were connected to the switch through cables and were distributed in the cabin.
At the receiver side, seven antennas constituted a uniform linear array (ULA) with half-wavelength inter-element spacing. The received signal was input to one RF tunnel via another 7-way switch. A 7-input 7-output system was thus realized by adopting this fast time-division-multiplexed switching (TDMS) scheme. The microwave switches were controlled by a synchronization unit to scan all possible antenna combinations.
The major configurations of the THU channel sounder are shown in Table 1. The test signal length t_p was 12.8 μs. A guard interval of t_p was also inserted between adjacent transmissions to protect the test signal from delay spread contamination. One total snapshot interval was therefore 7 × 7 × 2t_p = 1254.4 μs. The real-time measured data were stored on a server. To obtain the channel parameters of interest, the data processing was done off-line.
2.2. Measurement Environment. Our measurement campaigns were carried out in an MD-82 aircraft. The MD-82 [11] is a short-haul aircraft with 149 seats arranged in 33 rows. The first three rows are business class and the other rows are economy class. The dimensions of the cabin are 30.5 m in length, 3.34 m in width, and 2.05 m in height. The aisle width is 0.5 m, and the distance between rows is 0.7 m. The seat height is 1.16 m above the floor. As illustrated in Figure 1, seven transmit antennas (Tx1-Tx7) were fixed at different positions. The Tx antenna height was 1.68 m above the floor, and the distance between adjacent Tx antennas was 2.9 m.
The cross-section of the measurement environment is shown in Figure 2. The receiver with a centralized antenna array was placed on a dining car. The receiver height was 1.37 m. There are seven Rx antennas (Rx1-Rx7) at the receiver, but in the following analysis only three of them were used. The other elements were used as dummy elements, and their responses were discarded.
During the measurements, the receive array's position was changed along the aisle from the 4th row to the 27th row of economy class, as shown by the pentagrams in Figure 1. At each position, the receiver was moved back and forth over an 8λ interval in order to collect channel data with independent small-scale fading.
D-MIMO Capacity with Transmit Antenna Selection
3.1. Distributed MIMO Channel Model. In a D-MIMO system, each base station has N distributed antenna ports, each port with V microdiversity antennas. The mobile station's antenna number is M, so this D-MIMO system can be denoted by (N, V, M) [12]. This (N, V, M) D-MIMO channel can be described as a concatenation of the per-port subchannels,

H = [H_1, H_2, ..., H_N],  (1)

in which each subchannel H_i carries the large-scale fading determined by d_i (i = 1, ..., N), the distance between the ith antenna port and the mobile station. If N = 1, the system reduces to a traditional centralized system, and if V = 1, to a star-shaped D-MIMO system. As introduced in Section 2, our channel sounder is a (7, 1, 7) star-shaped D-MIMO system.
3.2. D-MIMO Capacity with Transmit Antenna Selection.
Consider a channel that is unknown at the transmitter. The capacity of a D-MIMO channel with equally allocated transmit power can be calculated as [9]

C = log2 det(I_M + (ρ_t/N) HH^H).  (2)

Here, H is the M × N CIR matrix, whose elements include the pathloss, shadow fading, and small-scale fading effects; it is therefore not a normalized matrix. N and M are the transmit and receive antenna numbers, respectively. I_M is an M × M identity matrix. P_t is the total transmit power and σ² is the noise power, so ρ_t = P_t/σ² is the transmit SNR. The superscript (·)^H denotes the Hermitian transpose. Two metrics, the ergodic capacity and the outage capacity, are usually used to evaluate a MIMO system's performance. The ergodic capacity corresponds to the average capacity of the random channel and is considered in the following analysis. It can be computed as

C̄ = E_H(C),  (3)

where E_H(·) is the average operator. This means that independent channel realizations are needed to obtain the ergodic capacity. For the data processing at each Rx position, we consider both the spatial realizations over 8λ and the frequency realizations within the 40 MHz bandwidth.
The objective of TAS is to choose an optimum antenna subset containing L antennas to maximize the capacity, given by

C_sel = max over all subsets S with |S| = L of log2 det(I_M + (ρ_t/L) H_S H_S^H),  (4)

where H_S is the M × L sub-block matrix of H formed by the selected columns. The capacity gain with antenna selection can be defined as

G = (C_opt − C_no sel)/C_no sel,  (5)

where C_opt is the ergodic capacity with the optimum selection subset, and C_no sel is the capacity without antenna selection, that is, with all transmit antennas employed and sending equal power.
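To make this concrete, the following minimal Python sketch (ours, not the authors' processing code) evaluates (2)/(4) for one channel realization and exhaustively searches the transmit subsets; the channel matrix, the SNR value, and all names are illustrative assumptions.

```python
# Minimal sketch of capacity computation and exhaustive transmit antenna
# selection (TAS). H is an M x N CIR matrix whose entries already include
# pathloss, shadowing, and small-scale fading; the total transmit power is
# fixed, so L selected antennas each radiate rho_t / L.
import itertools
import numpy as np

def capacity(H_sub, rho_t):
    """Capacity (bit/s/Hz) of one realization with equal power per column."""
    M, L = H_sub.shape
    G = H_sub @ H_sub.conj().T                    # M x M Gram matrix HH^H
    return np.log2(np.linalg.det(np.eye(M) + (rho_t / L) * G).real)

def best_tas_subset(H, rho_t):
    """Search all 2^N - 1 non-empty transmit subsets for the max capacity."""
    M, N = H.shape
    best_c, best_s = -np.inf, None
    for L in range(1, N + 1):
        for s in itertools.combinations(range(N), L):
            c = capacity(H[:, list(s)], rho_t)
            if c > best_c:
                best_c, best_s = c, s
    return best_c, best_s

# Toy example: M = 3 receive antennas, N = 7 distributed transmit antennas.
rng = np.random.default_rng(0)
H = (rng.normal(size=(3, 7)) + 1j * rng.normal(size=(3, 7))) / np.sqrt(2)
c_opt, subset = best_tas_subset(H, rho_t=100.0)
c_all = capacity(H, rho_t=100.0)                  # no selection: all 7 used
print(subset, f"gain = {100 * (c_opt - c_all) / c_all:.1f}%")
```

Averaging the per-realization capacity over the spatial and frequency realizations, as in (3), would then give the ergodic capacity used in the comparisons below.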
Antenna Selection Effect on Channel Capacity.
As mentioned above, the channel sounder is a star-shaped D-MIMO system with seven distributed transmit antennas. The receiver also employs seven Rx antennas as a centralized array. In future wireless systems, limited by size and cost requirements, three or fewer antennas are practicable for most mobile terminals (e.g., mobile phones). So we select only three receive antennas (Rx3, Rx4, and Rx5) for the following analysis. These three antennas are located at the center of the array, and the one-wavelength array size is approximately a mobile phone's length. We therefore focus on the performance of a (7, 1, 3) D-MIMO system with TAS.
According to (3), the ergodic capacities without antenna selection are first extracted from the measured data at 24 different positions. The maximum capacity with TAS is then computed by searching all possible antenna selection schemes. To ensure a fair comparison, the total transmit power of all Tx antennas is set to be equal across schemes; if fewer antennas are used, each antenna radiates more power. As illustrated in Figure 3, TAS can yield an obvious capacity gain. Here the transmit SNR is 87 dB, which corresponds to about 20 dB average receive SNR. For all receiver positions, the gains are between 3% and 14%. This capacity gain is related to the Rx locations. In this cabin scenario, the distributed Tx antennas are arranged in a line. When all transmit antennas work together with equal power, some of them are far from the receiver and the energy received from these antennas is small. Thus, the receive SNR is lower than that with TAS. For example, at the front and back of the cabin, antennas at the other end are far from the receiver, so the capacity with TAS can be 10% more than that without selection. In the middle of the cabin, because the distances between the receiver and the different Tx antennas are similar, the capacity gain brought by TAS is relatively smaller.
Moreover, in the front or back of the cabin, the macrodiversity is smaller than in the middle. In our previous work [13], we proved that when the Rx is placed in the back of the cabin (e.g., the 25th row), the signals sent from Tx1 and Tx2 undergo similar reflection and scattering, which leads to strong spatial correlation between Tx1 and Tx2. A similar effect occurs when the receiver is located in the front of the cabin. The macrodiversity effects produced by the distributed Tx antennas are therefore not large in the front or back of the cabin. Comparatively, in the middle the Tx correlations are smaller. Selecting more Tx antennas will increase the channel rank obviously and make the eigenvalues more uniform.
Equation (2) can be rewritten as

C = Σ_{j=1}^{rank(H)} log2(1 + (ρ_t/N) λ_j),  (6)

where rank(H) is the rank of the CIR matrix H and λ_j is the jth eigenvalue of HH^H. According to (6), the improvement in the channel rank and in the minimum eigenvalue can partly counteract the loss of receive energy. Thus, the gap between the two curves is not large when the Rx is in the middle.
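As a quick, purely illustrative numerical check of the equivalence between (2) and (6), the short sketch below compares the determinant form with the eigenvalue sum for one random realization; all values are hypothetical.

```python
# Verify numerically that the determinant form (2) equals the eigenvalue
# sum (6) for a single random channel realization.
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(3, 7)) + 1j * rng.normal(size=(3, 7))
rho_t, N = 100.0, 7
G = H @ H.conj().T                                # HH^H, Hermitian 3 x 3
lam = np.linalg.eigvalsh(G)                       # eigenvalues lambda_j
c_det = np.log2(np.linalg.det(np.eye(3) + (rho_t / N) * G).real)
c_sum = np.sum(np.log2(1 + (rho_t / N) * lam[lam > 1e-12]))
assert np.isclose(c_det, c_sum)
```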
The capacities measured at the 26th and 27th rows are lower than the others, which may be attributable to energy leakage in that area. In Figure 4, the capacity gains with TAS at different SNRs are illustrated. It can be seen that the capacity gain falls as the transmit SNR increases. Similarly, selecting more transmit antennas will improve the CIR matrix rank. According to (6), with high transmit SNR, a larger CIR matrix rank and more uniform eigenvalues can increase the channel capacity even without TAS. For practical communication systems, the receive SNR is usually from −5 dB to 20 dB, approximately corresponding to a transmit SNR from 60 dB to 85 dB. In this range, the capacity gain is visible. Also, it can be seen that the capacity gain in the middle is smaller than at the edges of the cabin.
Optimum Transmit Antenna Selection Scheme.
With different transmit SNRs, the proportions of the optimal antenna number over all 24 positions are shown in Figure 5. It can be seen that with very low SNR, for example lower than 50 dB, at most positions only one antenna is needed to reach the largest capacity. With the increase of transmit SNR, the smallest eigenvalue of the CIR matrix becomes bigger, and more antennas can then provide better performance through a larger multiplexing effect, so the capacity becomes larger. However, even with very high SNR, the probability of selecting six or more antennas is very small. For the application of the D-MIMO system, this means that not all transmit antennas are necessary for one passenger's service. Considering the practical receive SNR, the number of transmitter RF devices can be reduced. In most cases, a receiver with three antennas only needs 2 to 4 transmit antennas.
Another question is then which antenna subset is the optimum one. For seven transmit antennas, there are in total 127 selection schemes. Searching all combinations leads to high processing complexity. According to the foregoing analysis, we notice that the access distance is quite important for a D-MIMO system. One near-optimum scheme is therefore to select the transmit antennas near the receiver. As an example, in Figure 3 we give the results when using about four transmit antennas. The black dot-dash curve with triangle markers corresponds to the ergodic capacity in this case. It can be seen that this TAS scheme's performance is quite close to the optimum one. In practical application, we can therefore simply select the nearest 2 to 4 antennas to serve the passengers in different rows.
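The near-optimum rule itself is only a few lines; in the hypothetical sketch below, the 2.9 m port spacing follows Section 2, while the receiver position and L are assumptions for demonstration.

```python
# Near-optimum rule: pick the L transmit antennas closest to the receiver,
# instead of searching all 127 subsets.
import numpy as np

tx_positions = np.arange(7) * 2.9                 # Tx1..Tx7 along the cabin (m)

def nearest_subset(rx_position, L):
    """Indices of the L transmit antennas nearest to the receiver."""
    order = np.argsort(np.abs(tx_positions - rx_position))
    return tuple(sorted(order[:L]))

print(nearest_subset(rx_position=5.0, L=3))       # -> (1, 2, 3)
```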
Results with Larger Receive Array
In the analysis above, we chose Rx3, Rx4, and Rx5 to form a small (one-wavelength) receive array. A small array means that there exist high correlations among the receive antennas in the long and narrow cabin. The CIR matrix has low rank, and the eigenvalues are distributed non-uniformly, especially at the ends of the cabin. In (6), this means that the SNR is the deterministic factor for the channel.
Now consider a larger receive array consisting of Rx1, Rx4, and Rx7. This array is 3 wavelengths long, corresponding to a laptop's size, so the Rx correlations are smaller. Choosing more transmit antennas will make the eigenvalues distribute more uniformly, which can partly eliminate the loss of energy. Thus, in this situation, the capacity gain with TAS will be lower. The measured results also prove this, as shown in Figure 6. Compared with Figure 4, the capacity gain is no larger than 40%. The differences between the middle and the edges are not as large as those in Figure 4.
Conclusion
The TAS effects on the channel capacity of an in-cabin D-MIMO system were analyzed using measured data. The optimum TAS subsets at different Rx positions were found by searching all possible schemes. The ergodic capacity with optimum TAS was then compared with that without selection. The results showed that TAS could yield a visible capacity gain, especially in the front and back of the cabin. With the increase of the transmit SNR, the capacity gain decreased. The selected antenna number in the optimum subset also depended on the transmit SNR. With low SNR, only a few antennas were needed to provide better performance, while in high-SNR conditions more antennas led to larger capacity. In most cases, the optimum selected number was smaller than 5. One practical selection scheme is to choose the Tx antennas near the receiver. For a receiver with a larger array size, the effects of TAS become smaller. These results provide practical references for future distributed MIMO system applications in the cabin.
Figure 1: Interior arrangement of the measurement environment.
Figure 2: Cross-section of the measurement environment.
Figure 3: Ergodic capacities with different TAS schemes at 24 Rx positions.
Figure 5: The antenna number of optimal TAS subset.
Table 1: The configurations of THU MIMO channel sounder. | 2018-12-11T01:04:53.505Z | 2012-01-16T00:00:00.000 | {
"year": 2012,
"sha1": "4639014fc1c6d57decf0a88ca6a950de0a4d0289",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijap/2012/598049.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4639014fc1c6d57decf0a88ca6a950de0a4d0289",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
86717229 | pes2o/s2orc | v3-fos-license | Association of Serum Levels of Calcitonin Gene-related Peptide and Cytokines during Migraine Attacks
Background: During a migraine attack, trigeminal activation results in the release of calcitonin gene-related peptide (CGRP), which stimulates the release of inflammatory cytokines playing an important role in migraine. Objective: We analyze the relation between CGRP and cytokines during attacks to explore the possible mechanism of migraine. Materials and Methods: Migraine patients and healthy controls were recruited at the Department of Neurology, the Sixth People's Hospital of Fuyang City, between March 2018 and July 2018. The protein levels of interleukin (IL)-1β, IL-2, IL-6, IL-10, tumor necrosis factor-alpha (TNF-α), and CGRP were determined from the sera of patients with migraine and control subjects by enzyme-linked immunosorbent assay kits. Spearman's rank correlation coefficient was also determined to calculate the correlation between CGRP and inflammatory factor levels. Results: The levels of IL-1β, IL-6, TNF-α, and CGRP in the migraine group were significantly higher than in the normal group (P < 0.05). The level of CGRP was significantly correlated with IL-1β (r = 0.30, P < 0.05) and IL-6 (r = 0.94, P < 0.05), but not significantly correlated with IL-2 (r = −0.047, P = 0.75), IL-10 (r = 0.12, P = 0.43), or TNF-α (r = 0.05, P = 0.72). Conclusions: In our study, we found that migraine patients had higher IL-6, IL-1β, and TNF-α levels than healthy controls and that the level of CGRP was significantly related to the levels of IL-1β and IL-6. In conclusion, our results suggest that IL-1β and IL-6 may be involved in the pathogenesis of migraine attacks and that CGRP is related to the secretion of cytokines.
Patients
Patients with migraine were recruited at the Department of Neurology, the Sixth People's Hospital of Fuyang City, between March 2018 and July 2018. The migraine patients were diagnosed by two neurologists according to the 2013 migraine diagnostic criteria of the Headache Classification Committee of the International Headache Society. All patients in the migraine group had not received any prophylactic migraine medication in the previous week and had not taken antibiotics within the previous 3 months. Healthy persons who had no personal or family history of migraine were recruited as controls. All participants underwent a detailed neurological examination and tests including head computed tomography angiography, head magnetic resonance imaging, electrocardiogram, white blood cell count, C-reactive protein (CRP), blood sugar, and lipids. Patients who had non-migraine headache, inflammatory or autoimmune diseases, abnormal CRP levels, hypertension, diabetes mellitus, obesity, metabolic syndrome, or ischemic cerebrovascular disease were excluded from this study. The study was approved by the Ethical Committee of the Sixth People's Hospital of Fuyang City. Written informed consent was obtained from each participant.
Samples
Blood samples were obtained from the jugular vein within 2 h of the onset of migraine headache. [17] Blood samples were also obtained from healthy individuals. The inflammatory factor concentrations in the serum were analyzed. Two 10 ml blood samples were drawn into glass tubes containing 35 μmol of dipotassium-EDTA and 1500 kallikrein inactivator units of trasylol. The tubes were kept in an ice bath and then centrifuged at 2000×g for 15 min at 4°C. The plasma was separated from the cells, stored at −80°C, and analyzed with commercially available enzyme-linked immunosorbent assays (ELISA).
Enzyme-linked immunosorbent assay
The protein levels of IL-1 β, IL-2, IL-6, IL-10, TNF-α, and CGRP were determined from the sera of patients with migraine and control subjects. Commercial ELISA kits were used in accordance with the manufacturers' instructions. The assays were performed in duplicate in 96-well plates, and the results were presented as picograms per milliliter. The collected samples from each of the two groups were analyzed on the same day on one ELISA plate for each protein. The following kits were used: human IL-1β ELISA kit (ab100562; Abcam, Cambridge, MA, USA; detection threshold of 1.5 pg/ml; intra-and inter-assay coefficient of variation of <11%); human IL-2 ELISA kit (ab174444; Abcam, Cambridge, MA, USA; detection threshold of 9 pg/ml; intra-and inter-assay coefficient of variation of <6%); human IL-6 ELISA kit (ab46027; Abcam, Cambridge, MA, USA; detection threshold of 2 pg/ml; intra-and inter-assay coefficient of variation of <8.6%); human IL-10 ELISA kit (ab100549; Abcam, Cambridge, MA, USA; detection threshold of 1 pg/ml; intra-and inter-assay coefficient of variation of <10%); human TNF-α ELISA kit (ab46087; Abcam, Cambridge, MA, USA; detection threshold of 15 pg/ml; intra-and inter-assay coefficient of variation of <10.3%); human CGRP ELISA kit (CEA876Hu; USCN Life Science Inc., Hubei, China; detection threshold of 5.35 pg/ml; intra-and inter-assay coefficient of variation of <12%).
Statistical analysis
The inflammatory factor and CGRP concentrations are presented as mean ± SEM. Differences between groups were statistically analyzed using the t-test, depending on the variables, with SAS 8.0 software (SAS Institute Inc., USA). Spearman's rank correlation coefficient was determined to calculate the correlation between CGRP and inflammatory factor levels. P < 0.05 was considered statistically significant.
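For illustration only, the sketch below reproduces the two analyses named above in Python with SciPy on synthetic serum values (the study itself used SAS 8.0); all numbers and variable names here are hypothetical.

```python
# Spearman's rank correlation and independent t-test on synthetic data.
import numpy as np
from scipy.stats import spearmanr, ttest_ind

rng = np.random.default_rng(1)
cgrp = rng.normal(60, 10, size=47)                # hypothetical CGRP, pg/ml
il6 = 0.9 * cgrp + rng.normal(0, 3, size=47)      # hypothetical IL-6, pg/ml

rho, p = spearmanr(cgrp, il6)                     # rank correlation
print(f"Spearman r = {rho:.2f}, P = {p:.3g}")     # significant if P < 0.05

# Group comparison (migraine vs. control) with an independent t-test:
il6_control = rng.normal(20, 5, size=38)
t, p_t = ttest_ind(il6, il6_control)
print(f"t = {t:.2f}, P = {p_t:.3g}")
```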
Results
Ultimately, 47 patients with migraine (26 females and 21 males, mean age = 35.2 ± 9.3 years) and 38 healthy volunteers (22 females and 16 males, mean age = 32.4 ± 6.1 years) were included. Among the migraine patients, 20 were without aura (MO) (11 females and 9 males, mean age = 34.1 ± 5.3 years) and 27 were with aura (MA) (15 females and 12 males, mean age = 36.3 ± 10.4 years). There was no significant difference between the migraine and control group or MO and MA groups in age and gender (P > 0.05). There were 16 patients with episodic migraine (EM) (fewer than 15 headache days per month) and 31 patients with chronic migraine (CM) (15 or more headache days per month) [ Table 1].
Among all the inflammatory factors and CGRP we detected, the levels of IL-1β, IL-6, TNF-α, and CGRP in the migraine group were significantly higher than in the normal group (P < 0.05) [Table 2]. However, the levels of all inflammatory factors and CGRP were similar between the MO and MA groups (P > 0.05) [Table 3].
Discussion
Migraine is a primary headache disorder that seriously affects patients' daily lives. [1,18] As understanding of the neural pathways has advanced, CGRP-based mechanisms have come into focus. CGRP is found widely in the trigeminovascular nociceptive system, in the trigeminal ganglion and brain stem, and may be involved in migraine at both a central and a peripheral level. Previous studies showed that stimulation of the brainstem could activate the trigeminovascular system and cause the release of CGRP. [19-21] Moreover, several studies showed that the serum CGRP level rose during migraine attacks and fell after effective treatment. The significance of CGRP in migraine pathophysiology has been supported by several lines of clinical evidence. [6,7] In our study, we found that the level of CGRP in the migraine group was significantly higher than in the normal group, but similar between the MO and MA groups. Similar results have been obtained measuring serum levels of CGRP in migraine patients during attacks. While the role of CGRP in migraine attacks is still unclear, several studies suggest that CGRP may cause migraine by inducing the secretion of cytokines.
Cytokines play an important role in several physiological processes, such as inflammation and pain. Pro-inflammatory cytokines, such as TNF-α, IL-1β, and IL-6, and anti-inflammatory cytokines, such as IL-10, have been reported to play a significant role in the modulation of the pain threshold, and they could contribute to the sensitization of trigeminal nerve fibers. [22,23] Previous studies supported a role for cytokines in inflammation and hyperalgesia in migraine. [24,25] Similar to the earlier studies, we found that migraine patients had higher IL-6, IL-1β, and TNF-α levels than healthy controls, but we did not find a significant difference between migraine with aura and migraine without aura. The elevated IL-6, IL-1β, and TNF-α levels support the idea that cytokines may be related to the pathogenesis of migraine. The cause of elevated cytokine levels during attacks is unknown; however, CGRP may play a role in this event. It has been shown that CGRP stimulates the production and release of pro-inflammatory cytokines, such as TNF-α, IL-1β, and IL-6, from human lymphocytes in vitro through binding to specific receptors on these cells. [10,26] In addition, the jugular plasma levels of CGRP are increased in the first hour of migraine attacks and may cause neurogenic inflammation and vasodilation in animal models. Recent data have shown that CGRP, possibly through stimulation of its selective receptors on T-cells, triggers the secretion of cytokines. [27,28] Therefore, it is possible that the pain of a migraine attack is related to the release of CGRP and its ability to induce cytokine secretion from white cells, platelets, and the endothelium. In our study, we found that the level of CGRP correlated significantly with the levels of IL-1β (r = 0.30, P < 0.05) and IL-6 (r = 0.94, P < 0.05).
In conclusion, our results suggest that IL-1β and IL-6 may be involved in the pathogenesis of migraine attacks and that CGRP is related to the secretion of cytokines. Following this hypothesis, possible future drugs that prevent cytokine production or block CGRP release from trigeminovascular endings may be a useful strategy in the treatment of migraine.
In our study, the IL-6 level correlated with the CGRP level most significantly. However, the role of IL-6 in inducing migraine has not yet been fully explored. Several studies showed that IL-6 can activate the mitogen-activated protein kinase (MAPK) signaling pathway in trigeminal ganglion neurons. Moreover, the induction and maintenance of various pain conditions are mediated by activation of the ERK1/2 MAPK pathway. [29,30] Recent work showed that voltage-gated sodium channels, such as Nav1.3 and Nav1.8, can be regulated by the MAPK pathway in DRG neurons, and these voltage-gated sodium channels play an important role in neuralgia. [31,32] Consequently, we believe that migraine may be caused by cytokines, such as IL-6 and IL-1β, induced by CGRP.
Our study certainly had limitations. For example, we collected blood only from the jugular vein because a large number of studies had shown that the level of CGRP in jugular venous blood, but not cubital venous blood, was higher in migraine patients than in healthy volunteers. In the present study, we did not compare CGRP and cytokines between CM and EM patients, or during versus between attacks; we will explore these points in future studies.
Conclusions
In our study, we found that migraine patients had higher IL-6, IL-1β, and TNF-α levels than healthy controls and that the level of CGRP was significantly related to the levels of IL-1β and IL-6. In conclusion, our results suggest that IL-1β and IL-6 may be involved in the pathogenesis of migraine attacks and that CGRP is related to the secretion of cytokines.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Figure 1: (a and b) Correlation between interleukin-1β, interleukin-6, and calcitonin gene-related peptide. Spearman rank correlation analysis showed that the level of calcitonin gene-related peptide was significantly correlated with interleukin-1β (r = 0.30, P < 0.05) and interleukin-6 (r = 0.94, P < 0.05). | 2019-03-28T13:33:38.290Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "d1cc30fee48ea39d6dd6e9b8dd7f88bbebf56ee0",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/aian.aian_371_18",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "d1cc30fee48ea39d6dd6e9b8dd7f88bbebf56ee0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266596476 | pes2o/s2orc | v3-fos-license | Differentiation grade as a risk factor for lymph node metastasis in T1 colorectal cancer
Abstract Objectives Japanese guidelines include high‐grade (poorly differentiated) tumors as a risk factor for lymph node metastasis (LNM) in T1 colorectal cancer (CRC). However, whether the grading is based on the least or most predominant component when the lesion consists of two or more levels of differentiation varies among institutions. This study aimed to investigate which method is optimal for assessing the risk of LNM in T1 CRC. Methods We retrospectively evaluated 971 consecutive patients with T1 CRC who underwent initial or additional surgical resection from 2001 to 2021 at our institution. Tumor grading was divided into low‐grade (well‐ to moderately differentiated) and high‐grade based on the least or predominant differentiation analyses. We investigated the correlations between LNM and these two grading analyses. Results LNM was present in 9.8% of patients. High‐grade tumors, as determined by least differentiation analysis, accounted for 17.0%, compared to 0.8% identified by predominant differentiation analysis. A significant association with LNM was noted for the least differentiation method (p < 0.05), while no such association was found for predominant differentiation (p = 0.18). In multivariate logistic regression, grading based on least differentiation was an independent predictor of LNM (p = 0.04, odds ratio 1.68, 95% confidence interval 1.00–2.83). Sensitivity and specificity for detecting LNM were 27.4% and 84.1% for least differentiation, and 2.1% and 99.3% for predominant differentiation, respectively. Conclusions Tumor grading via least differentiation analysis proved to be a more reliable measure for assessing LNM risk in T1 CRC compared to grading by predominant differentiation.
INTRODUCTION
Lymph node metastasis (LNM) occurs in approximately 10% of T1 colorectal cancer (CRC) cases, 5,6 so it is important to assess the need for additional surgical resection with lymph node dissection after the endoscopic resection of T1 CRCs based on the risk of LNM. Current Japanese guidelines include positive lymphovascular invasion, poorly differentiated adenocarcinoma (Por)/mucinous carcinoma (Muc)/signet-ring cell carcinoma (Sig), depth of invasion ≥1000 µm, and tumor budding grade 2 or 3 as established risk factors for LNM. 7-10 However, some issues with these guidelines need to be overcome to achieve more standardized criteria. 11 One of these is how to assess tumor differentiation grade when the lesion consists of two or more levels of histological differentiation, because the classification of tumor grade differs between guidelines. 12 Tumor grading is classified by the least differentiation according to the World Health Organization (WHO) classification, and by the predominant differentiation using Japanese guidelines (Figure 1). 13,14 Previous Japanese studies have varied in which of these methods they used. 16-25 Two recent expansive Japanese multicenter studies encompassing over 4000 T1 CRC cases diverged in their methodologies: one utilized predominant differentiation analysis, the other least differentiation analysis (Table 1). In other words, the decision to perform additional surgical resection following endoscopic resection depends on whether the analysis is based on the least or the most predominant differentiation.
This study aimed to retrospectively determine which method of tumor grading classification, the least or the predominant differentiation analysis, is optimal for assessing the risk of LNM in T1 CRC.
Patients and study design
We included data from all patients with pathologically diagnosed T1 CRC who underwent primary or secondary surgical resection with lymph node dissection from April 2001 to October 2021 at Showa University Northern Yokohama Hospital.
Endpoints
The endpoint of this study was to determine whether the least or the predominant differentiation grading classification was the better variable for predicting the risk of LNM in T1 CRC when the lesion consisted of two or more levels of differentiation. In our study, tumor grading was bifurcated into low- and high-grade categories. Adenocarcinomas with good differentiation (G1, characterized by >95% gland formation) and moderate differentiation (G2, with 50-95% gland formation) were designated as low-grade. Poorly differentiated adenocarcinomas (Por; G3, exhibiting <50% gland formation) were categorized as high-grade. Furthermore, Muc and Sig were classified as high-grade because both are considered equivalent to high-grade due to their associated risk of LNM in Japanese guidelines (Table 2). The predominant grade was defined by the largest area among two or more components (Figure 1).
(The component entries of Table 2 include moderately differentiated adenocarcinoma and papillary adenocarcinoma among the low-grade types, and mucinous carcinoma and signet-ring cell carcinoma among the high-grade types.)
Clinical and pathological data
Patient characteristics analyzed included age, sex, tumor location, tumor size, tumor morphology, lymphovascular invasion, tumor grade (least and predominant differentiation), tumor budding, depth of submucosal invasion, and LNM status. The rectum was defined as the area between the upper border of the anal canal and the lower border of the second sacral vertebra. Tumor size was measured after formalin fixation. Tumor morphology was classified as pedunculated or non-pedunculated according to the Paris classification. 26 All resected specimens were retrieved and immediately fixed in 10% buffered formalin. They were then cut at the point where the deepest invasion area could be exposed on the cut end surface, with 2-3-mm-thick sections, and stained with hematoxylin and eosin (H&E). All specimens were diagnosed by a single pathologist, adhering to the 2019 WHO Classification of Tumors and the prevailing guidelines of the Japanese Society for Cancer of the Colon and Rectum (JSCCR). 7,14 Lymphatic invasion was diagnosed by H&E staining and immunostaining with the D2-40 antibody, and vascular invasion was diagnosed by double staining with H&E and Victoria Blue or Elastica van Gieson. Tumor budding was defined as a cancer cell nest consisting of one or fewer than five cells infiltrating the interstitium at the invasive margin of the cancer. After selecting the region with the most tumor budding, the front of the tumor growth was observed at 200× magnification to count the number of tumor buds: BD1, 0-4; BD2, 5-9; and BD3, ≥10. The depth of submucosal invasion was classified according to JSCCR guidelines as <1000 µm (T1a) and ≥1000 µm (T1b). Criteria for subsequent bowel resection were applied when any of the following parameters were identified in the endoscopically resected specimen, in accordance with the JSCCR guidelines: (1) T1b (depth of submucosal invasion ≥1000 µm), (2) positive lymphovascular invasion, (3) poorly differentiated adenocarcinoma, signet-ring cell carcinoma, or mucinous carcinoma, or (4) a budding grade of BD2/3 at the point of deepest invasion. In our study, we utilized the least differentiation analysis to assess tumor grading for the criteria guiding secondary surgery. Operative specimens were used as the gold standard for the presence or absence of LNM.
Statistical analysis
Continuous variables were reported as the mean ± standard deviation. Dichotomous variables were compared using chi-squared or Fisher's exact tests, as appropriate. Multivariate logistic regression analysis regarding LNM was subsequently performed to calculate odds ratios (ORs) and 95% confidence intervals (CIs). McNemar's test was used to compare the sensitivity and specificity between the two methods. All statistical analyses were performed using R for Windows 4.0.3. All p-values were two-sided, and p < 0.05 was considered statistically significant.
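As a hedged illustration of these analyses (the study used R 4.0.3; the Python/statsmodels sketch below is ours and runs on synthetic data), odds ratios are the exponentiated logistic-regression coefficients, and McNemar's test compares the two paired grading classifications; every variable name and count here is an assumption for demonstration.

```python
# Logistic regression for ORs and McNemar's test on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "lnm": rng.integers(0, 2, 971),               # lymph node metastasis (0/1)
    "high_grade_least": rng.integers(0, 2, 971),  # least-differentiation grade
    "lymphovascular": rng.integers(0, 2, 971),
})

# Multivariate logistic regression: exponentiated coefficients are ORs.
X = sm.add_constant(df[["high_grade_least", "lymphovascular"]])
fit = sm.Logit(df["lnm"], X).fit(disp=0)
print(np.exp(fit.params))                         # odds ratios
print(np.exp(fit.conf_int()))                     # 95% CIs

# McNemar's test compares the two paired classifications (least vs.
# predominant) of the same tumors; the 2x2 counts here are illustrative.
table = [[800, 8], [157, 6]]
print(mcnemar(table, exact=True).pvalue)
```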
Ethical considerations
This study was approved by the institutional review board of Showa University Northern Yokohama Hospital (approval no. 20H022) and was registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN 000042622). Written informed consent was obtained from all patients before treatment.
Study cohort
Figure 2 shows the study flowchart. A total of 1038 patients with pT1 CRC underwent initial or additional surgical resection with lymph node dissection during the study period. Of these, 28 patients with synchronous invasive cancers at the time of resection, three with suspected Lynch syndrome, two with familial adenomatous polyposis, four with inflammatory bowel disease, one with pre-operative chemo- and/or radiotherapy, and 29 with missing data in patient files and local registries were excluded. Thus, 971 patients were eligible and included in the analyses. Table 3 presents the clinicopathological characteristics of the patients in this study. The LNM rate was 9.8% (95/971), and the mean number of dissected lymph nodes was 20 ± 11 (median, 19).
The associations between various factors and LNM, as well as the ORs for clinicopathological determinants in both univariate and multivariate logistic regression analyses, are delineated in Tables 5 and 6. Univariate analysis revealed a significant correlation between LNM and tumor grade with the least differentiation analysis (p < 0.01), in contrast to the non-significant association with tumor grade via predominant differentiation analysis (p = 0.18). Multivariate logistic regression substantiated tumor grade via least differentiation analysis as an independent prognostic factor (p = 0.04, OR 1.68, 95% CI 1.00-2.83), whereas tumor grade based on predominant differentiation did not retain independent predictive value (p = 0.40, OR 2.07, 95% CI 0.38-11.2).
DISCUSSION
In this study, we investigated which method of assessing histological differentiation grade as a risk factor for LNM in patients with T1 CRC (the least or predominant differentiation analysis) was superior with respect to diagnostic performance. We concluded that tumor grade, when assessed through least differentiation analysis, emerged as an independent prognostic factor for LNM, unlike when evaluated via predominant differentiation. In addition, the least differentiation analysis showed higher sensitivity, reducing the risk of misclassifying patients with LNM as negative in T1 CRC, although the predominant differentiation analysis showed higher specificity, preventing potentially unnecessary surgeries.
The WHO classification uses the least differentiation analysis, while JSCCR guidelines adopt the predominant differentiation analysis. However, whether tumor grade is assessed according to the least or the predominant component actually depends on the institution in Japan (Table 1). When classified by the predominant differentiation analysis, 0.8% (8/971) of the cohort was defined as high-grade in the present study, which is comparable to the 0.6%-5.0% found in previous studies using the predominant differentiation analysis. 16,20,22,23 This suggests that this method may lack the sensitivity to act as a risk factor. This study underscores and contributes to the discourse on this issue. When classified by the least differentiation analysis, 17.0% (165/971) of the cohort was defined as high-risk for LNM in the present study, compared with 8.2%-16.4% in previous studies. 8,17,18,25 Sensitivity was higher when using the least differentiation analysis (27.4%) compared with the predominant differentiation analysis (2.1%). This suggests that classifying tumor grade according to the least differentiation analysis is preferable to avoid missing an LNM-positive case. However, specificity was lower using the least differentiation analysis (84.1%) than the predominant differentiation analysis (99.3%), which could lead to an increase in unnecessary surgeries. Our dataset lacked LNM-positive cases that were deemed high risk through least differentiation analysis yet low risk by predominant differentiation analysis, thus precluding a direct assessment of least differentiation's efficacy. This could be attributable to the high incidence of T1b (88.7%) and the strong correlation between lymphovascular invasion and LNM. In this study, the assessment of lymphovascular invasion, one of the most reliable predictors for LNM, was supplemented with immunohistochemical staining techniques, including D2-40 for lymphatic invasion and Victoria Blue/Elastica van Gieson for vascular invasion, in addition to standard techniques. As a result, the positivity rate of lymphovascular invasion (55.2%) was higher than that reported in other studies (39%-42%), leading to a higher odds ratio for LNM of 11.5, compared with 4.2-8.1 reported in previous studies. 8,27 This study had several limitations. First, it was a retrospective single-center study limited by a relatively small sample size compared with recent large-scale multicenter studies of over 4000 T1 CRC cases. 5,28 Secondly, our analysis did not consider T1 CRCs managed solely through endoscopic treatment, where LNM status remained undetermined. Consequently, this exclusion encompassed 29 high-grade cases identified by least differentiation analysis and none by predominant differentiation analysis, potentially introducing selection bias into our study results.
In conclusion, tumor grading with the least differentiation analysis was an independent risk factor for LNM in T1 CRC. The least differentiation analysis of tumor grading had a higher sensitivity for LNM in T1 CRC than the predominant differentiation analysis.
Figure 2: Study flow chart.
Table 1: Published studies reporting variables used to assess tumor grade in Japanese institutions (the least or predominant differentiation analyses).
Table 2: Classification of tumor grading in this study.
Predictive value of tumor grade for lymph node metastasis as a single predictor (the least vs. predominant differentiation analyses).
Relationships between clinicopathological factors and least differentiation analysis.
Relationships between clinicopathological factors and lymph node metastasis (predominant differentiation analysis). | 2023-12-30T05:08:00.521Z | 2023-12-28T00:00:00.000 | {
"year": 2023,
"sha1": "62a21abce52992d6dc159b4c409d89938b703046",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "62a21abce52992d6dc159b4c409d89938b703046",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216469498 | pes2o/s2orc | v3-fos-license | Study of result of postero medial soft tissue release (PMSTR) in congenital talipes equinus varus (CTEV) children with Pirani score
Specific Objectives: Congenital talipes equinovarus (clubfoot) is the most common congenital deformity of the foot. It can be treated by the Ponseti technique with a high rate of success, but surgery is indicated for rigid deformities that cannot be corrected by serial casting. In our study, our aim is to assess the outcome of posteromedial soft tissue release (PMSTR) in children with rigid CTEV using the Pirani score. Material and Methods: Between November 2018 and November 2019, we treated twenty (20) feet with idiopathic CTEV with PMSTR. All children with idiopathic clubfoot were screened. A first trial of CTEV casting was given, up to 3 casts. Children with clubfoot deformity that was not corrected with CTEV casts were selected. Deformity was assessed with the pre-operative Pirani score. Blood investigations and clinical examination were done. Surgically fit children were posted for posteromedial soft tissue release. After the operative procedure, the correction achieved was measured with the immediate postoperative Pirani score. An above-knee slab was then given to the patient until stitch removal (12 days). After the stitches were removed, an above-knee POP cast was applied to maintain correction. The patient was seen at the hospital every 15 days: the old cast was removed and the correction was assessed with the Pirani score. A below-knee POP cast was applied every 15 days and the Pirani score was assessed at each visit. Parents were encouraged to walk their child with the cast. This procedure was followed until satisfactory correction was achieved (usually 6-8 casts). Results: All 20 feet were treated in children aged 7 months to 48 months, with a mean age of 19 months. Postoperatively the patients were followed up every 15 days up to 6 months, and each time the correction was noted with the Pirani score. Of the 20 feet evaluated, 12 children (60%) had an excellent result with a final Pirani score <1, 4 children (20%) had a good result with a final Pirani score <2, and 4 children (20%) had a fair result with a final Pirani score >2. Conclusion: From this study, we conclude that all children with congenital talipes equinovarus (CTEV) deformity should first be treated with casting. All patients with a rigid idiopathic clubfoot should be given casts preoperatively, not with a view to achieving complete correction but only to stretch the soft tissues and decrease the chances of post-operative skin breakdown. One-stage posteromedial soft tissue release is an excellent mode of treatment for those children with CTEV deformity that is not corrected by serial casting.
Introduction
Idiopathic congenital talipes equinovarus (clubfoot) is a common complex deformity that occurs in approximately one or two per 1000 newborns [1]. The long-term goal of treatment is a functional, pain-free, plantigrade foot with good mobility, without calluses, that allows the patient to walk comfortably in normal shoes [2,3]. Treatment of congenital talipes equinovarus (clubfoot) begins as soon as possible with serial casting techniques [4], with success rates of 20-95% [5]. However, when serial casting fails, when the deformity recurs, or when parents seek medical intervention too late, surgical treatment can be performed. There are different types of surgical procedures, according to the remaining deformities, ranging from simple posterior release and tendon transfer to extensive procedures like posteromedial release and complete subtalar release [7]. Theoretically, as the child becomes older, the soft tissues become more contracted and more difficult to correct because of the long-standing deformity and secondary contractures.
Turco [8] reported that the best results from the surgical treatment of congenital clubfoot were obtained in children operated on between the ages of one and two years, and that thereafter the number of excellent results diminished as the age at operation increased.
Materials and Methods
This study was conducted in the Department of Orthopedic Surgery, Baroda Medical College, Vadodara, with twenty (20) feet with rigid idiopathic congenital talipes equinovarus (CTEV) that were evaluated and operated on below the age of 5 years between November 2018 and November 2019.
Exclusion criteria:
The exclusion criteria were clubfoot secondary to other disorders, such as cerebral palsy, arthrogryposis multiplex congenita, and myelodysplasia.
Inclusion criteria:
1. Idiopathic clubfoot.
2. Child with a rigid deformity of the foot that cannot be corrected by corrective casts (any one of the deformities equinus, cavus, forefoot adduction, or inversion not corrected after 3 consecutive casts).
3. Neglected clubfoot that is not corrected by casting.
Operative technique
The skin was incised horizontally from the base of the first metatarsal to the lateral side of the tendo Achillis, which was lengthened, as were the tendons of tibialis posterior, flexor hallucis longus, and flexor digitorum longus, using a Z-technique. The posterior tibial neurovascular bundle was identified and isolated along the entire length of the incision. The posterior talofibular and calcaneofibular ligaments, the posterior third of the deep deltoid ligament, the superficial deltoid and the talocalcaneal interosseous ligaments, the spring ligament, and the Y ligament were all divided. Complete release by capsulotomies of the ankle, subtalar, talonavicular, and first tarsometatarsal joints was performed until mobilization of the ankle, hindfoot, and midfoot was obtained. In all cases, irrespective of the presence of cavus deformity, a plantar fascia release was performed near its origin; flexor digitorum brevis and abductor hallucis were also released from their proximal insertions to allow forefoot correction. Posterior capsule release of the ankle and subtalar joints was necessary to correct hindfoot equinus. A Z-plasty was done over the tendo Achillis according to the severity of the equinus deformity. Sutures were removed after 12 days, followed by manipulation and cast application. Patients were followed up postoperatively at regular 15-day intervals. Postoperatively, an above-knee fibre cast was given and the patient was encouraged to walk with the POP. Mobilisation with a modified above-knee fibre cast allows the tarsal bones to be realigned: every 15 days the old cast was removed, the correction was recorded with the Pirani score, and a new cast was given. This sequence was followed for up to 6 to 8 casts.
Clinical assessment: Pirani scoring
The Pirani score is a simple, easy-to-use tool for assessing the severity of each of the components of a clubfoot. It is extremely useful for assessing the severity of the clubfoot at presentation and for monitoring the patient's progress. The Pirani score should be recorded at each visit the patient makes. If the Pirani score increases from one visit to the next, it may indicate that a relapse of deformity is occurring. The components are scored as follows:
Pirani scoring
Each component may score 0, 0.5, or 1. Hindfoot Contracture Score (HFCS): 1. Posterior crease; 2. Empty heel; 3. Rigid equinus.

Results

Residual deformity was appreciated clinically in 4 feet; among the residual deformities, forefoot adduction was present. All feet were considered to have a normal range of ankle and subtalar movement. Normal evertor power was observed in every foot, and no child had gait abnormality or limping. Among the 20 feet, 12 feet had an excellent result, 4 feet had a good result, and 4 feet had a fair result. Among the complications, 4 children had residual deformity and 5 children had plaster sores. No infection was found among these children.
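As an aside, the component-sum scoring and the visit-to-visit relapse rule described in the Pirani section above can be expressed as a minimal sketch; the midfoot component names and the 0-6 total are assumptions drawn from common Pirani usage, since this paper lists only the three hindfoot items.

```python
# Minimal sketch of Pirani-style scoring and the relapse rule described above.
# The midfoot component names and the 0-6 total are assumptions based on
# common Pirani usage; this paper lists only the three hindfoot items.

HINDFOOT = ("posterior_crease", "empty_heel", "rigid_equinus")
MIDFOOT = ("curved_lateral_border", "medial_crease", "talar_head_coverage")
VALID_SCORES = {0.0, 0.5, 1.0}

def pirani_total(components):
    """Sum the per-component scores (each 0, 0.5 or 1)."""
    for name, score in components.items():
        if score not in VALID_SCORES:
            raise ValueError(f"{name}: expected 0, 0.5 or 1, got {score}")
    return sum(components.values())

def relapse_suspected(previous_total, current_total):
    """Flag a possible relapse when the score rises between visits."""
    return current_total > previous_total

visit1 = dict.fromkeys(HINDFOOT + MIDFOOT, 0.5)    # total 3.0
visit2 = {**visit1, "rigid_equinus": 1.0}          # total 3.5
print(pirani_total(visit1), pirani_total(visit2))  # 3.0 3.5
print(relapse_suspected(pirani_total(visit1), pirani_total(visit2)))  # True
```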
Discussion
Clubfoot is one of the most common congenital abnormalities of the foot. Most clubfeet can be managed very well conservatively using the Ponseti technique. However, it is still not unusual to find untreated or partially treated clubfoot in developing countries.
The limitations of conservative treatment are seen in older children, when the deformity becomes rigid and secondary changes start to occur in the skeleton of the foot. Posteromedial soft tissue release is used to treat those children up to 4 years of age; after 4 years of age the patient may require a bony procedure. We also conclude that, for the severe and resistant variety of clubfoot, after initial casting by the Ponseti method, one-stage posteromedial soft tissue release is a good method of treatment for achieving better correction.
Conclusion
From this study, we conclude that all children with congenital talipes equinovarus (CTEV) deformity should first be treated with casting. All patients with a rigid idiopathic clubfoot should be given casts preoperatively, not with a view to achieving complete correction but only to stretch the soft tissues and decrease the chances of postoperative skin breakdown. One-stage postero-medial soft tissue release is an excellent mode of treatment for those children with CTEV deformity which is not corrected by serial casting. PMSTR should be done between the ages of 1 and 2 years for an excellent result; after this, results may worsen as age increases. Among the complications, residual deformity is seen in those children who presented to hospital at a late age (3-4 years) without any prior casting.
At the end, we can say that PMSTR is an excellent mode of treatment for the rigid type of clubfoot which is not corrected by serial casting by the Ponseti method. | 2020-03-12T10:57:42.497Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "cdc10a6e6ffb580fce2902d7128a3b4abe9233ba",
"oa_license": null,
"oa_url": "https://www.orthopaper.com/archives/2020/vol6issue1/PartK/6-1-67-634.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "707d789d9c1e4aa95ab7646dd42661aa0c9eec45",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25535776 | pes2o/s2orc | v3-fos-license | In vitro antimicrobial activity of five essential oils on multidrug resistant Gram-negative clinical isolates
Aim/Background: The emergence of drug-resistant pathogens has drawn attention on medicinal plants for potential antimicrobial properties. The objective of the present study was the investigation of the antimicrobial activity of five plant essential oils on multidrug resistant Gram-negative bacteria. Materials and Methods: Basil, chamomile blue, origanum, thyme, and tea tree oil were tested against clinical isolates of Acinetobacter baumannii (n = 6), Escherichia coli (n = 4), Klebsiella pneumoniae (n = 7), and Pseudomonas aeruginosa (n = 5) using the broth macrodilution method. Results: The tested essential oils produced variable antibacterial effect, while Chamomile blue oil demonstrated no antibacterial activity. Origanum, Thyme, and Basil oils were ineffective on P. aeruginosa isolates. The minimum inhibitory concentration (MIC) and minimum bactericidal concentration values ranged from 0.12% to 1.50% (v/v) for tea tree oil, 0.25-4% (v/v) for origanum and thyme oil, 0.50% to >4% for basil oil and >4% for chamomile blue oil. Compared to literature data on reference strains, the reported MIC values were different by 2SD, denoting less successful antimicrobial activity against multidrug resistant isolates. Conclusions: The antimicrobial activities of the essential oils are influenced by the strain origin (wild, reference, drug sensitive, or resistant) and it should be taken into consideration whenever investigating the plants’ potential for developing new antimicrobials.
INTRODUCTION
Multidrug resistant Gram-negative bacteria have become a major worldwide problem in the hospital environment and are the main causes of hospital-acquired or healthcare-associated infections, not excluding their potential for transmission in the community [5,6]. Such bacterial isolates can be resistant to all currently available antibiotics or may remain susceptible only to older agents such as the polymyxins [7].
One of the actions to mitigate the drug-resistance problem includes the development of new antimicrobials, and in this sense essential oils are being investigated for potential antibacterial activities. Many plant oils or extracts have been reported to have antimicrobial properties, attributed to their ability to synthesize aromatic substances, most of which are phenols or oxygen-substituted derivatives [8,9]. However, most of the published studies deal with either nonpathogenic or reference bacterial strains, and there is a scarcity of data about wild multidrug resistant isolates. The objective of this study was to determine the antimicrobial activity of plant essential oils against multidrug resistant Gram-negative bacteria isolated from clinical samples; these oils are widely used in studies with non-pathogenic or reference strains, but their actual effect against resistant pathogens is hardly addressed in the available literature.
Microorganisms
The bacterial strains used in this study were A. baumannii (n = 6), E. coli (n = 4), K. pneumoniae (n = 7) and P. aeruginosa (n = 5), isolated from blood cultures (n = 9), urine (n = 5), vascular catheters (n = 2), and wound swabs (n = 6) collected from an equal number of hospitalized patients at the University Hospital of Ioannina, Greece. Based on the susceptibility tests [Table 1], all the K. pneumoniae isolates were carbapenemase-producing and the E. coli isolates were producing ESBLs. Among the carbapenemase-producing K. pneumoniae strains, four were resistant to all tested antibiotics, while the rest were sensitive only to colistin. Resistance to colistin and tigecycline was further confirmed by the E-test (BioMerieux SA, France). Identification to species level was performed using the VITEK® 2 automated system (BioMerieux SA, France). This system uses advanced colorimetry, an identification technology enabling identification of routine clinical isolates (bacteria, yeast), antibiotic susceptibility testing, and resistance mechanism detection.
The selected Gram-negative isolates were stored at -70°C in Microbank® beads (Prolab Diagnostics, Canada), a ready-to-use system for the storage and retrieval of bacterial isolates, comprising cryovials that incorporate treated beads and a special cryopreservative solution enhancing longer survival of fastidious microorganisms and higher quantitative recoveries. Prior to any experimentation, the cryopreserved isolates were revived by subculturing in appropriate culture media.
Essential Oils
The five essential oils tested (basil, chamomile blue, origanum, thyme, and tea tree) were supplied by Sigma-Aldrich Co. For the commercial oils used, the supplier provided no data about their contents or chemical analysis, which is presumed to be the company's copyright. However, a simple chemical analysis was performed in order to obtain a gross estimate of the components of the employed essential oils. For the identification of the components, a QP 5000 Shimadzu instrument, equipped with a DB-5-MS capillary column, 30 m × 0.32 mm, 0.25 μm, containing 5% phenyl-methylpolysiloxane (J&W Scientific, Folsom, CA, USA), was employed. The gas chromatography oven temperature was programmed as follows: initial temperature 55°C, ramped at 5°C/min to 200°C, then at 1°C/min to 210°C (held for 2 min), and finally ramped to 270°C at 20°C/min and held for 3 min. The injector was set to 240°C in splitless mode. The ion source and transfer line were kept at 240°C and 290°C, respectively. In full-scan mode, electron ionization mass spectra at m/z 50-450 were recorded at 70 eV. Helium was used as the carrier gas at 1.5 mL/min.
Determination of Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC)
Broth macrodilution assays were performed to determine the MIC and MBC for each essential oil, according to the Clinical and Laboratory Standards Institute (CLSI) protocol M7-A8, with some modifications [10].
Each essential oil was dispersed in a sterile tube containing Mueller-Hinton broth (MHB; Oxoid, UK) and vortexed at room temperature to obtain an initial stock solution of 8% (v/v). Subsequently, serial two-fold dilutions were prepared in sterile tubes containing MHB supplemented with 0.5% (v/v) Tween 20 (Serva, Germany). The final concentrations of each essential oil were 4, 2, 1, 0.5, 0.25, and 0.125% (v/v).
Overnight bacterial cultures on Mueller-Hinton agar (MHA; Oxoid, UK) were used to prepare the bacterial inocula. Each inoculum was adjusted with sterile saline to obtain a final suspension with turbidity analogous to that of the 0.5 McFarland standard, which equals a concentration of 1-1.5 × 10⁸ cfu/ml [11,12]. About 10 μl of the prepared bacterial inoculum were transferred to each tube containing the serial two-fold dilutions of the essential oil, giving a final bacterial concentration of 5 × 10⁵ cfu/ml. The tubes were incubated aerobically at 37°C for 48 h. After the end of incubation, 10 μl of each dilution were inoculated onto MHA plates and incubated at 37°C for 24 and 48 h in order to determine the MIC and MBC, respectively. The MIC and MBC values were determined by viable counts on MHA; the MIC was defined as the lowest concentration at which the inoculum viability was reduced by at least 90%, and the MBC as the lowest concentration at which the inoculum viability was reduced by at least 99.9% or no apparent growth occurred [13].
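By way of illustration, the dilution series and the MIC/MBC read-out rules defined above can be expressed as a short sketch; the colony counts below are hypothetical and the helper names are ours, not study data or a standard API.

```python
# Sketch of the two-fold dilution series and the MIC/MBC read-out rules
# described above. The colony counts are hypothetical, not study data.

INOCULUM = 5e5  # cfu/ml, final bacterial concentration in each tube

def dilution_series(stock=8.0, n=6):
    """Two-fold dilutions of the stock: 4, 2, 1, 0.5, 0.25, 0.125 %(v/v)."""
    return [stock / 2 ** i for i in range(1, n + 1)]

def read_out(counts, threshold):
    """Lowest concentration whose surviving count is at or below the threshold."""
    passing = [conc for conc, cfu in counts.items() if cfu <= threshold]
    return min(passing) if passing else None

# hypothetical viable counts (cfu/ml) after incubation, keyed by %(v/v)
counts = {4.0: 0, 2.0: 0, 1.0: 3e2, 0.5: 2e4, 0.25: 4e5, 0.125: 5e5}

mic = read_out(counts, 0.10 * INOCULUM)    # >=90% reduction in viability
mbc = read_out(counts, 0.001 * INOCULUM)   # >=99.9% reduction / no growth
print(dilution_series())                   # [4.0, 2.0, 1.0, 0.5, 0.25, 0.125]
print(f"MIC = {mic}% (v/v), MBC = {mbc}% (v/v)")  # MIC = 0.5%, MBC = 1.0%
```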
Statistical Analysis
Statistical analysis was performed in SPSS (version 22.0; Armonk, NY: IBM Corp). The exhibited MICs and MBCs were grouped according to oil type and checked for normality by the Shapiro-Wilk test. Comparison between oil types was performed by one-way ANOVA, and differences between oil types were estimated by Tukey's test.
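The pipeline described above (per-group normality check, one-way ANOVA, then Tukey's post hoc comparisons) can also be reproduced outside SPSS; a minimal sketch in Python using scipy and statsmodels, with made-up MIC values purely for illustration, might look as follows.

```python
# Illustrative re-creation of the analysis pipeline described above:
# Shapiro-Wilk per oil, one-way ANOVA across oils, then Tukey's HSD.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
mics = {  # hypothetical MIC values, %(v/v), grouped by oil type
    "basil":    rng.choice([0.5, 1.0, 2.0, 4.0], size=20),
    "origanum": rng.choice([0.25, 0.5, 1.0], size=20),
    "tea_tree": rng.choice([0.12, 0.25, 0.5], size=20),
    "thyme":    rng.choice([0.25, 0.5, 1.0, 2.0], size=20),
}

for oil, values in mics.items():                   # normality per group
    print(oil, "Shapiro-Wilk p =", stats.shapiro(values).pvalue)

f_stat, p_value = stats.f_oneway(*mics.values())   # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(mics.values()))
groups = np.repeat(list(mics.keys()), 20)
print(pairwise_tukeyhsd(values, groups))           # pairwise comparisons
```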
RESULTS
The antimicrobial susceptibility of the tested clinical isolates is presented in Table 1, and the MIC and MBC values of the selected essential oils against the tested drug-resistant isolates are presented in Tables 2 and 3. Basil, origanum, tea tree, and thyme essential oils presented antibacterial activity, but chamomile blue oil demonstrated no antibacterial action at all. Tea tree oil demonstrated consistent antimicrobial activity against all the tested clinical isolates, and all four active oils inhibited the growth of A. baumannii isolates. However, the growth-inhibiting effect of origanum, thyme, and basil oils on P. aeruginosa was poor [Tables 2 and 3].
Statistically significant differences between the tested essential oils were determined by one-way ANOVA (F[3,133] = 7.403, P = 0.002). The basil oil's MIC and MBC values were significantly higher than the respective origanum and tea tree oil values (P < 0.05), but not statistically different from the thyme oil's values. The thyme oil MICs and MBCs were significantly higher than the respective tea tree oil values (P < 0.05), but not statistically different from the corresponding origanum oil values. For the tea tree oil, the recorded MIC and MBC values were not statistically different (P > 0.05) from the respective origanum oil values.
Regarding the chemical composition of the essential oils used in this study, the most abundant component of basil oil was estragole. Carvacrol and thymol were identified as the main constituents of origanum oil. The composition of tea tree oil presented high contents of terpinen-4-ol and p-cymene. The prevailing molecules of thyme oil were thymol, p-cymene, and linalool. Chamomile blue oil was rich in bisabolol and trans-β-farnesene. Typical chromatograms of the essential oils examined are shown in Figure 1.
DISCUSSION
The rapid evolution and spread of resistance among clinically important bacterial species constitutes a significant issue of utmost importance for public health. The emergence of antimicrobial resistance is the consequence of the selective pressure imposed on microorganisms by the excessive use of antimicrobials, mostly in medical and veterinary practice. The major issue of this important health problem is that the appearance of resistance to antibiotics reduces the currently available therapeutic options for the treatment of infectious diseases, signifying the need for the development of new antibiotic compounds. Plants produce a vast variety of phytochemicals that demonstrate a diversity of medicinal properties, including antimicrobial effects. The principal phytochemicals present in plants are essential oils, phenolic compounds, alkaloids, polypeptides, and polyacetylenes [9].
Essential oils have shown antimicrobial properties against a number of Gram-negative and Gram-positive bacteria, and overall, their activity against microbial cells of the same genera and species, determined under the same conditions, appears to be similar. However, some bacterial isolates may show a different response in comparison to the type strains [14,15]. Hence, to reach a decision on the antimicrobial activities of essential oils, it is important to use strains from different origins in order to simulate a more realistic situation, instead of just using reference strains that may not reflect the actual behavior of the strains found in nature, particularly in clinical practice. The majority of the available published studies make use of reference strains, not clinical multidrug resistant isolates, and variable findings are recorded due to the diversity of the methodologies used.
Pertaining to the specific essential oils and isolates used in the present study, there are only two publications concerning the antimicrobial effect against clinical isolates. According to da Costa et al. [16], origanum oil inhibited A. baumannii, E. coli, and K. pneumoniae clinical isolates at an MIC of 0.12% (v/v) and P. aeruginosa at 0.5% (v/v), while in our study it was significantly less effective (by 2SD). Compared to the MIC values reported in the literature for reference strains [3,18-21], the MIC values recorded in our study [Table 3] were much different (by 2SD), denoting much less successful antimicrobial activity of the origanum and basil oils against multidrug resistant clinical isolates.
The MIC values reported in the present study and the respective values reported for K. pneumoniae reference strains [3] differ by 2SD and 1SD for origanum and thyme oils, respectively, indicating a less successful antimicrobial effect against resistant clinical isolates. Concerning the basil and tea tree oils' activity [Table 3] against K. pneumoniae, no difference was observed between the tested resistant clinical isolates and the reference strains tested by Hammer et al. [3].
Regarding the P. aeruginosa strains and the activity of tea tree oil, a significant difference (by 2SD) was observed, with this oil performing much better against the tested resistant clinical isolates than against the reference strains tested by other researchers [3,22-24]. The antimicrobial activity of the origanum and basil oils against P. aeruginosa was poor, and our findings coincide with those reported on reference strains [3,25].
Based on the aforementioned literature data and the results of the present study, much different MIC values are recorded between the reference and the resistant clinical isolates. Studies employing reference strains show a more efficient performance of the tested essential oils; however, in the case of clinical isolates, and particularly the multidrug resistant isolates used in the present study, essential oils are less efficacious. This finding can be attributed to the strain origin rather than to the methodological differences reported by other researchers [4,13,18,26-29].
In the present study, we used multidrug resistant strains of Gram-negative bacteria isolated from hospitalized subjects. Gram-negative bacteria are considered to be more resistant to essential oils than Gram-positives [30]. This is largely attributed to the different structure of their cell wall, which is more complex in Gram-negatives, not allowing the easy penetration of antibiotics and drugs, including the phenolic compounds (e.g., thymol, carvacrol, and eugenol) present in the essential oils [31,32]. Thus, the possible mechanism of action of the essential oils and their compounds is based on their ability to disrupt the bacterial cell wall and the cytoplasmic membrane; this mode of action consequently leads to cell lysis and leakage of intracellular compounds [33]. Considering that an intact external cell envelope is a prerequisite for the bacterium's survival, protecting the cell cytoplasm from the external environment, any changes in the permeability of the cell wall and cytoplasmic membrane can influence bacterial growth. Whenever antibacterial compounds are present in the environment surrounding microorganisms, the bacteria are forced to react by altering the synthesis of fatty acids and membrane proteins to modify the permeability of the membrane [34,35]. The essential oils have the potential to alter both the permeability and the function of the membrane proteins; in particular, essential oils rich in phenolics can penetrate into the phospholipid layer of the bacterial cell wall, bind to proteins, and block their normal functions. Because of their lipophilic nature, essential oils and their compounds can influence the percentage of unsaturated fatty acids and their structure [30,36]. However, because of the variety of molecules present in plant extracts, the antimicrobial activity of the essential oils cannot be attributed to a single mechanism but to a number of diverse biochemical and structural mechanisms acting at various sites of the bacterial cell's outer and inner components, affecting the functions of the cell membrane, cytoplasm, enzymes, proteins, fatty acids, ions, and metabolites.

Figure 1: (a-e) Chromatograms of origanum oil (peak 1: carvacrol, peak 2: thymol), basil oil (peak 3: estragole), tea tree oil (peak 4: terpinen-4-ol, peak 5: p-cymene), thyme oil (peak 6: thymol, peak 7: p-cymene, peak 8: linalool), and chamomile blue oil (peak 9: bisabolol, peak 10: trans-β-farnesene).
CONCLUSIONS
A detailed examination of all the factors potentially influencing the antimicrobial activity of the essential oils would be ideal, but it is rather difficult to implement, as evidenced by the existing relevant literature. However, any additional data contribute to the increase of knowledge in the field. Concerning our study, significant differences were observed between our results and the results of other researchers who experimented with non-clinical/non-resistant isolates. Our findings indicate that the essential oils' antimicrobial activities are influenced by the strain origin (wild, reference, drug sensitive, or resistant), and this observation should be taken into consideration whenever investigating the plants' potential for developing new antimicrobials. Nevertheless, the identification of the exact compounds responsible for a true antimicrobial effect is a prerequisite in order to optimize their potential therapeutic use. Yet, microbes are very good survivors with a remarkable ability to adapt to hostile environments, such as being surrounded by antimicrobials; thus, meticulous investigation of their resistance mechanisms is necessary in order to successfully counter the emergence of antibiotic resistance. | 2018-04-03T02:11:56.109Z | 2016-05-30T00:00:00.000 | {
"year": 2016,
"sha1": "548f41ad95775bbb38843042a517f7b4f1822e59",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5455/jice.20160331064446",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cb93237b04b0feed954cafaeb6e9ef27002a40a1",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
247768587 | pes2o/s2orc | v3-fos-license | Parenting With a Kind Mind: Exploring Kindness as a Potentiator for Enhanced Brain Health
A growing body of research has suggested that high levels of family functioning—often measured as positive parent–child communication and low levels of parental stress—are associated with stronger cognitive development, higher levels of school engagement, and more successful peer relations as youth age. The COVID-19 pandemic has brought tremendous disruption to various aspects of daily life, especially for parents of young children, ages 3–5, who face isolation, disconnection, and unprecedented changes to how they engage and socialize. Fortunately, both youth and parent brains are plastic and receptive to change. Resilience research shows that factors such as engaging in acts of kindness, developing trusting relationships, and responding compassionately to the feelings of others can help lay new neural pathways and improve quality of life. Yet, little research has investigated the effects of brain healthy parental practices of kindness with pre-school aged children. The current study examines whether an interactive, parent–child kindness curriculum can serve as a potentiator for brain health as measured by resilience and child empathy levels. During a peak of the pandemic, mother participants between the ages of 26–46 (n = 38, completion rate 75%) completed questionnaires on parental resilience levels and parent-reported child empathic pro-social behaviors before and after engaging in a 4 weeks online, self-paced, kindness curriculum. Half of the group received additional brain health education explaining the principles of neuroplasticity, empathy, perspective taking, and resiliency. Mothers in both groups showed increased resilience ( p < 0.001) and reported higher levels of empathic behavior in their child ( p < 0.001) after completing the curriculum. There was no significant difference between groups. Comparison of mean resilience levels during COVID-19 to pre-pandemic general means indicated that mothers are reporting significantly lower levels of resilience as well as decreased empathetic behaviors in their children. These results support the notion that kindness is a powerful brain health booster that can increase resilience and empathy. This research study was timely and relevant for parents in light of the myriad of stresses brought about by the ongoing COVID-19 pandemic. There are broader public health implications for equipping individuals with tools to take a proactive and preventative approach to their brain health.
BACKGROUND/INTRODUCTION
The COVID-19 pandemic permeates family functioning and wellbeing, potentially leading to a significant negative impact on parents and their young children. Parents are facing imminent threats to their relationships, social support networks, and educational access for their children, leading to overwhelming feelings of worry, stress, and anxiety (Prime et al., 2020). Specifically, the parent-child relationship is of utmost concern. Recent studies suggest that parents experiencing pandemic-related fears may have difficulty managing negative emotions, which in turn affects daily life, family discord, and ultimately the parent-child relationship (Daks et al., 2020; Di Crosta et al., 2020; Prikhidko et al., 2020; Saladino et al., 2020). As such, parenting young children can be challenging in and of itself, and now parents must combat additional stressors (i.e., financial, childcare, and health) due to the pandemic.
Studies indicate that, during crises, resilience (i.e., an individual's ability to positively adapt in the face of adversity; Herrman et al., 2011) reported by women with children is considerably lower than during non-crisis times and that stress levels are reported to be exacerbated (Avery et al., 2021; Taylor et al., 2021). A recent study investigating the relationship of social stressors and parent-child engagement during the COVID-19 pandemic found that mothers and fathers who reported more social stressors were less engaged with their children, and their children exhibited more behavior problems compared to before the pandemic (He et al., 2021). Fortunately, it has been shown that when parents maintain positive, responsive styles of caregiving, they can prevent and even reverse toxic levels of stress in the home caused by adversity (Blair and Raver, 2016). Stern and Cassidy (2018) found that the parent-child bond can be strengthened through an acknowledgment of empathic pro-social behaviors such as care and concern for others, which involves the capacity to comprehend the minds of others, to feel emotions outside one's own, and to respond with kindness to others' suffering.
Furthermore, resilience research indicates that factors such as engaging in acts of kindness, developing trusting relationships, and responding compassionately to the feelings of others can help lay new neural pathways and improve quality of life (Haslip et al., 2019). Kindness, defined as actions intended to benefit others (Curry et al., 2018) and considered as a pro-social relational construct, supports an intra and interpersonal focus on how one treats others, takes care of oneself, and interacts with the world around them. As such, parent-driven kindness interventions may prove fruitful in promoting resilience, as parents have the influence and opportunity to become the first teachers and models for acts of kindness with their children.
The global pandemic has put a spotlight on brain health and the great need for resources, education, and training. Brain health is defined as a state of performing at your personal best and thriving in your life context, not simply the absence of disease. The term brain health as described by Chapman et al. (2021) holistically encompasses the brain's functions, which include aspects of cognition (ex: problem solving, innovation, processing speed, and memory), daily life (ex: responsibilities, sleep, nutrition, and exercise), wellbeing (ex: resilience, quality of life, and mood), social interaction (ex: empathy, kindness, and social support), and neural components (ex: brain blood flow and connectivity). In contrast, mental health is a term that more narrowly focuses on psychological and emotional wellbeing. Recent work has highlighted how the different components of brain health influence each other, and how strengthening skills in one area may also compensate for areas of weakness. A case study showed that after completing an online cognitive intervention, the participant felt more satisfied with her social networks and saw improvement in measures of wellbeing, which included increased resilience and decreased stress (Chapman et al., 2021). Following this line of thinking, the current study seeks to understand how a kindness intervention may improve resilience.
With social distancing and stay-at-home mandates in effect, digital tools that are easily accessible and cost effective offer a solution to help families navigate the stresses of the COVID-19 pandemic. Studies have demonstrated the value of digital interventions in allowing various populations, including families, to access evidence-based guidance on demand and through a modality (web-based) that they are already comfortable using to seek mental, behavioral, and brain health guidance (Lund et al., 2018; Caulfield and George, 2020; O'Dell et al., 2021). Supplementary to intervention, other tools, such as self-paced, at-home brain science education, could offer additional insight for parents seeking to better understand their own brain health. Yet, currently, there are limited data on the effects of brain science education on the resilience levels of parents with young children amid a pandemic. This study seeks to understand whether an online kindness training may increase resilience in parents of preschool-aged children and promote empathic pro-social behavior in their children, and whether parents find the kindness activities relevant.
AIMS AND HYPOTHESIS
Given the timely need for at-home parenting programs that support the social, emotional, and relational emergence of developing young minds, collaborators from the University of Texas at Dallas Center for BrainHealth, alongside the Children's Kindness Network, based in TN, had a specific interest in the impact of Kind Minds with Moozie, an online kindness training for parents of preschoolers. The aim of this study is to understand if practicing the pro-social skills of kindness may (1) affect resilience in parents and (2) affect empathic pro-social behavior in preschool-aged children.
The hypothesis for this study was that (1) parents who engage with Kind Minds with Moozie will increase resilience and observe increased empathic pro-social behaviors in their child, (2) additional brain science education for the parents would contribute to greater gains in resilience, and (3) parents would find kindness activities relevant during interactions with their preschool-aged children.
Procedure and Study Design
Participants were randomized into either the Kindness Only condition, or the Kindness with Brain Science condition via a simple random sample process. All individuals provided written informed consent to participate, and all procedures were approved by and carried out in accordance with the University of Texas at Dallas Institutional Review Boards, number 21-104. The study was conducted entirely online from April to July 2021. Recruitment was open to both mothers and fathers; however, most participants who enrolled in the study were mothers. One father enrolled in the study but did not complete the online modules and was considered loss to follow-up and was not included in the analysis.
Participants
Participants were recruited for the study through professional networks and social media posts, primarily in online groups for parents. Parents with children (three to 5 years of age) were screened to determine if they qualified for the study. Participants who met all inclusion and failed to meet exclusion criteria were enrolled in the study. Inclusion criteria consisted of: the parent being 18 years of age or older, having access to the Internet (including access to a computer/smartphone/ tablet), identifying as the primary caregiver/parent within the target child age range, and being a proficient English speaker. If the parent agreed, they were provided with an electronic consent form explaining the procedures for the study and provided written consent. Thirty-eight mothers with children between the ages of 3 years, 0 month, and 5 years, 11 months (M = 3.97 years; male = 61%, female = 39%) participated in the study. The study included mothers between the ages of 26 and 46 (M = 36.35 years) who were relatively highly educated (25% up to Bachelor's, 55% Master's, and 15% Doctorate). See Table 1 for a breakdown of ethnicity and gender for the parent participants and their children.
MOOZIE TEACHES KINDNESS CURRICULUM
The Moozie Teaches Kindness curriculum for preschool-aged children, developed by the Children's Kindness Network, includes do-at-home kindness activities that utilize music, art, and creativity to move methodically from the center of the child's circle, him/herself, to the ever-widening rings of awareness of others, animals, the environment, and nature (Children's Kindness Network, 2013). Moozie, an ambassador of kindness, is presented as a lovable, gentle, digital cow to whom children can easily relate and from whom they learn valuable, lifelong lessons. The instructional design of the Moozie Teaches Kindness curriculum was developed to meet National Association for the Education of Young Children (NAEYC) standards for Social-Emotional and Cognitive Development with the target age group being 3 to 7 (NAEYC, 2019).
Researchers selected and adapted the Moozie Teaches Kindness curriculum for this study based on its applicability to parents of preschool-aged children and focus on pro-social behaviors using the four kindness pillars that are paramount to brain health: Kindness to Others, Kindness to Self, Kindness to Animals, and Kindness to Earth. Each kindness pillar teaches parents how they can contribute to the development of empathic pro-social behavior of their child through parent-led activities which promote recognizing and naming feelings of self and others, sharing, taking turns, helping others, saying kind words, interacting with pets and/or outdoor animals, and being kind to nature in positive (recycling and conserving) and negative ways (littering and wasting).
Kind Minds With Moozie Protocol
Kind Minds with Moozie was a randomized pilot intervention trial designed to examine the benefits of an online kindness training protocol for parents and their preschoolers. Accessed via the parent's electronic device (laptop, phone, tablet, or desktop computer), parent participants completed five online kindness modules, each designed to take less than 10 min to complete. Parents were asked to click through a series of written and pictorial step-by-step kindness activities to be later implemented when interacting with their children (Tables 2, 3). Participants in the study were randomly assigned to one of two conditions and subsequently completed pre-test measures, online modules with kindness content and post-module surveys, and then post-test measures within 1 week of completion of the last online kindness module.
Kindness Only Condition
The first kindness only condition (n = 17) included an overview module introducing Moozie as the ambassador of kindness and setting a learn, do, and reflect pedagogy. This pedagogy introduced parents to the pillars of kindness (learn), described steps to and importance of including kindness in daily parenting activities (do), and prompted parents to consider the likelihood of integrating a kindness focus into their parenting style (reflect). Each of the modules provided graphics, clickables, and simple activities to engage parents. On average, it took parents 29.25 min to complete all five modules over a period of 4 weeks.
Kindness With Brain Science Condition
The second kindness with brain science condition (n = 21) included the same overview and online kindness modules as the first condition, as well as a brief brain science component during the learning stage. Each brain science learning component consisted of 2-3 additional paragraphs of reading material describing empathy, resilience, neuroplasticity, and flexibility. This additional brain science was provided to explain the importance, "the why" of each concept to overall parental brain health. Participants in this condition were not informed that they would be receiving this additional content. On average, it took parents 33.14 min to complete all five modules over a period of 4 weeks.
Measures
Resilience was measured using the self-report 25-item Connor-Davidson Resilience Scale (CD-RISC; Connor and Davidson, 2003). The scale has been developed and tested as a measure of the degree of resilience and has promise as a method to screen people for high, intermediate, or low resilience. The total score can range from 0 to 100, and the higher the score obtained, the greater the participant's resilience. Each parent rated their own stress-coping ability on a 5-point scale (0-4), with higher scores reflecting greater resilience in areas such as an individual's ability to adapt when changes occur, staying focused and thinking clearly when under pressure, and bouncing back after injury, illness, or hardship. Normative data for the CD-RISC indicate that the US general population median score is 82, with the first quartile (Q1: 0-73) describing the score range for the lowest group (lowest 25% of the population), i.e., the least resilient, the second (Q2: 74-82) and third (Q3: 83-90) the intermediate scores, and the fourth (Q4: 91-100) describing the highest or most resilient, i.e., above 75% of the population. This measure is found to have very good internal consistency as measured by Cronbach's α (α = 0.93). Empathic pro-social behavior was measured using the National Institutes of Health (NIH) Toolbox Empathic Behaviors Survey CAT Ages 3-13 v2.0 (EBS), a parent-report measure for children ages 3 through 12 that assesses parent perceptions of children's pro-social behaviors using a 10-item fixed-length form. The EBS is a specific test within the NIH Toolbox-Emotion-Social Relationships-Positive Social Development battery (Salsman et al., 2013). This parental proxy scale was developed to assess early behavioral indicators of positive social development (i.e., empathic pro-social behaviors). Each item administered has a 5-point scale with options ranging from never to always. An example of a parent's perception of the child's empathic pro-social behavior would be "In the past month, please decide: How often your child offers to help other children who are having difficulty." Higher scores are indicative of more parent-reported child pro-social behaviors, with a normative mean T-score of 50. This measure is found to have very good internal consistency as measured by Cronbach's α (α = 0.90).
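As a small worked example of the scoring rules quoted above, the sketch below sums 25 item ratings (0-4) and maps the total onto the normative quartile bands; the function and variable names are illustrative and not part of the CD-RISC instrument.

```python
# Sketch of CD-RISC totalling and the US normative quartile bands quoted
# above. Helper names are illustrative, not part of the instrument.

QUARTILES = [
    (73, "Q1: least resilient (lowest 25%)"),
    (82, "Q2: intermediate"),
    (90, "Q3: intermediate"),
    (100, "Q4: most resilient (above 75%)"),
]

def cd_risc_total(item_scores):
    """Sum 25 items, each rated 0-4, for a total of 0-100."""
    assert len(item_scores) == 25 and all(0 <= s <= 4 for s in item_scores)
    return sum(item_scores)

def quartile_band(total):
    for upper, label in QUARTILES:
        if total <= upper:
            return label
    raise ValueError("total must be between 0 and 100")

total = cd_risc_total([3] * 25)            # hypothetical respondent
print(total, "->", quartile_band(total))   # 75 -> Q2: intermediate
```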
Relevancy of the program was measured using a 5-point Likert scale (1 = strongly disagree and 5 = strongly agree) that parents completed after each of the online kindness modules. These five three-question surveys asked parents to reflect and rate their experience in terms of content comprehension, relevance to parenting style, and likelihood of implementing the kindness practice into daily life (Table 4). This relevancy survey was developed by the Kind Minds researchers to examine the saliency of this training for parents. Examples of the relevance questions include "I understand how being kind to others plays a role in having a kind mind" (comprehension), "I find the concept of compassion relevant to my parenting style" (relevance), and "I will practice modeling and expressing empathy with my child" (likelihood).
RESULTS
To test the hypotheses that parents who engage with Kind Minds with Moozie would increase resilience and observe increased empathic pro-social behaviors in their child, a paired sample t-test was conducted. Secondly, a two-sample t-test was conducted to determine the effects of additional brain science education on resilience levels. Lastly, to test the hypothesis that parents would find kindness activities relevant during interactions with their preschool-aged children, post-training participant ratings were collected and averaged. All statistics were done in SPSS (IBM Corp., 2019).
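For readers wishing to replicate the analysis logic outside SPSS, the sketch below runs the paired and two-sample t-tests described above in Python with simulated data; the values are illustrative, and only the group sizes (17 and 21) follow the study.

```python
# Illustrative re-creation of the tests described above with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(69, 10, size=38)        # simulated baseline CD-RISC totals
post = pre + rng.normal(7, 5, size=38)   # simulated post-training totals

t_paired, p_paired = stats.ttest_rel(pre, post)        # whole-group pre vs post
print(f"paired t-test: t = {t_paired:.2f}, p = {p_paired:.4f}")

gain_kind = post[:17] - pre[:17]         # kindness only (n = 17)
gain_brain = post[17:] - pre[17:]        # kindness + brain science (n = 21)
t_ind, p_ind = stats.ttest_ind(gain_kind, gain_brain)  # between conditions
print(f"two-sample t-test: t = {t_ind:.2f}, p = {p_ind:.4f}")
```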
Toward completion of the study activities, researchers recommended that parents complete one online kindness module per week and activated a 5-day time lapse between the completion of one module and access to the next, thereby allowing sufficient time for practice of the kindness activities with their children and subsequent completion of the relevancy surveys. On average, participants took 34.7 days to complete the study from pre-test date to post-test date.
Resilience
At baseline (T1), both conditions rated low levels of resilience, with both groups falling within the first quartile (Q1: 0-73). Post-training (T2), mothers in both conditions increased their mean scores to an intermediate level of resilience, falling within the second quartile (Q2: 74-82); the kindness with brain science condition reported slightly higher levels of resilience than the kindness only condition (Table 5). A paired sample t-test showed a whole-group significant increase in resilience (p < 0.001) after completing the online kindness modules (Table 6).
Empathic Pro-Social Behavior
Prior to the training (T1), mothers reported child empathic pro-social behavior levels below expected norms (T < 50). Upon post-test (T2), mothers in both groups rated their perception of their child's empathic pro-social behaviors as significantly increased, with the kindness only condition outperforming the kindness with brain science condition (Table 5). A paired t-test revealed that mothers in both groups reported observing higher levels of empathic pro-social behavior in their child (p < 0.001) after completing the online kindness modules (Table 6).
Brain Science
A two-sample t-test found no significant differences in CD-RISC between the kindness only and kindness with brain science conditions (Table 7).
Relevancy
The mean relevancy scores in both groups revealed that mothers reported overall high relevancy after completing each of the online kindness modules (Table 8). Responses ranged from 4.69 to 4.91 on a 5-point Likert scale (1 = strongly disagree and 5 = strongly agree).
DISCUSSION
The Kind Minds with Moozie research study sought to understand if an online kindness curriculum could be a potentiator for resilience and empathic pro-social behavior during times of stress brought about by the COVID-19 pandemic. One aim of this study was to integrate easy-to-follow brain science education with kindness activities delivered digitally. We hypothesized that parents who engaged in the online kindness activities with their preschool-aged children would boost parental levels of resilience and parent-reported child empathic pro-social behavior levels. Results showed a whole-group increase in the resilience levels of mothers and in mother-reported empathic pro-social behaviors in their children. This study supports the notion that practicing kindness can be a useful tool to help mothers become more resilient. The ability to overcome difficulties and cope with stress is critical, especially during a global pandemic. As such, changes in resilience, a personality trait aimed at complying with environmental changes and stress, may be a beneficial factor to consider (Block and Block, 1980). There is a need for additional research and salient early interventions for parents, including both mothers and fathers, as resilience can be a potentiator for improved mental, physical, and brain health.

Table: Parent-led kindness activities to be completed with preschoolers, including kindness to animals (placing a bowl of water outside for animal friends; giving hugs and love to a pet or stuffed animal) and kindness to nature ("Picture Perfect" drawings or photos of the outdoors, a "Trash or Treasure Hunt", and interacting with a digital garden).

Table: Post-module survey in which participants rated their comprehension, the relevancy, and the likelihood of practicing each concept (e.g., "I will practice resiliency with my child") at the end of each online kindness module.
Given that the baseline resilience levels of mothers in this study fell within the bottom 25% of the population (m = 69), there was opportunity for growth and intervention. One possible explanation is that the pandemic contributed to feelings of worry and fear, which may then affect mothers' resilience levels. Similar to the findings of our study, Mariani Wigley et al. (2020) used the same measure of resilience and investigated the support role of parents during the COVID-19 emergency. Results showed that parents were also found to have a low parental resilience score (m = 63.78) when their children were on average 8 years of age. Compared to the mothers of preschoolers in this study, who reported higher resilience levels before and after the kindness training (m = 69, m = 75.9), our results suggest that maternal resilience levels may fluctuate not only due to environmental stressors, but also depending on child age. Therefore, implementing a kindness training during the earlier years of childhood may serve as a buffer against declining parental ability to adapt and bounce back in the face of stressful situations.
Prior to receiving this online kindness training, mothers in both the kindness only and kindness with brain science conditions reported child empathic pro-social behaviors at levels lower than expected norms (T < 50). Upon completion of the online kindness modules, a significant increase in whole-group child empathic pro-social behaviors was reported (m = 48.30); although the scores increased, they were still slightly below the norm. One potential factor for consideration is that these low scores may be due to the isolating nature of the COVID-19 pandemic, as children might have been restricted from engaging in social-emotional learning activities outside of the home or have had limited social engagement with peers, hindering the natural development of pro-social behavior through activities involving same-aged play, peer modeling, and social communication.
Regarding differences between the kindness only and the kindness with brain science conditions, the authors hypothesized that both groups would demonstrate gains in resilience and parent-reported empathic pro-social behavior, and that the participants in the kindness with brain science condition would show greater increases in parental resilience. Analysis revealed that both groups did increase in resilience and parent-reported empathic pro-social behavior in their children; however, there was no significant difference between groups. One potential reason for this finding may be that the measures were not well suited to capture the impact of the brain education provided. We did not include application questions for the brain science information provided. Research has shown that synthesis (gist reasoning) is an important process to abstract meaning from complex information and that gist reasoning can predict performance in daily function (Vas et al., 2015). Providing information alone may not have been enough to create measurable changes in resilience. Future studies should investigate the possibility of making the brain science educational aspect more thorough, with specific and direct applications.

Mothers reported high relevancy upon completion of the online kindness modules. Study participants reported that they found Kind Minds with Moozie to be comprehensible, relevant, and practical. Additionally, on average, parents in the kindness only condition spent 29.25 min and parents in the kindness with brain science condition took slightly longer, at 33.14 min, to complete the entire course over the 4 weeks. Given that the additional brain science education should have resulted in only a brief increase in the amount of time taken to complete each module, this small difference is expected. Nonetheless, with the time-consuming demands placed on parents during the pandemic, it is promising that this brief, online kindness training can be completed in less than 1 h. Furthermore, the results of this study suggest that mothers value practicing and instilling pro-social skills, such as being kind to others, oneself, animals, and nature, in their children, and that kindness activities which foster parent-child interaction are well received.
Limitations and Future Directions
This research study has several strengths and limitations related to the study design. Due to the limitations of the study, the results must be interpreted with caution. The two conditions of the study allowed for examination of the added benefits of brain science to online kindness activities; however, the study could benefit from a third condition including parents who would not receive the brain science. While a control group which would receive materials after the post-intervention measurement of resilience and empathic pro-social behaviors could have provided additional insights into the effectiveness of the online kindness training, the research team prioritized delivering the training in a timely manner due to the pandemic. This online kindness training was relevant for mothers considering the myriad of stresses and demands brought about by the ongoing COVID-19 pandemic. The digital design of the study was an efficient method for researchers to provide study activities to participants during a period of physical and social distancing, although participant feedback on the accessibility and ease of use of the technology was not collected. Additionally, the participant feedback surveys gathered insight regarding the comprehension, relevancy, and practicality of the kindness activities in daily life; however, the feedback did not address parent engagement levels or the frequency of practicing the kindness activities with their children. These aspects could be assessed in future studies to gain additional information which would be useful for the implementation of the program and the evaluation of its feasibility. In regard to the participants, it was a homogeneous group, as more female participants enrolled in the study, and many came from similar educational and socioeconomic levels; therefore, the data collected were limited in representation to mothers of preschool-aged children. The study design could be strengthened by adding a follow-up time point to assess maintenance effects of gains in parent resilience and child pro-social behavior. Further exploration of how a more structured cognitive training combined with daily habits may affect greater change in parent resilience levels may be of interest in a larger-scale investigation. Continued effort to expand and enroll a third control group would lend itself to a more robust analysis of the impact and effects of brain science education on resilience, empathy, and cognition. Future recruitment processes should include a more focused diversification so that multiple demographics and both maternal and paternal figures are represented. Overall, the study findings serve as a model for leveraging a neuroscience-based online kindness curriculum to empower parents with strategies to combat stress exacerbated by these unique times. There are broader public health implications for equipping individuals with tools to take a proactive and preventative approach to brain health, thereby influencing the social, academic, and neural development of the family unit (Feldman, 2015). The chronic and cumulative effects of stress on the brain can contribute to adverse childhood experiences and have been linked to parental resilience as a mediator. Borja et al. (2019) suggest that the resilience of some parents can prevent the heightened exposure of their children to adversities.
Continued studies should further investigate specific methods and protocols utilizing kindness and resilience building activities that promote parent-child interaction and relational development as a foundation to creating happier and more brain healthy families.
CONCLUSION
Identifying effective ways to reduce stress and increase resilience has become a mandate for people from a myriad of life, age, professional, and socioeconomic backgrounds, and especially for parents and their young children. Kindness is a familiar construct that goes beyond educational, psycho-social, and cultural boundaries; however, many current practices do not involve a curriculum devised specifically for implementation by parents of preschool-aged children. The developing mind is instrumental in instilling strong neural pathways that promote resilience and empathic pro-social behavior. Kind Minds with Moozie proved to be a valuable tool, providing structured support and didactic instruction to assist parents in supporting and promoting child empathic pro-social behavior, and it can be useful in support interventions for families exposed to adverse events as well as public health crises. Specifically, Kind Minds with Moozie could be used to plan interventions for caregivers (e.g., teachers and parents) aimed at improving resources to cope with life stressors. Thus, the present results highlight the significance of designing digital therapeutic tools and kindness training designed to improve both parental and child wellbeing.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by University of Texas at Dallas. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
MJ co-designed the study, conducted recruitment, created the online modules, enrolled and managed participants, scored and interpreted data, and wrote the manuscript. JF co-designed the study, assisted with recruitment, and manuscript preparation and edits. KT assisted with data interpretation and manuscript preparation and edits. AM assisted with participant screening and recruitment, and manuscript edits. All authors contributed to the article and approved the submitted version.
FUNDING
The Heppner Brain Research in Children's Kindness supported by HERO and the Beneficient Trust Company funded the Kind Minds study. We are deeply grateful to HERO and the Beneficient Trust Company's financial support. | 2022-03-29T14:10:14.620Z | 2022-03-24T00:00:00.000 | {
"year": 2022,
"sha1": "a6867cbd50fbfbef8fa6cbd6fe122e9ac5f90f37",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "a6867cbd50fbfbef8fa6cbd6fe122e9ac5f90f37",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265571548 | pes2o/s2orc | v3-fos-license | Urban Advanced Mobility Dependability: A Model-Based Quantification on Vehicular Ad Hoc Networks with Virtual Machine Migration
In the rapidly evolving urban advanced mobility (UAM) sphere, Vehicular Ad Hoc Networks (VANETs) are crucial for robust communication and operational efficiency in future urban environments. This paper quantifies VANETs to improve their reliability and availability, essential for integrating UAM into urban infrastructures. It proposes a novel Stochastic Petri Nets (SPN) method for evaluating VANET-based Vehicle Communication and Control (VCC) architectures, crucial given the dynamic demands of UAM. The SPN model, incorporating virtual machine (VM) migration and Edge Computing, addresses VANET integration challenges with Edge Computing. It uses stochastic elements to mirror VANET scenarios, enhancing network robustness and dependability, vital for the operational integrity of UAM. Case studies using this model offer insights into system availability and reliability, guiding VANET optimizations for UAM. The paper also applies a Design of Experiments (DoE) approach for a sensitivity analysis of SPN components, identifying key parameters affecting system availability. This is critical for refining the model for UAM efficiency. This research is significant for monitoring UAM systems in future cities, presenting a cost-effective framework over traditional methods and advancing VANET reliability and availability in urban mobility contexts.
Introduction
Vehicular Ad Hoc Networks (VANETs) represent a significant advancement in wireless communication technology. These networks are pivotal in enhancing vehicular connectivity, thereby fostering a safer and more efficient environment for all traffic participants. GSMA estimates indicate that approximately 20% of the global vehicular fleet of roughly 1.5 billion vehicles is internet-connected, contributing substantially to data generation [1]. Projections suggest an annual growth rate of around 17%, resulting in 367 million connected vehicles by 2027.
Typically, a VANET incorporates a static infrastructure component, known as a Roadside Unit (RSU), positioned alongside thoroughfares. Vehicles interface with this infrastructure through an onboard unit (OBU) [2]. It is presumed that each vehicle is outfitted with sensors to collect environmental data. The OBU processes these data and engages in communication with other vehicles or RSUs, either directly or indirectly. Additionally, RSUs have the capability to connect to the internet, thereby facilitating vehicular access to various services [3].
VANETs find application in diverse traffic-related domains, encompassing areas such as network security [4], traffic management [5], and parking space optimization [6]. Nonetheless, managing the Quality of Service (QoS) within VANETs poses a multifaceted and critical challenge. Key challenges include ensuring network availability and reliability, addressing latency, and managing traffic, as well as grappling with issues of poor connectivity, limited flexibility, and scalability constraints. The physical distance from Edge processing and storage centers can also introduce considerable communication delays [7][8][9].
The literature review reveals a paucity of studies employing Stochastic Petri Nets (SPN) in the context considered in this research. The proposed model incorporates crucial factors such as availability and reliability, utilizing VM migration across the network to link various RSUs and implementing an Edge Computing-based data processing system. This model facilitates a comprehensive analysis of the key parameters, aiming to refine architectures to address the integration challenges of VANETs with Edge Computing. The principal contributions of this research are as follows:
• Development of an SPN model to assess the reliability and availability of VANET-based VCC architectures, factoring in stochastic elements to emulate realistic scenarios. This aims to enhance the robustness of vehicular network environments, ensuring dependable and consistent performance.
• Execution of case studies using the proposed models, offering a blueprint for other researchers in applying these models. These case studies focus on identifying and analyzing primary parameters influencing system availability, providing preliminary insights into the critical variables affecting system reliability, and facilitating enhancements and optimizations.
• Conducting a sensitivity analysis of the SPN model components, identifying parameters with significant influence on system availability. This analysis enhances the understanding of the model and aids in its optimization.
The structure of this work is organized as follows: Section 2 introduces essential concepts foundational to this study. Section 3 describes the architecture forming the basis of the model. Section 4 details the developed SPN model. Section 5 presents two sensitivity analyses performed on the SPN model. Section 6 elaborates on the outcomes of the case study. Finally, Section 7 concludes the paper and outlines directions for future work.
Background
This section succinctly delineates the core concepts fundamental to this research, primarily focusing on SPNs. An elucidation of experimental design and sensitivity analysis will follow. These concepts are critical for comprehending the methodologies and techniques employed in the formulation of this article. Understanding these foundational principles is essential for appreciating the intricacies of the proposed models and analyses within the context of this study. Related works for the topics discussed above are provided in Table 1.
Stochastic Petri Nets
A Petri Net is a combined graphical and mathematical representation that effectively models systems and processes undergoing continuous changes and concurrent operations. This modeling is particularly valuable for systems characterized by simultaneous multi-action scenarios [10]. The present study employs a sophisticated form of Petri Nets, termed SPNs, which are distinguished by their ability to incorporate randomness and probabilistic behaviors [11]. SPNs comprise three primary elements: places (delineating the system's states or conditions), transitions (denoting potential system events or actions), and tokens (symbolizing entities within the system, each potentially associated with a specific resource) [12].
The functionality of SPNs hinges on two essential types of connections: (I) Input Arcs, which are preconditions for triggering transitions, necessitating specific tokens' presence at designated locations; and (II) Output Arcs, which dictate the subsequent transferal of tokens upon a transition's activation [13]. Transitions in SPNs are categorized into timed transitions, obeying stochastic distributions [14], and immediate transitions that occur instantaneously upon activation. Additionally, inhibitor arcs play a pivotal role in controlling token flow between locations, with tokens being allocated to particular places within the system.
Crucially, SPNs utilize guard conditions to define the specific prerequisites for transition activations. These conditions often incorporate random variables, thus introducing a probabilistic dimension. The fulfillment of these guard conditions triggers transitions according to predefined rate functions, effectuating a change in the system's state [15]. Figure 1 visually explicates the core components of an SPN model, providing a comprehensive understanding of its structure and functionality.
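To make this description concrete, the following minimal Python sketch encodes an SPN as places holding tokens, timed transitions with exponential firing rates, and guard conditions evaluated on the current marking, fired under race semantics. All names and numeric rates (RSU_UP, 1/168.0, etc.) are illustrative placeholders, not the paper's calibrated model.

```python
import random

# Places and their token counts (the "marking"); names are illustrative.
marking = {"RSU_UP": 3, "RSU_DW": 0}

# Timed transitions: (name, input arcs, output arcs, rate, guard condition).
transitions = [
    ("RSU_MTTF", {"RSU_UP": 1}, {"RSU_DW": 1}, 1 / 168.0, lambda m: True),
    ("RSU_MTTR", {"RSU_DW": 1}, {"RSU_UP": 1}, 1 / 2.0, lambda m: m["RSU_DW"] > 0),
]

def enabled(m, t):
    """A transition is enabled when its input places hold enough tokens
    and its guard condition evaluates to true on the current marking."""
    _, inputs, _, _, guard = t
    return all(m[p] >= n for p, n in inputs.items()) and guard(m)

def fire_one(m, clock):
    """Race semantics: sample an exponential delay for each enabled
    transition and fire the one that would occur first."""
    candidates = [(random.expovariate(t[3]), t) for t in transitions if enabled(m, t)]
    if not candidates:
        return None  # deadlock: nothing can fire
    delay, (_, inputs, outputs, _, _) = min(candidates, key=lambda c: c[0])
    for p, n in inputs.items():
        m[p] -= n  # consume tokens along input arcs
    for p, n in outputs.items():
        m[p] += n  # produce tokens along output arcs
    return clock + delay

clock = 0.0
while clock is not None and clock < 1000.0:
    clock = fire_one(marking, clock)
print(marking)
```

Tools such as Mercury solve such nets analytically or by simulation; this sketch only shows the underlying mechanics of markings, guards, and stochastic firing.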
Sensitivity Analysis with DoE
Design of Experiments (DoE) is an extensively utilized methodology in research and development for enhancing processes, products, and systems. It entails the meticulous planning and execution of controlled experiments to gather pertinent and substantial data [16]. The initial step in DoE involves defining the experiment's objective, followed by identifying the variables or factors that could influence the outcome. Subsequently, an experimental plan is formulated, which includes determining the levels for each factor and designing the experiments to extract significant insights. The execution of the experiment is aligned with this plan, leading to the collection and statistical analysis of the results. DoE equips system designers with the ability to discern the most influential variables, understand their interactions, and fine-tune the conditions to achieve optimal outcomes with minimal experimental iterations [17]. Complementing DoE, sensitivity analysis serves as a pivotal technique for examining how variations in the input parameters or model attributes influence the outputs or results [18]. This analysis is essential for assessing the robustness of a system or model in the face of uncertainties or changes in parameters [19].
Through sensitivity analysis, it becomes feasible to identify which variables significantly affect outputs and which have lesser impacts, thereby guiding the allocation of resources and efforts towards efficient system optimization. Both DoE and sensitivity analysis play crucial roles in research, engineering, and strategic decision making, providing pathways to more effective solutions, resource and time conservation, and the acquisition of valuable insights for the enhancement of processes and systems. This efficiency is achieved as they necessitate fewer experiments to glean significant information [20,21].

Simulation-Based Methods

In the initial category of research, simulation served as the primary method for evaluation. The study by [22] focused on assessing the performance of Medium Access Control (MAC) protocols within VANETs. Ref. [23] introduced a threat-oriented authentication strategy designed to bolster secure communication in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) interactions, utilizing a combination of encryption keys. The research conducted by [24] highlighted deficiencies in the 802.11 system, particularly the absence of backoff and binary exponential retransmission mechanisms, which were found to adversely affect the QoS during periods of intense traffic.
Further, Ref. [25] proposed an innovative mobile agent migration mechanism. This mechanism, rooted in network location analytics, was employed to simulate a VANET environment, exploring its practical applications. In parallel, the studies by [26,27] utilized the TCP Context Migration Scheme (TOMS), a novel approach aimed at enhancing data services within vehicular networks. This scheme entailed the proactive establishment of TCP connections, managed by a mobile TCP proxy that assumed the role of a cluster leader.
Lastly, the investigation by [28] delved into the application of Vehicle Edge Computing (VEC) alongside conventional anomaly detection techniques. This approach was targeted at identifying and quantifying message loss, thereby evaluating the extent of fault coverage within these networked systems.
Measurement-Based Methods
In the second category, the selected studies employed measurement as their primary evaluation methodology. Ref. [29] developed a container-based virtualization architecture, facilitating dynamic migration within ad hoc vehicular networks, particularly in VANETs. This innovation aimed to enhance flexibility and responsiveness in these networks. Meanwhile, Ref. [30] undertook the task of classifying security requirements, identifying key characteristics, and delineating the challenges related to security within similar VANET scenarios.
In another significant contribution, Ref. [31] introduced a seamless transition system that leverages the capabilities of SDN and Media Independent Handover (MIH). This system was designed to dynamically modify the topology of VANETs, enhancing their adaptability and efficiency. Additionally, Ref. [32] proposed a novel concept termed Broadcast as a Service (BaaS), specifically tailored for VANETs. This solution aimed to efficiently disseminate data across networked vehicles utilizing cloud computing technologies. Lastly, the work of Ref. [35] applied SDN to improve the management and migration of microservices in Vehicular Fog Networks (VFNs), taking into account the dynamic nature of vehicular nodes.
Modeling-Based Methods
The third classification encompasses studies that utilized modeling as their core evaluation technique. The research of [33] presented a model to describe the connectivity patterns among vehicles on highways. This model is crucial for the development of protocols and applications in VANETs, tailored to their specific connectivity traits. Similarly, [34] adopted an analytical approach rooted in SPN theory. This approach was used to assess the infrastructure of VANETs, taking into consideration the mobility of the network and its inherent limitations. Notably, this study modeled the service fees of RSUs using exponential distributions.
Contributions of This Work in Relation to Others
This study introduces an SPN model that evaluates the impact of VM migration across multiple RSUs within vehicular networks. A review of the literature reveals the scarcity of studies addressing this specific scenario within the VANET context. The adoption of such an approach is economically advantageous, as it enables the analysis of availability and reliability without necessitating a physical infrastructure for testing.
The review further indicates that systems modeling, as employed in this study, provides a more predictive and comprehensive understanding compared to measurement and simulation methods. This is achieved by simplifying the representation of critical system elements. In contrast, measurement and simulation tend to rely on observational data, which may not fully encapsulate the complexity of the system. Most of the reviewed studies focused primarily on performance evaluation, with limited attention to metrics such as availability, reliability, or downtime. Moreover, there is a noticeable gap in studies exploring the cooperation between multiple RSUs and the use of sensitivity analysis to determine the impact of Mean Time To Failure (MTTF) and Mean Time To Repair (MTTR) parameters on system performance.
Evaluated Architecture
In this section, the envisaged architecture for integrating VANETs is elucidated. The foundational scenario, as illustrated in Figure 2, encompasses an array of RSUs, with their respective coverage zones depicted as green and red circles. The operational dynamics of this architecture are as follows:
• (1) Active RSU Coverage and Vehicle Interaction: Vehicles in transit enter the coverage area of active RSUs (represented by green circles), wherein these RSUs facilitate communication and gather data from the vehicles.
• (2) Response to RSU Failure: In the event of an RSU malfunction, leading to a disruption in data collection, a contingency protocol is activated.
• (3) VM Migration for Uninterrupted System Availability: To ensure continued system functionality, the data held by the failed RSU's VMs are allocated and transferred to the subsequent RSU within the network.
• (4) Data Management: Subsequent to collection, all data are transmitted to an Edge Server for storage and further processing.
The RSUs in this architecture are equipped with advanced communication technologies, potentially including 5G [36] or LoRa [37], enabling interaction with vehicular systems. These units are strategically placed along roadways to form a dependable communication infrastructure, essential for the efficient operation of the VANET system. This arrangement guarantees a seamless data flow and operational continuity of the system, even amidst individual RSU failures, thereby augmenting the reliability and resilience of the VANET infrastructure.
In this segment, the technical composition of the VANETs is detailed, emphasizing the integration of On-Board Units (OBUs) in each vehicle. These OBUs are imbued with communication capabilities, enabling the transmission and reception of messages. Additionally, they are outfitted with GPS tracking devices, facilitating the sharing of precise, real-time locational data [38]. The infrastructure is based on the assumption that all RSUs maintain a connection to a private Edge server via a high-speed wireless link, such as 5G, which offers faster data transmission speeds and reduced latency, key features for system efficiency [39].
The selection of these technologies plays a significant role in influencing the overall system availability [40]. The choice of communication technology emerges as a vital consideration in the model's design process. The architecture's design is inherently scalable, allowing for the integration of a variable number of RSUs to meet the specific requirements of the deployment area. The depicted scenario in the figure showcases four groups of RSUs, but this configuration can be dynamically adjusted to suit varying demands.
A key attribute of this architecture is its fault detection capability in RSUs. When an RSU is compromised, indicated by red in Figure 2, it triggers the migration of VMs to the next cluster of RSUs. This migration is essential for maintaining operational continuity and ensuring system availability, thereby minimizing disruptions in communication and data processing.
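The failover behavior just described can be sketched as a simple control loop. The snippet below is an illustrative approximation of that logic, not the paper's implementation; all names (Rsu, migrate_vms, the VM labels) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Rsu:
    name: str
    healthy: bool = True
    vms: list = field(default_factory=list)  # VMs currently hosted

def migrate_vms(failed: Rsu, cluster: list[Rsu]) -> None:
    """Move all VMs off a failed RSU to the next healthy RSU in the cluster,
    mirroring step (3) of the architecture's operational dynamics."""
    target = next((r for r in cluster if r.healthy and r is not failed), None)
    if target is None:
        raise RuntimeError("no healthy RSU available; system unavailable")
    target.vms.extend(failed.vms)
    failed.vms.clear()

cluster = [Rsu("RSU1", vms=["vm1", "vm2"]), Rsu("RSU2"), Rsu("RSU3")]
cluster[0].healthy = False  # simulate the fault detector flagging RSU1
migrate_vms(cluster[0], cluster)
print([(r.name, r.vms) for r in cluster])
```

The design choice worth noting is that migration targets the next healthy unit rather than redistributing VMs, which matches the sequential "subsequent RSU" behavior the architecture describes.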
Data processing in this system is executed at the Edge (Edge Computing), which enhances communication efficiency and reduces latency. The proximity of RSUs to the vehicles permits a portion of the data processing to occur locally, thus enhancing the system's real-time responsiveness. The proposed architecture aims to establish an effective communicative link between vehicles and cloud infrastructure, with a strong focus on reliability, scalability, and the migration of VMs to guarantee uninterrupted operations. The ensuing section will delve into the model used to assess the reliability and availability of this system.
Proposed Model
This section delineates the models applied in this study, which are constructed based on the architecture outlined in the preceding section. It further details the reliability and representative availability models in scenarios both with and without the implementation of migration strategies. All the models and simulations in this study were conducted using the Mercury Tool [41].
System Reliability Model
The model for analyzing the reliability of VANETs is depicted in Figure 3. Within this context, reliability is defined as the conditional probability that a system will continue functioning over a time interval [0, t], provided that it was operational at the inception of this interval (t = 0). The presented model (Figure 3) bears resemblance to the model in Figure 4, with a notable distinction: it excludes the MTTR transitions that would facilitate the recovery of components in the event of a failure. This exclusion is a critical aspect, as it directly impacts the system's ability to self-recover post-failure, thereby influencing the overall reliability assessment of the VANET system under study. In the RSU GROUP within the VANET system, the RSUs can host varying numbers of VMs, symbolized by the variables x, y, and z. The reliability of the model under consideration can be quantitatively assessed using Equation (1). This metric is defined as the complement of the probability of failure of any component in the system; equivalently, it is the probability that the RSU Group, RSU Logical Group, Network, and Edge components are all operational simultaneously:

R = P((#EDGE_UP > 0) AND (#NET_UP > 0) AND (#RSU_LOG_UP > 0) AND (#RSU_UP = N)) (1)

In this model, P denotes the probability that the enclosed condition on the model's marking holds. By applying this equation, it is possible to generate a curve that effectively illustrates how the system's reliability diminishes over time. This curve is a crucial analytical tool, as it provides a visual representation of the system's reliability, enabling the identification of trends and potential vulnerabilities over a specified time frame. Such an analysis is vital for understanding the robustness of the system and for making informed decisions regarding maintenance, upgrades, or other interventions to enhance the system's reliability.
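As a rough illustration of how such a reliability curve can be generated, the Monte Carlo sketch below treats the four components as a series system with exponentially distributed lifetimes and estimates R(t) as the fraction of trials in which no component has failed by time t. The MTTF values are illustrative placeholders, not the paper's calibrated parameters.

```python
import random

# Illustrative MTTFs (hours) for the four series components.
MTTF = {"EDGE": 500.0, "NETWORK": 400.0, "RSU_LOGICAL": 168.0, "RSU_PHYSICAL": 250.0}

def sample_system_lifetime() -> float:
    # A series system fails as soon as its first component fails.
    return min(random.expovariate(1.0 / m) for m in MTTF.values())

def reliability(t: float, trials: int = 100_000) -> float:
    # R(t): fraction of simulated systems still operational at time t.
    alive = sum(1 for _ in range(trials) if sample_system_lifetime() > t)
    return alive / trials

for t in (0, 100, 200, 400, 800):
    print(t, round(reliability(t), 4))
```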
Availability Model with Migration
Table 2 systematically details the elements of the availability model, employing tokens to represent the count of operational VMs within each RSU. In this model, the transitions labeled RSU_MTTF and RSU_MTTR are integral for facilitating the exchange of information among different RSU groups via VMs. For the effective operation of an RSU Group, it is imperative that the quantity of tokens in the RSU_UP state aligns with the initially set value.
The availability of an RSU is contingent upon the presence of tokens in the RSU_LOG_UP state and their absence in the RSU_LOG_DW state. Concurrently, the operational state of the NETWORK is indicated by the distribution of tokens: tokens in the NET_UP state signify an active network, while those in NET_DW denote an inactive state. This model underscores the reliance on token distribution for depicting the dynamic status of each RSU and the overall network, thus providing a comprehensive view of the system's availability and the efficacy of the migration strategy. The transitions labeled NET_MTTF and NET_MTTR are pivotal in the management of the network's operational dynamics. Concurrently, the functioning of the EDGE component relies on the E_MTTF and E_MTTR transitions, which govern the activation (EDGE_UP) and deactivation (EDGE_DW) states. The migration process, a key element for ensuring uninterrupted system operation, is regulated through MIGRATE_UP (activation) and MIGRATE_DW (deactivation) markers. These markers facilitate the transfer of VMs between RSUs, an action critical for maintaining system availability.
Figure 4 presents the proposed SPN model, which is constructed based on the previously outlined scenario. This model encompasses various components such as RSU GROUP, RSU LOGICAL GROUP, NETWORK, EDGE, and MIGRATION. Each of these components is associated with metrics like MTTF and MTTR, which are instrumental in evaluating the availability and reliability of systems and their individual components.
The operational status of the VMs in each RSU is indicated by the presence of tokens in the RSU_UP and RSU_DW states. Here, RSU_UP denotes active RSUs, while RSU_DW represents inactive ones. The transitions between the active and inactive states of each Roadside Unit (RSU) are controlled by the RSU_MTTF and RSU_MTTR transitions. Within the RSU Group, each unit contributes to the collective functionality by sharing information via VMs. For the RSU Group to function effectively, it is essential that the number of tokens in RSU_UP aligns with its pre-established initial value.
Table 3 illustrates the implementation of guard conditions in the transitions T1, T2, T3, and T4, which are situated between the RSUs designated as UP1, UP2, and UP3. These guard conditions are essential for regulating the migration of VMs in scenarios where a logical component of the RSU becomes unavailable. When this logical component is subsequently reactivated and resumes normal operation, the previously migrated VMs are reintegrated into their original RSU. This mechanism of migration and reintegration is pivotal in ensuring the seamless and continuous functioning of the system, as it provides a dynamic response to temporary outages or disruptions within individual RSUs, thereby maintaining overall system integrity and operational continuity.
The transition T2 is activated by the guard condition #RSU_LOG_DW1=1, which denotes a critical event indicative of the instability or inactivation of RSU 1's logical component. This event initiates the process for VM migration from the compromised RSU. The procedure begins by verifying the operational status of the subsequent RSU, indicated by #RSU_LOG_DW2=0. Should this RSU be operational, the migration of VMs is executed accordingly. Subsequently, in the event of a recovery and reactivation of the initially failed RSU, the previously migrated VMs are reintegrated into it. In a parallel scenario, transition T4 is invoked when #RSU_LOG_DW2=1, a condition signaling the deactivation of the logical component of RSU 2. This state necessitates the migration of VMs from RSU 2 to another operational unit, thereby ensuring the continuity of system functionality.
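These guard conditions are simple predicates over the net's marking, and the sketch below expresses them that way in Python. The marking keys and the pairing of transitions to conditions follow the text's description, but the encoding itself is an illustrative assumption.

```python
# Marking: number of tokens in each place (illustrative values).
marking = {"RSU_LOG_DW1": 1, "RSU_LOG_DW2": 0}

# Guards from the text: T2 fires when RSU 1's logical component is down,
# T4 when RSU 2's is down.
guards = {
    "T2": lambda m: m["RSU_LOG_DW1"] == 1,
    "T4": lambda m: m["RSU_LOG_DW2"] == 1,
}

def can_migrate_from_rsu1(m) -> bool:
    # Migration of RSU 1's VMs also requires RSU 2's logical part to be up.
    return guards["T2"](m) and m["RSU_LOG_DW2"] == 0

print(can_migrate_from_rsu1(marking))  # True for the marking above
```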
The availability of the model with migration is quantified using Equation (2). This equation computes the probability that the RSU Group, RSU Logical Group, Network, and Edge components are all operational concurrently. In this context, 'P' denotes the probability, while 'TOKENS' refers to the number of tokens present in a specific state or place within the model. This approach to calculating availability is crucial for assessing the effectiveness of the migration strategy in maintaining continuous operation of the system, even in the face of individual component failures or disruptions. The inclusion of these probabilistic measures provides a comprehensive understanding of the system's resilience and its ability to sustain uninterrupted service through dynamic VM migration processes:

A = P((TOKENS(EDGE_UP) > 0) AND (TOKENS(NET_UP) > 0) AND (TOKENS(RSU_LOG_UP) > 0) AND (TOKENS(RSU_UP) = N)) (2)
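To illustrate the kind of quantity Equation (2) yields, the sketch below computes steady-state availability for independent repairable components from MTTF/MTTR pairs, combines them in series, and converts the result to a "number of nines". The numeric values are invented for illustration; they are not the paper's parameters.

```python
import math

# Illustrative (MTTF, MTTR) pairs in hours for each series component.
components = {
    "EDGE": (500.0, 2.0),
    "NETWORK": (400.0, 1.0),
    "RSU_LOGICAL": (168.0, 0.5),
    "RSU_PHYSICAL": (250.0, 4.0),
}

def availability(mttf: float, mttr: float) -> float:
    # Classic steady-state availability of a repairable component.
    return mttf / (mttf + mttr)

def nines(a: float) -> float:
    # "Number of nines": 0.999 -> 3.0.
    return -math.log10(1.0 - a)

system_a = math.prod(availability(f, r) for f, r in components.values())
print(f"system availability = {system_a:.6f} ({nines(system_a):.2f} nines)")
```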
SPN Availability Model: Non-Migration Framework
The depicted SPN availability model in Figure 5 delineates the system's functionality in the absence of migration capabilities. This model parallels its counterpart incorporating migration, encompassing components such as MIGRATION, EDGE, NETWORK, RSU GROUP, and RSU LOGICAL GROUP. Integral to each component is an MTTF and a singular MTTR, with the exception of MIGRATION, which is characterized by the Happened attribute (HAP). This attribute simulates potential disasters or system instabilities, leading to the activation of MIGRATE_DW, thereby deactivating migration processes.
A notable deviation in this non-migratory model is the omission of transition mechanisms within the RSU GROUP, precluding inter-RSU migration. The activation of the recovery process (REC) is contingent upon the RSU GROUP configuration, which facilitates the identification of operational RSUs via the tokens in RSU_UP. The symbol 'N' signifies the possibility of multiple tokens within each RSU, with the quantity of tokens representing the number of operational RSUs.
Conversely, the RSU_DW token count reflects the number of non-operational RSUs. The transitions RSU_MTTF and RSU_MTTR govern the oscillation between the active and inactive states of individual RSUs. Network functionality is ensured when a token resides in NET_UP (active network), and it becomes non-functional with a token in NET_DW (inactive network). The transitions NET_MTTF and NET_MTTR regulate these state changes.
Similarly, the Edge component's operational status is indicated by the presence of a token in EDGE_UP, and its non-operational status is signified by a token in EDGE_DW. The transitions EDGE_MTTF and EDGE_MTTR are instrumental in toggling between the component's active and inactive states.
Sensitivity Analysis
This research utilizes a Design of Experiments (DoE) approach in conjunction with SPN modeling to derive comprehensive insights into the performance of Edge Computing systems. The variables and their respective levels are consistently applied across models, both incorporating and excluding migration. This methodological consistency is critical to ascertain which variable combinations exert the most significant impact on the system. By exploring these variable combinations across different scenarios, the study robustly underpins the optimization strategies proposed for enhancing the system's availability and reliability.
Sensitivity Analysis of the System Incorporating Migration
The sensitivity analysis was conducted using the experimental setup illustrated in Figure 2, with a primary focus on the time interval preceding system failure. Table 4 presents a comprehensive enumeration of the variables considered in the Design of Experiments (DoE). This enumeration includes detailed descriptions of each factor and specifies their respective levels. The factors assessed are (a) EDGE_F, (b) NET_F, (c) RSU_F, (d) RSU_R, and (e) LOG_F. For each factor, evaluations were conducted at both high and low settings to ascertain their impact on the overall system performance. The specific configurations for these factors, as applied in the various experimental scenarios, are thoroughly delineated in the aforementioned table. Table 5 comprehensively catalogs the permutations of factor levels employed in the simulations designed to assess their impact on system availability within an Edge Computing framework. Each row in the table represents a distinct amalgamation of factor values, specifically EDGE_F, NET_F, RSU_F, RSU_R, and LOG_F. The terminal column of this table quantifies the system availability corresponding to each unique factor combination. This tabulation effectively encapsulates the outcomes of the DoE simulations, facilitating the identification of factor combinations that either significantly influence or minimally affect the system's availability. The structured presentation of these results serves as a pivotal resource for comprehending the dynamics influencing system performance and the relative importance of various system components.
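A compact way to reproduce the kind of analysis summarized in Tables 4 and 5 is to enumerate a two-level full factorial design and estimate each factor's main effect. The sketch below does this with a stand-in response function and invented MTTR; the factor levels echo values mentioned in the text but the whole setup is illustrative, since the actual availability figures come from solving the SPN model.

```python
from itertools import product

# Hypothetical two-level factors (low, high), named in the paper's style.
factors = {"EDGE_F": (157.0, 210.0), "NET_F": (95.0, 98.3), "RSU_F": (168.0, 210.0)}

def response(cfg):
    # Stand-in response: steady-state availability of components in series,
    # each with a fixed illustrative MTTR of 2 hours.
    avail = 1.0
    for mttf in cfg.values():
        avail *= mttf / (mttf + 2.0)
    return avail

names = list(factors)
runs = []  # full 2^k factorial design
for levels in product(*(factors[n] for n in names)):
    cfg = dict(zip(names, levels))
    runs.append((cfg, response(cfg)))

# Main effect of a factor: mean response at its high level minus
# mean response at its low level.
for n in names:
    hi = [y for cfg, y in runs if cfg[n] == factors[n][1]]
    lo = [y for cfg, y in runs if cfg[n] == factors[n][0]]
    print(n, sum(hi) / len(hi) - sum(lo) / len(lo))
```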
The graph depicted in Figure 6, illustrating the effects of a DoE with migration, provides pivotal insights into the elements most influential on system availability within an Edge Computing environment. Foremost, the MTTF of the physical RSU emerges as the paramount factor, with its impact value approximating 0.70. This underscores the criticality of physical infrastructure reliability in the overall system performance.
Following this, the MTTR of the physical components of the RSU, with values oscillating between 0.40 and 0.45, is identified as another crucial determinant in minimizing system failures. This finding accentuates the significance of efficient repair processes in maintaining system integrity.
The interaction between the MTTR of the physical RSU and the MTTF of the logical RSU is also noted to exert a substantial influence on availability. In addition, the chart delineates factors with comparatively lower impacts, such as the interplay between the MTTF of the Edge component and the MTTF of the Network, the MTTF of the logical components, and the interaction between the MTTF of the Network and the MTTR of the physical RSU, each registering impact values below 0.05. While these elements are deemed less consequential in the present analysis, they could acquire greater significance in certain specific scenarios, suggesting the need for a nuanced understanding of different operational contexts in Edge Computing systems. Figure 7 elucidates the intricate interactions between various factors in a migration-inclusive scenario and their collective impact on system availability as assessed through a DoE approach.
In Figure 7a, the interaction between the MTTF of the Edge component and the MTTF of the Network is analyzed. A notable decrease in system availability is observed when the MTTF of the Edge (157.0) is juxtaposed with the Network's MTTF (98.3). This observation is indicative of the system's heightened sensitivity to fluctuations in these parameters, underscoring the critical nature of their balance for optimal system performance.
In Figure 7b, the dynamics between the MTTF of the Network and the MTTF of the logical RSU component are examined. An inverse relationship is discerned here: an escalation in the MTTF of the Edge (from 168.0 to 210.0) correspondingly diminishes the MTTF of the logical RSU. This pattern exemplifies the complex interdependencies within the system, where enhancing the reliability of one component may inversely affect another, thereby impacting overall system availability.
In Figure 7c, the analysis is centered on the interplay between the MTTF of the Network and the MTTR of the physical RSU. An observed increase in the Network's MTTF, along with an enhancement to 210 in the MTTF of the physical RSU component, indicates a notable enhancement in the overall system performance. This result underscores the critical importance of an integrated assessment of both failure and repair durations within the Network and physical components of the RSU. Such a holistic approach is essential for optimizing system availability. These insights collectively contribute to a deeper understanding of system reliability in Edge Computing environments, highlighting the need for a thorough evaluation of the interactions between various system components to achieve optimal system performance.
Analysis of System Sensitivity in the Absence of Migration
The effect graph in Figure 8, derived from the DoE conducted without the migration feature, elucidates the principal factors impacting system availability in an Edge Computing context. The graph reveals that the MTTF of the logical RSU holds paramount significance, as indicated by its value exceeding 0.90. This underscores the critical role of the reliability of the logical RSU in the overall system.
Secondarily, the MTTR of the RSU's physical components also emerges as a consequential factor, exhibiting values around 0.60. This finding emphasizes the importance of efficient repair mechanisms in mitigating system downtime during failures.
Other factors, though less influential, still contribute to the system's performance. These include the MTTF of the Network and the interactions between the MTTF of the Edge component and the MTTF of the logical RSU. Additionally, the interplay between the MTTR of the physical RSU and the MTTF of the logical RSU, each with impact values below 0.1, also bears significance. These findings are instrumental in enhancing system reliability, particularly in scenarios where migrating VMs between RSUs is not feasible. They inform strategic resource allocation decisions and guide interventions aimed at bolstering the overall performance of Edge Computing systems in non-migratory environments. This nuanced understanding of the relative impact of various system components and their interactions is essential for targeted improvements in system robustness and reliability. Figure 9 presents the DoE interaction graphs for the context devoid of migration, offering a clear view of how various factor combinations impact system availability. In Figure 9a, the graph underscores the system's sensitivity to changes in the MTTF of the Edge component relative to the MTTF of the Network. A significant observation here is that an elevation in the Edge's MTTF to 157.0 prompts a slight increase in the Network's MTTF, from 95.0 to 95.3. This shift underscores the substantial influence that the Edge's MTTF exerts on overall system availability.
Figure 9b examines the interplay between the MTTF of the physical RSU and that of its logical counterpart. It is observed that augmenting the reliability of the logical part of the RSU positively influences the MTTF of the physical RSU. This relationship highlights the criticality of integrating these factors to enhance the overall system performance.
Lastly, Figure 9c delves into the interaction between the MTTR and the MTTF of the physical RSU. An increase in the MTTR of the physical RSU is shown to improve the MTTF. However, an increase in MTTF does not significantly impact the MTTR. This finding accentuates the importance of a balanced approach to managing failure and repair times, as this balance is key to optimizing system availability.
Together, these insights from Figure 9 emphasize the complex nature of factor interdependencies in Edge Computing systems, particularly in scenarios where migration is not an option. Understanding these relationships is crucial for the strategic planning and implementation of measures aimed at enhancing the reliability and efficiency of such systems.
Case Study
In this segment, the paper delineates the findings from the analytical evaluation of the models introduced herein. This evaluation encompasses a comprehensive assessment of both the availability and reliability metrics for all the models under consideration. Table 6 provides a detailed account of the parameter values assigned to various system components, with these values meticulously sourced from extant scholarly publications [40,42-44]. The parameters detailed include those pertaining to the RSU group, the MTTF and MTTR for the Edge component, as well as the failure and repair durations associated with the network. Additionally, the table specifies the initial values for the tokens employed in the system, a critical element in the modeling process. The graph depicting system availability with migration, as shown in Figure 10, delineates a non-linear association between the quantity of VMs utilized and the consequent system availability. Notably, this graph exhibits a convergence of the availability metrics at specific VM counts, namely 8, 16, and 32. This overlapping of data lines suggests that the deployment of eight VMs is sufficient to assure the desired level of availability. Such an observation is pivotal in informing strategies for resource allocation and cost optimization. It implies that beyond a threshold of eight VMs, additional VMs do not significantly enhance system availability, thereby offering a pathway to maximize resource efficiency. This efficient allocation of VMs, without compromising the operational efficacy of the system, is essential in balancing cost effectiveness with system performance.
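The diminishing returns beyond eight VMs can be illustrated with a toy redundancy calculation: if the service needs only one of n identical VMs to be up, availability grows as 1 − (1 − a)^n and quickly saturates. The per-VM availability below is an invented figure used purely to show the shape of the curve, not a result from the paper's model.

```python
import math

a = 0.95  # illustrative availability of a single VM
for n in (2, 4, 8, 16, 32):
    unavail = (1.0 - a) ** n  # probability that all n VMs are down at once
    print(n, f"availability = 1 - {unavail:.3e}", f"({-math.log10(unavail):.2f} nines)")
```

Computing the unavailability (1 − a)^n directly avoids the floating-point rounding that would occur if 1 − (1 − a)^n were evaluated first for large n.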
The research study includes Figure 11, which portrays the graph of system availability in scenarios where migration between RSUs is not implemented. This graph maintains consistency in the range of VMs as used in Figure 10, varying from 2 to 32 VMs in operation. The MTTF for these VMs is set between 100 and 1000 h. A critical observation from the results depicted in this graph is the comparative reduction in system availability when migration is not employed, as opposed to scenarios where migration is feasible. Specifically, the graph shows that system availability commences at '1.00 nines' with the operation of just 2 VMs, and it marginally escalates to '1.36 nines' with the use of four VMs. This pattern of availability underscores the significance of migrating VMs between RSUs in enhancing system reliability. The ability to migrate VMs appears to be a crucial factor in achieving higher availability, as evidenced by the increase in the number of 'nines' in the availability metric. This insight highlights the importance of VM migration as a strategy to bolster the robustness and dependability of the system, especially in contexts where maintaining high availability is paramount. The study thus provides a compelling argument for incorporating VM migration between RSUs as a means to optimize system performance and reliability. In the context of a system deploying four VMs with an MTTF set at 1000 h, it is feasible to achieve a level of system availability analogous to that obtained with a deployment of 32 VMs. This finding suggests that an augmentation in the number of VMs does not correspondingly result in a substantial increase in system availability. Particularly in the model excluding migration, the availability attributed to the logical component of the RSUs emerges as a predominant factor, as discerned from the sensitivity analysis.
This distinction between the models with and without migration underscores the efficacy of the VM migration technique, especially in scenarios characterized by high demand. It accentuates the necessity of integrating VM migration into strategies for computing resource allocation. Figure 12 provides a comparative analysis, juxtaposing the average cases in scenarios of system operation with and without migration. This comparison elucidates the differential impacts of VM migration on system availability, thereby underscoring its critical role in the optimization of computing resources in demanding operational environments. Figure 12 presents an analysis of system availability based on the failure time of RSUs to assess performance in configurations with and without VM migration. In scenarios without migration, the system demonstrates notable availability, achieving approximately 10.00 nines and increasing slightly to just over 14.00 nines when the MTTF is set at 100 h. Conversely, in configurations with migration, the availability shows a marked increase as the MTTF is extended, reaching an impressive 20.00 nines with an MTTF of 100 h. These results clearly indicate that migration exerts a positive influence on system availability, particularly in contexts characterized by extended MTTF periods.
Turning to Figure 13, the reliability graph for the physical component of the system illustrates the variance in the MTTF of the physical aspect of the RSU, with values spanning 168.0, 250.0, and 500.0 over a duration of 800 h. Reliability is a critical metric for evaluating the system's capability to function consistently without failures and interruptions. The data portrayed in this graph establish a direct correlation between the MTTF of the physical component and the overall reliability of the system. This relationship indicates that an increase in the MTTF of the physical part is directly proportional to an enhancement in the system's reliability. This insight is pivotal for understanding the impact of the physical component's robustness on the overall operational stability of the system. As the MTTF is extended, the system exhibits enhanced robustness and resilience, thereby diminishing the frequency of failures and bolstering reliable operations. Conversely, a system characterized by an MTTF of less than 168 h tends to be more vulnerable to failures and faces challenges in sustaining stable operations. This insight is invaluable for strategizing enhancements to the physical infrastructure of systems in Edge Computing environments. Effective planning in this context encompasses the implementation of both preventive and corrective strategies aimed at augmenting the reliability, quality, and availability of services delivered to end users.
Figure 14 displays the reliability graph for the logical component of the system, with the MTTF varying between 168.0, 250.0, and 500.0 over a span of 800 h. Assessing the MTTF of the logical part is vital to gauge its capacity to sustain adequate operational performance, particularly in Edge Computing contexts where uninterrupted availability is paramount. The data gleaned from this graph offer insights into the reliability trends of the system's logical component relative to the MTTF of VMs.
It is observed that the system with the lowest MTTF of 168 h exhibits a decline in reliability before reaching 80 h of operation. This trend signifies a reduced capability of the system to maintain stability and remain free from failures over a shorter duration. In contrast, the system with the highest MTTF of 500 h demonstrates consistent reliability throughout the initial 80 h of operation, underscoring its superior ability to remain operational and reliable for an extended period before encountering declines in reliability. These findings are crucial in understanding the resilience of the logical component of the system and in guiding decisions related to the management and optimization of Edge Computing systems.
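Assuming (as is common for SPN timed transitions) exponentially distributed lifetimes, the reliability of a single component is R(t) = exp(−t/MTTF). The short sketch below tabulates this formula for the three MTTF values discussed and reproduces the qualitative behavior of the curves, for example R(80 h) ≈ 0.62 for MTTF = 168 h versus ≈ 0.85 for MTTF = 500 h. This is a sketch of the underlying formula under that exponential assumption, not the Mercury Tool's output.

```python
import math

def reliability(t: float, mttf: float) -> float:
    # Exponential lifetime model: survival probability at time t.
    return math.exp(-t / mttf)

for mttf in (168.0, 250.0, 500.0):
    row = [f"{reliability(t, mttf):.3f}" for t in (0, 80, 200, 400, 800)]
    print(f"MTTF={mttf:>5}:", row)
```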
The analysis of the data underscores the heightened significance of the logical component in determining the system's reliability, surpassing the influence of the physical part. Although both aspects are integral, the role of VMs, particularly their efficiency in terms of MTTF, emerges as a critical factor in ensuring uninterrupted system availability. The observation that a system with an elevated MTTF exhibits prolonged stable reliability prior to any decline in performance indicates that the efficacy and dependability of VMs are key determinants of system stability. Consequently, it is imperative to place emphasis on strategies aimed at bolstering the reliability of the system's logical aspect. This strategic focus encompasses several key measures:
• VM management policies: The development and implementation of comprehensive management policies for VMs are crucial. These policies should be designed to effectively handle resource allocation, scaling, and migration, thereby enhancing system performance.
• Performance monitoring: Rigorous and continuous monitoring of system performance is essential. This enables the early identification and resolution of potential issues, thereby maintaining the system's operational integrity.
• Maintenance practices: The adoption of appropriate and systematic maintenance practices is vital in ensuring the optimal functioning of VMs. Regular maintenance activities, including updates and troubleshooting, are necessary for sustaining system health and efficiency.
Implementing these strategies will significantly enhance the availability and quality of the services rendered by the system. Such enhancements not only contribute to the system's resilience and operational efficiency but also to the overall user experience in Edge Computing environments. Therefore, prioritizing improvements in the logical part of the system is not merely a technical imperative but a strategic approach to achieving superior service delivery.
Conclusions
This research meticulously analyzed system availability within networks of RSUs using SPN and the DoE methodology, focusing on environments with and without VM migration. The findings successfully met the study's objectives, providing insightful revelations about the impact of critical factors on system performance in both migration scenarios. A significant aspect highlighted by the study is the critical role of VM migration in VANETs, which proves to be fundamental in optimizing system availability. This process not only ensures a balanced distribution of workload but also substantially minimizes system downtime. The study identifies the MTTF of physical RSUs and the MTTR of the logical components of RSUs as pivotal determinants of system availability. These findings emphasize the necessity of enhancing these components to secure reliable and efficient operation in VANET contexts.
For future research directions, a more granular investigation of the identified factor interactions is suggested. This should include considering a broader range of variables and conducting empirical experiments in real-world environments to further validate and refine the proposed model. Additionally, exploring alternative approaches to VM migration and incorporating advanced load-balancing strategies are proposed as avenues for further study. These investigations are expected to provide deeper insights and enable additional improvements in system availability and performance, especially in the complex and evolving landscape of Edge Computing. This comprehensive approach to future research will not only enhance the understanding of system dynamics in VANETs but also contribute to the development of more resilient and efficient vehicular network systems.
Figure 6. Impact of different factors on the system with migration.
Figure 7. Interaction between factors in the system with migration.
Figure 8. Impact of different factors on the system without migration.
Figure 9. Interaction between factors in the system without migration.
Figure 12. Comparison between the with-migration and without-migration models.
Figure 13. Reliability: physical part of the RSU.
Figure 14. Reliability: logical part of the RSU.
Table 2. Description of the main components of the model.
RSU_MTTR: Represents the MTTR of the system's RSUs
RSU_MTTF: Represents the MTTF of the system's RSUs
LOG_MTTR: Represents the MTTR of the logical part of the RSUs
LOG_MTTF: Represents the MTTF of the logical part of the RSUs
T1-T4: Transitions between RSUs in the system
Table 3. Description of the model's guard conditions.
"year": 2023,
"sha1": "beeeec0ae2a6ab08580d3f7a99637550bfd84864",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/23/9485/pdf?version=1701187871",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d836cc54e888733daef49c7e46ff2f4eefeb4f1f",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Myeloid cell-mediated targeting of LIF to dystrophic muscle causes transient increases in muscle fiber lesions by disrupting the recruitment and dispersion of macrophages in muscle
Abstract Leukemia inhibitory factor (LIF) can influence development by increasing cell proliferation and inhibiting differentiation. Because of its potency for expanding stem cell populations, delivery of exogenous LIF to diseased tissue could have therapeutic value. However, systemic elevations of LIF can have negative, off-target effects. We tested whether inflammatory cells expressing a LIF transgene under control of a leukocyte-specific, CD11b promoter provide a strategy to target LIF to sites of damage in the mdx mouse model of Duchenne muscular dystrophy, leading to increased numbers of muscle stem cells and improved muscle regeneration. However, transgene expression in inflammatory cells did not increase muscle growth or increase numbers of stem cells required for regeneration. Instead, transgene expression disrupted the normal dispersion of macrophages in dystrophic muscles, leading to transient increases in muscle damage in foci where macrophages were highly concentrated during early stages of pathology. The defect in inflammatory cell dispersion reflected impaired chemotaxis of macrophages to C-C motif chemokine ligand-2 and local increases of LIF production that produced large aggregations of cytolytic macrophages. Transgene expression also induced a shift in macrophage phenotype away from a CD206+, M2-biased phenotype that supports regeneration. However, at later stages of the disease when macrophage numbers declined, they dispersed in the muscle, leading to reductions in muscle fiber damage, compared to non-transgenic mdx mice. Together, the findings show that macrophage-mediated delivery of transgenic LIF exerts differential effects on macrophage dispersion and muscle damage depending on the stage of dystrophic pathology.
Introduction
A recently developed strategy for targeting therapeutic molecules to dystrophic muscle exploits inflammatory cells as natural vectors to selectively express and deliver potentially beneficial proteins to diseased muscle (1). Because inflammatory cells rapidly invade dystrophic muscle specifically at the times and locations where pathology is active, and afterwards naturally decline in numbers and activity when pathology attenuates, they provide a rapidly responsive, intrinsic system for targeting disease. This targeting approach is especially valuable in diseases such as Duchenne muscular dystrophy (DMD), which is unpredictably 'asynchronous' (2): different muscles, and even different groups of muscle fibers within the same muscle, can be at different stages of injury and repair. As a consequence, foci of muscle damage and necrosis can exist hundreds of micrometers from sites where muscle fibers are regenerating, or from sites where muscle fibers have not experienced damage.
Leukemia inhibitory factor (LIF) has long been expected to have potential therapeutic benefits in treating muscular dystrophy, attributable to its numerous influences on myogenesis. For example, LIF stimulates myoblast proliferation in vitro (3)(4)(5), which is mediated through the Jak2-Stat3 pathway (6) and may also expand their numbers in vitro by reducing their frequency of apoptosis (7). In addition, LIF can inhibit the formation of postmitotic, multinucleated myotubes in vitro by inhibiting myoblast differentiation, which may also lead to expansion of myoblast numbers (7). LIF can also affect growth of myotubes at later stages of myogenesis in vitro by increasing their protein synthesis via an Akt-mediated mechanism (8). These pro-myogenic, anabolic influences of LIF on muscle cells in vitro are reflected in the responses of muscle to changes in LIF expression or delivery following muscle injury or in muscle disease. Elevations of systemic LIF levels in mice experiencing acute muscle injury or denervation cause faster growth of regenerative muscles (9,10) and muscle regeneration following acute injury is slower in LIF-null mutant mice (11). Similarly, increased delivery of LIF to diaphragm muscles in the mdx mouse model of DMD produced larger muscle fibers (12).
Although those in vitro and in vivo effects of LIF on myogenesis support its potentially beneficial role in the treatment of injured and diseased muscle, increased delivery of LIF can also cause negative, off-target effects. For example, prolonged, systemic elevations of LIF in cancer can significantly increase wasting of non-diseased muscle fibers (13)(14)(15), emphasizing the importance of targeting LIF specifically to sites of active muscle pathology, to enhance myogenesis at those sites. We previously addressed this obstacle to targeting LIF specifically to sites of pathological muscle damage by transplanting transgenic bone marrow cells (BMCs) that expressed a LIF transgene under control of the leukocyte-specific, CD11b promoter (CD11b/LIF transgene) (1). Because the transplanted cells subsequently differentiated into inflammatory cells in which the LIF transgene was expressed at high levels at sites of muscle damage, the system provided a targeted delivery of LIF to sites of pathology. The primary beneficial outcome of that therapeutic intervention was a reduction in fibrosis, which is a debilitating feature of muscular dystrophy (1).
Expression of the CD11b/LIF transgene in mdx mice also produced a significant increase in the number of regenerating fibers in dystrophic muscle, which could represent either a beneficial or a detrimental effect of targeted delivery of LIF. On one hand, the outcome could reflect improved muscle regeneration, which would be consistent with many of the influences of LIF on muscle cells in vitro. Alternatively, the increase in muscle regeneration could also result from amplification of muscle fiber damage, which would lead to more repair. In either scenario, the LIF-induced changes in the extent of regeneration of dystrophic muscle could be caused by perturbations of the immune response to muscular dystrophy, because LIF can modulate the inflammatory response to injury or disease (16,17) and perturbations of the immune response to muscular dystrophy can either worsen muscle damage (18)(19)(20)(21) or improve repair (1,(21)(22)(23)(24)(25)(26).
In the present investigation, we test whether expression of the CD11b/LIF transgene in inflammatory cells leads to changes in the function of inflammatory cells that can influence injury or growth of dystrophic muscles. In particular, we assess whether the CD11b/LIF transgene influences the numbers, distribution or phenotype of innate immune cells or specific macrophage subpopulations in dystrophic, mdx muscle and whether their cytotoxicity or chemotactic response is affected. We also test whether expression of the transgene amplifies muscle fiber damage in vivo or influences the regeneration or growth of dystrophic muscles over the course of mdx dystrophy. Together, the findings will contribute to the assessment of whether targeted delivery of the CD11b/LIF transgene to dystrophic muscle has promising therapeutic potential.
Results
We confirmed that LIF expression was elevated in macrophages located in muscles of LIF/mdx mice by isolating macrophages from pooled limb muscles obtained from LIF/mdx and WT/mdx mice and assaying for LIF expression by QPCR (Fig. 1A). We next investigated whether LIF protein occurred at elevated levels where macrophages accumulated in transgenic muscles. Labeling of adjacent muscle cross-sections for CD68+ macrophages (Fig. 1B), LIF protein (Fig. 1C) and CD206+ macrophages (Fig. 1D) showed that LIF protein content was increased at sites enriched with CD68+ macrophages but with fewer CD206+ macrophages. Our previous findings demonstrated that LIF protein content was greater in inflammatory lesions of LIF/mdx muscles compared to WT/mdx muscles (1). We also assayed whether there were higher levels of LIF protein in muscle fibers of LIF/mdx mice compared to fibers in WT/mdx mice, which would indicate ectopic expression of the gene in muscle, but found no significant difference between the two genotypes (Supplementary Material, Fig. S1). These data validate that the overexpression of LIF in whole muscles of LIF/mdx mice results from LIF expression by transgenic macrophages.
We then tested whether the greater co-localization of LIF with M1-biased macrophages was attributable to greater expression of LIF mRNA by M1-biased macrophages than by M2-biased macrophages. First, we validated that unstimulated (Th0), bone marrow-derived macrophages (BMDMs) isolated from LIF/mdx mice (CD11b/LIF+ BMDMs) expressed significantly higher levels of LIF than BMDMs from WT/mdx mice (CD11b/LIF- BMDMs) (Fig. 1E). We then assayed whether expression levels of LIF were affected by stimulation of CD11b/LIF+ BMDMs with Th1 (TNFα and IFNγ) or Th2 cytokines (IL4 and IL10). Th1 cytokines are expressed during pro-inflammatory responses and activate macrophages toward an M1 phenotype (27–31). Th2 cytokines downregulate inflammation and activate an M2 phenotype in macrophages (20,21,23,27,29,30,32–35). Our results show that stimulation of CD11b/LIF+ BMDMs with Th1 cytokines approximately doubled their production of LIF (Fig. 1E). However, stimulation of CD11b/LIF+ BMDMs with Th2 cytokines did not affect LIF expression relative to Th0 transgenic BMDMs (Fig. 1E). These findings show that expression of LIF in LIF/mdx mice is highest in M1-biased macrophages, which indicates that LIF is delivered to muscles in LIF/mdx mice primarily by pro-inflammatory macrophages.

Figure 1. LIF expression is elevated in intramuscular macrophages of LIF/mdx mice. Histogram of LIF expression in macrophages isolated from the hind limb muscles of WT/mdx and LIF/mdx mice (A). * indicates significant difference compared to CD11b/LIF- muscle macrophages (P < 0.05). P-values are based on a two-tailed t-test. N = 5 for WT/mdx and n = 3 for LIF/mdx mice. Bars = SEM. Immunolabeling of adjacent cross-sections with anti-CD68 (B), anti-LIF (C) and anti-CD206 (D) shows local, elevated LIF protein content at sites that are most highly enriched with CD68+ macrophages in LIF/mdx muscles. Fibers numbered '1,' '2' or '3' are individual fibers that appear in adjacent sections, to provide reference points. Scale bars = 50 μm. QPCR analysis of CD11b/LIF- and CD11b/LIF+ BMDMs that were unstimulated (Th0) or stimulated with Th1 or Th2 cytokines shows increased expression of lif in M1-biased BMDMs (E). * indicates significant difference between the two groups indicated by the ends of the horizontal brackets (P < 0.05). N = 3 for all groups. P-values are based on two-way ANOVA with Tukey's multiple comparisons test. Bars = SEM.
CD11b/LIF transgene expression reduces CD206+ macrophage numbers
We assessed the effects of CD11b/LIF transgene expression on muscle inflammation in mdx mice during the acute onset of pathology (1-month-old), successful regeneration (3-months-old) and the period of progressive degeneration (12-months-old), which includes progressive reductions in satellite cell numbers and function, muscle fiber size and muscle strength, together with increasing fibrosis (36–38). Although the total number of CD11b+ innate immune cells in mdx muscle declined following the acute peak of pathology (Fig. 2A–F, M), transgene expression did not affect CD11b+ cell numbers at any stage (Fig. 2M). This negative result resembles the absence of treatment effect of CD11b/LIF transgene expression on CD68+ macrophages in mdx muscle (1). However, when we assayed whether the transgene affected the numbers of CD206+, M2-biased macrophages (Fig. 2G–L, N), we observed a transgene-mediated reduction in the numbers of CD206+ macrophages at 1-month and 3-months of age, although their numbers returned to non-transgenic mdx levels by 12-months of age (Fig. 2N). These findings indicate that CD11b/LIF transgene expression selectively reduces the numbers of CD206+ macrophages in dystrophic muscles at early stages of the disease.
Macrophage-mediated delivery of transgenic LIF reduces macrophage dispersion
Although we found no effect of the CD11b/LIF transgene on total numbers of CD11b+ innate immune cells in mdx muscle, the transgene dramatically influenced the dispersion of immune cells in muscle. At the 1-month time-point, CD11b+ (Fig. 3A), CD68+ (Fig. 3B) and CD206+ (Fig. 3C) macrophages were broadly dispersed throughout the muscles of WT/mdx mice. In contrast, CD11b+ cells (Fig. 3D) and CD68+ macrophages (Fig. 3E) showed clumped population dispersions in LIF/mdx muscles, despite no effect of the transgene on the total numbers of either cell population in muscle (Fig. 2M) (1). CD206+ macrophages also exhibited a clumped population dispersion in inflammatory lesions of LIF/mdx mice (Fig. 3F), although the effect was not as apparent as it was for CD68+ macrophages. The transgene did not affect macrophage dispersion at the 3-month (Fig. 3G–L) or 12-month (data not shown) time-points.
In addition to reducing available CCL2, transgene expression may also reduce the chemotactic response of macrophages to CCL2. We used a chemotaxis assay to test whether CD11b/LIF transgene expression could reduce the migration of BMDMs in response to CCL2 and found that Th0 and M2-biased CD11b/LIF- BMDMs were responsive to CCL2 (Fig. 4I). However, expression of the transgene by BMDMs eliminated their chemotactic response to CCL2 (Fig. 4I). Collectively, our results show that elevated expression of LIF by transgenic macrophages reduces their expression of CCL2 and their chemotactic response to CCL2, which may contribute to the disruption of the normal dispersion of macrophages in mdx muscles.
Muscle fiber damage is increased at sites of increased macrophage accumulation
Prolonged activation of M1-biased macrophages in mdx muscles can exacerbate fiber damage through muscle membrane lysis (22,50,51). We assayed whether the localized increase in CD68+ macrophages could affect fiber damage in LIF/mdx muscles. Labeling of injured (albumin+) fibers (52) in 1-month-old muscles of WT/mdx (Fig. 5A) and LIF/mdx (Fig. 5B) mice showed albumin+ fiber clusters that resembled CD68+ macrophage clusters at the same time-point (Fig. 3). Labeling of adjacent cross-sections of LIF/mdx muscles for CD68+ macrophages (Fig. 5C), albumin+ fibers (Fig. 5D) and CD206+ macrophages (Fig. 5E) confirmed co-localization of injured fibers with areas enriched with CD68+ macrophages and relatively few CD206+ macrophages. Quantification of albumin+ fibers at each stage of the pathology showed a transient increase in fiber damage of LIF/mdx muscles at 1- and 3-months of age (Fig. 5F). However, at 12-months of age, when macrophages are broadly dispersed in muscles, LIF/mdx muscles showed significantly less fiber damage (Fig. 5F). We also assayed whether transgene expression affected the variance of muscle fiber cross-sectional area, which is also an indicator of muscle pathology (53,54), and found a transient increase in fiber area variance at the 1-month time-point (Fig. 5G). Increased variance occurs when there is increased damage because there are more injured fibers and each injured fiber can be at a different stage of repair and growth, which increases size variance in the population.
In addition to increasing muscle damage by increasing the numbers of M1-biased macrophages at sites of injury, the transgene may also increase the cytotoxic potential of individual macrophages. To test this possibility, we performed a fluorescence microscopy-based cytotoxicity assay to measure macrophage-mediated lysis of myoblasts. In this approach, non-labeled BMDMs were co-cultured with muscle cells that were pre-labeled with the fluorescent marker CFSE (488 nm emission). Following the cytotoxicity period, lysed cells were labeled using GelRed (594 nm emission), a cell membrane-impermeable DNA-binding dye. The proportion of lysed muscle cells (CFSE+GelRed+ cells) out of total muscle cells (CFSE+ cells) was quantified using fluorescence microscopy (Fig. 6A–F). We validated the sensitivity of this assay by showing a positive correlation between the numbers of lysed muscle cells and the numbers of wild-type BMDMs present in the co-cultures (Fig. 6G). We used this approach to test the cytotoxic potential of CD11b/LIF- and CD11b/LIF+ BMDMs. The BMDMs were polarized to a cytolytic phenotype using Th1 cytokines or left in an unpolarized state prior to co-culturing the BMDMs with muscle cells. Although Th1-stimulated BMDMs of both genotypes showed increased cytotoxicity compared to genotype-matched, unpolarized BMDMs, transgene expression had no effect on the cytotoxic potential of Th1-stimulated or unstimulated BMDMs (Fig. 6H).
Our results indicate that increased muscle damage observed in LIF/mdx mice is caused by increased, localized accumulation of M1-biased macrophages at inflammatory lesions and not by increased cytotoxic potential of CD11b/LIF+ macrophages.
LIF transgenic macrophages accumulate at sites of mdx muscle growth and repair
Our previous work showed an increase in the numbers of regenerating, developmental myosin heavy chain-positive (dMHC+) fibers in LIF/mdx muscles (1), which suggested the possibility that those sites of repair could be associated with elevated numbers of transgenic macrophages. We tested whether high densities of macrophages were associated with elevated numbers of dMHC+ fibers by labeling adjacent muscle sections for CD68+ macrophages (Fig. 7A, D), dMHC+ fibers (Fig. 7B, E) and CD206+ macrophages (Fig. 7C, F), which showed that regions of muscle regeneration were most enriched with CD68+ macrophages. We then assayed whether the proportion of CD68+ macrophages located at sites of muscle regeneration was greater in LIF/mdx muscles. We observed a nearly 2-fold increase in the density of CD68+ cells at areas of regeneration in LIF/mdx muscles compared to WT/mdx muscles, and found that CD68+ macrophages in LIF/mdx muscles were 3.7-fold more concentrated at regenerating areas than at non-regenerating areas (Fig. 7G–K). Thus, CD11b/LIF transgene expression increased the density of CD68+ macrophages at sites of muscle regeneration.

Figure 2 (panels M and N). Numbers of CD11b+ (M) and CD206+ (N) cells were normalized to muscle volume in mice of both genotypes along the course of pathology and show a reduction of CD206+ macrophages in LIF/mdx muscles. * indicates significant difference compared to age-matched, WT/mdx mice (P < 0.05). # and † indicate significant difference compared to 1- and 3-month mice of the same genotype, respectively. P-values are based on two-tailed t-tests. CD11b+ cell data: n = 5 for all groups. CD206+ cell data: n = 7 for both groups at the 1-month time-point and n = 5 for all groups at the 3- and 12-month time-points. Bars = SEM.
CD11b/LIF expression does not affect mdx muscle growth or repair
The observation that LIF transgenic macrophages were present at high numbers at sites of dMHC+ fibers suggested two interpretations. First, the transgenic macrophages may promote muscle growth and regeneration. Alternatively, the elevated numbers of dMHC+ fibers may occur at sites where macrophages induced cytolysis, leading to subsequent repair. We tested whether transgenic macrophages promoted growth of mdx muscles by assaying for treatment effects on muscle mass or muscle fiber size. However, LIF/mdx mice showed no significant differences in muscle mass (Fig. 8A, B), fiber size (Fig. 8C–F) or number of fibers per muscle (Fig. 8G) at the 1-, 3- and 12-month time-points. In addition, expression of the transgene did not affect the numbers of Pax7+, MyoD+ or myogenin+ cells in mdx muscle (Fig. 9A–D). Our results indicate that macrophage-mediated delivery of transgenic LIF does not have a significant effect on muscle growth or regeneration during the course of mdx dystrophy.
Discussion
The primary finding in our investigation is that expression of a CD11b/LIF transgene by inflammatory cells in dystrophic muscle amplifies muscle fiber damage during the early peak of mdx muscle pathology. However, as inflammatory cell numbers diminished and their dispersal increased during progressive stages of the pathology, the detrimental effect of the transgene declined. By 12 months of age, expression of the transgene produced a significant reduction of fiber damage. The amplification of fiber damage early in the disease is attributable to high, local concentrations of CD68+ macrophages, which can lyse mdx muscle fibers through a free-radical-mediated mechanism (22,51).

Figure 4 (panels E–I). The proportion of F4/80+ macrophages that express CCL2 (F4/80+CCL2+) is reduced in the muscles of LIF/mdx mice (E). * indicates significant difference compared to age-matched, WT/mdx mice (P < 0.05). # indicates significant difference compared to 1-month-old mice of the same genotype (P < 0.05). N = 5 for all groups except WT/mdx muscles at the 3-month time-point (n = 4). ELISA of conditioned media shows reduced secretion of CCL2 from WT BMDMs treated with rLIF (F). * indicates significant difference compared to vehicle-treated BMDMs (P < 0.05). N = 5 for both groups. ELISA of conditioned media from CD11b/LIF- and CD11b/LIF+ BMDMs shows reduced secretion of CCL2 mediated by transgene expression (G). * indicates significant difference compared to CD11b/LIF- BMDMs (P < 0.05). N = 5 for both groups. QPCR analysis of CD11b/LIF- and CD11b/LIF+ BMDMs confirms that the reduced secretion of CCL2 shown in (G) is caused by reduced expression of ccl2 in CD11b/LIF+ BMDMs (H). * indicates significant difference compared to CD11b/LIF- BMDMs (P < 0.05). N = 5 for CD11b/LIF- BMDMs and n = 4 for CD11b/LIF+ BMDMs. In vitro analysis of the chemotactic response of Th0-, Th1- and Th2-stimulated BMDMs shows a reduced response to CCL2 from CD11b/LIF+ BMDMs (I). * indicates significant difference compared to genotype-matched BMDMs receiving the same stimulation (P < 0.05). N = 3 for all groups except CD11b/LIF+ BMDMs with Th0 and Th2 activation (n = 2). P-values are based on two-tailed t-test for all data. Bars = SEM.
Figure 5. Muscle fiber damage is more extensive in LIF/mdx muscles at sites of macrophage accumulation. Cross-sections from 1-month-old WT/mdx (A) and LIF/mdx (B) muscles labeled with anti-albumin show injured fiber clusters that resemble CD68+ macrophage dispersion (Fig. 3B and E). Scale bars = 500 μm. Immunolabeling of adjacent cross-sections from LIF/mdx muscles with anti-CD68 (C), anti-albumin (D) and anti-CD206 (E) shows greater co-localization of injured fibers with CD68+ macrophages relative to CD206+ macrophages. Fibers numbered '1,' '2' or '3' are individual fibers that appear in adjacent sections, to provide reference points. Scale bars = 50 μm. Transgene expression transiently increases the dystrophic pathology, as shown by an increase in the proportion of albumin+ fibers (F) and muscle fiber CSA variance (G) in LIF/mdx mice. * indicates significant difference compared to age-matched, WT/mdx mice (P < 0.05). # and † indicate significant difference compared to 1- and 3-month mice of the same genotype, respectively. P-values are based on two-tailed t-tests. Albumin+ fibers: n = 5 for all groups. Fiber variance: n = 5 for all groups except LIF/mdx samples at the 12-month time-point (n = 4). Bars = SEM.
Although the cytotoxicity of the transgenic macrophages did not differ from that of wild-type macrophages, the extent of muscle membrane lysis increased as the numbers of macrophages increased in vitro or in inflammatory lesions in vivo; thus, the defect in CD68+ macrophage dispersal in the muscle produced high densities of cytolytic cells at foci of muscle fiber damage.
The high, local concentrations of muscle macrophages caused by the CD11b/LIF transgene occurred despite previous findings showing that elevated LIF expression reduced the total numbers of F4/80+ monocytes/macrophages recruited to mdx muscles at early stages of the pathology (1). This inhibitory effect on the recruitment of monocytes/macrophages reflects some specificity of the influence of transgenic LIF on particular leukocyte populations, because we found no effect of transgene expression on the numbers of CD11b+ innate immune cells in mdx muscle. CD11b is expressed by monocytes and macrophages, but it is also expressed by basophils, neutrophils, eosinophils and NK cells, all of which are present in elevated numbers in mdx muscles (55–58). Thus, elevated LIF expression does not reduce the aggregate numbers of innate immune cells in dystrophic muscle, but is more specifically inhibitory for monocytes/macrophages.
The selective reduction in monocytes/macrophages caused by elevated LIF is contrary to expectations based on other investigations. For example, in vivo observations have shown that the recruitment of macrophages to sites of tissue injury in the peripheral or central nervous system is reduced in LIF-null mutant mice (59), and in vitro findings have demonstrated that LIF is directly chemoattractive to macrophages and other myeloid cells (59,60). Our findings indicate that the reduction in macrophage recruitment caused by increased LIF is attributable to LIF-mediated inhibition of the powerful chemotactic signaling by CCL2. CCL2 plays a central role in regulating the traffic of immune cells to sites of muscle injury (61,62), intramuscular macrophages that express CCL2 play a major role in recruiting leukocytes to acutely injured muscles (63), and mdx muscle cells and inflammatory cells can release CCL2 to promote inflammation (64). Expression of the CD11b/LIF transgene reduced the production of CCL2 by macrophages and reduced the chemotactic response of macrophages to CCL2, both of which may underlie the reduction in monocyte/macrophage recruitment and dispersion in LIF/mdx muscles.

Figure 7 (panel K). Histogram showing increased numbers of CD68+ macrophages located at sites of dMHC+ regenerative areas relative to dMHC- areas in muscles of WT/mdx and LIF/mdx mice (K). * indicates significant difference between the two groups indicated by the ends of the horizontal brackets (P < 0.05). P-values are based on two-tailed t-test for all groups. N = 5 for all groups. Bars = SEM.
The reduction in numbers of CD206+ macrophages, an M2-biased phenotype that can promote muscle fibrosis and regeneration (18,19,22,65), indicates that elevated expression of LIF may influence macrophage phenotype, shifting macrophages toward a pro-inflammatory, cytolytic, M1-biased phenotype. That possibility is supported by previous findings which showed that elevated expression of LIF in inflammatory cells in mdx muscles reduced the expression of IL-4 and IL-10 (1), which can be produced by M2-biased macrophages and promote the M2 phenotype (21–23,29,30,32,34,35). However, this differs from the role of LIF in regulating macrophage phenotype in some other diseases. For example, blockade of LIF signaling in tumors in which LIF is expressed at high levels produced a reduction in the expression of M2 phenotypic markers in tumor-associated macrophages, including CD206 and CD163 (66). In addition, peripheral blood monocytes from human donors that were directly stimulated with LIF in vitro exhibited an M2-biased phenotype (17).

Figure 8. The transgene has little influence on muscle fiber size or growth. Measurements of body mass (A), TA muscle mass to body mass ratio (B), fiber CSA (C–F) and fiber numbers (G) show no significant difference between WT/mdx and LIF/mdx mice. * indicates significant difference compared to age-matched, WT/mdx mice at the same CSA bin (P < 0.05). # and † indicate significant difference compared to 1- and 3-month mice of the same genotype, respectively. P-values are based on two-tailed t-tests. N = at least 6 for all groups shown in each data graph. Bars = SEM.
Our finding that expression of the CD11b/LIF transgene did not amplify the number of satellite cells in mdx muscles contrasts with previous observations which showed that LIF could increase numbers of C2C12 myoblasts in vitro by increasing their proliferation, reducing their apoptosis and delaying their differentiation into post-mitotic myotubes (3,4,6,7,67). However, whether elevated delivery of LIF to injured or diseased muscle affects satellite cell numbers in vivo had not been previously tested. The lack of effect of CD11b/LIF transgene expression on satellite cell numbers is therapeutically relevant because reductions in satellite cell numbers over the course of mdx muscular dystrophy contribute significantly to the decline of the regenerative potential of dystrophic muscle (68–71). Similarly, expression of the CD11b/LIF transgene did not affect muscle fiber size in mdx mice. This differs from the increase in mdx muscle fiber size that resulted from suturing alginate rods infused with recombinant LIF to dystrophic muscles, which allowed LIF to diffuse into the muscle for 3 months (12). These differing treatment outcomes may reflect differences in the concentration, location and timing of LIF delivery to the mdx muscles, as indicated by investigations of the effects of LIF administration to acutely injured muscle. For example, continuous delivery of recombinant LIF to acutely injured muscle by a mini-osmotic pump increased muscle fiber growth (9), but systemic elevations of recombinant LIF using three intraperitoneal injections per week did not affect the growth of muscle fibers following acute injury (72).
Collectively, our current findings and previous work (1) show that the therapeutic value of inflammatory cell-mediated delivery of a CD11b/LIF transgene to dystrophic muscle may result primarily from its reduction of muscle fibrosis, and not from improving the growth or regenerative capacity of dystrophic muscle. Expression of the transgene produced long-term reductions in the expression and accumulation of connective tissue proteins in dystrophic muscle (1), diminishing muscle stiffness, a debilitating feature of muscular dystrophy (73–75). However, the potential for expression of the transgene to cause transient increases in muscle fiber damage early in the pathology, while reducing damage at later stages, indicates that this therapeutic approach would be best administered at later stages of the pathology, when progressive fibrosis is a prominent feature of the disease.
Materials and Methods

Mice
All experimentation complied with relevant ethical regulations for animal testing and research, and experimental study protocols were approved by the Chancellor's Animal Research Committee at the University of California, Los Angeles. C57BL/10ScSn-Dmdmdx/J mice (mdx mice) were purchased from The Jackson Laboratory (Bar Harbor, ME) and bred in specific pathogen-free vivaria.
The CD11b/LIF mdx mouse line was generated using the following strategy. The complete Mus musculus LIF cDNA sequence (611-bp; NM_008501) was amplified by PCR and ligated into a pGL3-Basic vector (Promega) at the Nco I/Xba I sites. The pGL3-Basic vector also contained a 550-bp fragment of the human CD11b promoter at the Hind III site, upstream of the LIF insertion site. The 1215-bp hCD11b/LIF fragment was isolated from pGL3-Basic by restriction endonuclease digestion with Xho I/Xba I and used for pronuclear injection into CB6F1 eggs to generate transgenic mice. Positive founders were identified by PCR screening for the hCD11b/LIF construct. Founder mice were backcrossed with C57BL/6J mice for at least seven generations to generate hemizygous, transgenic (CD11b/LIF.Tg+) mice.
CD11b/LIF mdx transgenic mice were produced by crossing CD11b/LIF.Tg+ hemizygous males with mdx females to generate CD11b/LIF.Tg+ hemizygous, transgenic mice that were dystrophin-deficient. Dystrophin deficiency was verified by ARMS PCR screening and the presence of the hCD11b/LIF construct was determined by PCR screening. The CD11b/LIF mdx mice were backcrossed with wild-type mdx mice for seven generations to produce hemizygous CD11b/LIF mdx mice. The CD11b/LIF mdx line is maintained as hemizygous to produce transgenic (LIF/mdx) mice and wild-type (WT/mdx) littermate controls for experimentation. We showed in previous work that muscle tissue from this transgenic mouse line has more than 60% greater expression of LIF than WT/mdx mice (1).
LIF/mdx and WT/mdx mice were euthanized by inhalation of 32% isoflurane (Zoetis) at 1-, 3-or 12-months of age. Body mass was recorded prior to tissue collection. Both tibialis anterior (TA) muscles were dissected from each mouse and the individual muscle masses were recorded. Investigators collecting data and performing analysis were aware of animal numbers only and were blinded to treatment groups.
Immunohistochemistry
The right TA muscle from each male mouse was dissected and immediately frozen in O.C.T. compound (Tissue-Tek) in liquid nitrogen-cooled isopentane. Muscle cross-sections were cut at a thickness of 10 μm at −20 °C and mounted onto glass slides. Cross-sections were fixed for 10 min in acetone cooled to −80 °C (for sections to be labeled with anti-CD11b, anti-CD206, anti-CD68, anti-developmental myosin heavy chain (dMHC) or anti-MyoD), 2% paraformaldehyde (PFA) cooled to 4 °C (for sections to be labeled with anti-LIF), 4% PFA cooled to 4 °C (for sections to be labeled with anti-Pax7 or anti-myogenin), or methanol cooled to 4 °C (for sections to be labeled with anti-albumin). Endogenous peroxidase activity was quenched using 0.3% H2O2 for 10 min. Sections to be labeled for Pax7, MyoD and myogenin were immersed in antigen retrieval buffer (10 mM sodium citrate, 0.05% Tween-20, pH 6) at 95–100 °C for 40 min prior to the peroxidase quench step. Sections to be labeled with anti-CD11b, anti-CD206 or anti-CD68 were blocked at room temperature (RT) in bovine serum albumin (BSA) buffer (3% BSA, 0.05% Tween-20, 0.2% gelatin, 0.15 M NaCl, 0.05 M Tris-HCl; 30 min). Sections to be labeled with anti-LIF were blocked in 3% ovalbumin buffer (3% ovalbumin, 0.05% Tween-20, …). Positive signal was visualized in all slides with the peroxidase substrate, 3-amino-9-ethylcarbazole (AEC, Vector #SK-4200). The sections were washed in phosphate buffered saline (PBS) after each step, beginning with the fixation.
Stereology
The number of cells per volume of muscle was determined by measuring the total volume of each section, using a stereological, point-counting technique to determine section area and then multiplying that value by the section thickness (10 μm). The sections were washed in PBS following each step of their processing. Data were collected by identifying sites containing dMHC+ fibers and then counting the numbers of CD68+ macrophages in a standardized volume of 289,000 μm³ surrounding the dMHC+ fibers. The volume utilized was calculated using a point-counting technique to calculate the area of the field of view surrounding dMHC+ sites (28,900 μm²) and multiplying the area by the section thickness (10 μm). All sites containing dMHC+ fibers in each sample were used for data collection. An equivalent number of healthy sites of equal volume were used to quantify the numbers of CD68+ macrophages at sites without dMHC+ fibers in each sample. The data were expressed as the density of CD68+ cells per mm³ (CD68+ cells/mm³). Cell counts were performed on an Olympus BH2 fluorescence microscope. Confocal images were acquired on a Leica TCS-SP5 confocal microscope.

The relative quantity of LIF in LIF/mdx and WT/mdx muscle fibers was assayed by determining the mean fluorescence intensity (MFI) of muscle fibers following labeling with anti-LIF and a fluorescent secondary antibody. Sections were fixed in 2% PFA cooled to 4 °C, labeled, and coverslipped in mounting medium with DAPI. The sections were washed in PBS after each step. The MFI of 20 randomly selected muscle fibers in each sample was quantified using ImageJ (National Institutes of Health). Images used for MFI measurements were acquired on an Olympus BH2 fluorescence microscope. Confocal images were acquired on a Leica TCS-SP5 confocal microscope.
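To make the point-counting density computation concrete, a minimal Python sketch is given below. It is an illustrative reconstruction, not the authors' script; the grid constant `area_per_point_um2` and all function names are assumptions, since the grid spacing is not reported.

```python
# Stereological density: cells per mm^3 = counted cells /
# (section area estimated by point counting x section thickness).
SECTION_THICKNESS_UM = 10.0  # section thickness in micrometers (from the text)

def section_volume_mm3(points_hit: int, area_per_point_um2: float) -> float:
    """Estimate section volume from a point-counting grid.

    points_hit: number of grid points falling on muscle tissue.
    area_per_point_um2: tissue area represented by each grid point (assumed).
    """
    area_um2 = points_hit * area_per_point_um2
    volume_um3 = area_um2 * SECTION_THICKNESS_UM
    return volume_um3 * 1e-9  # 1 mm^3 = 1e9 um^3

def cell_density_per_mm3(cell_count: int, points_hit: int,
                         area_per_point_um2: float) -> float:
    return cell_count / section_volume_mm3(points_hit, area_per_point_um2)

# Example: 120 CD68+ cells in a section where 500 grid points hit tissue,
# each point representing 2500 um^2 of tissue.
print(cell_density_per_mm3(120, 500, 2500.0))
```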
Myofiber number quantification and CSA measurements
Cross-sections from the TA muscle mid-belly were stained with hematoxylin (Vector #H-3401) for 10 min. Muscle fiber CSA was quantified using ImageJ (National Institutes of Health). The average CSA of each sample was calculated from 500 randomly sampled fibers. The classification of large or small fibers was determined by setting thresholds three standard deviations from the mean CSA of the control group at each time-point, as previously described (76). Fibers were considered small or large in 1-month TAs if the CSA was less than 796 μm² or greater than 1785 μm², respectively; in 3-month TAs if the CSA was less than 2000 μm² or greater than 4414 μm², respectively; and in 12-month TAs if the CSA was less than 832 μm² or greater than 3453 μm², respectively. Fibers were considered normal if their CSA was between the thresholds for small and large fibers. Images used for CSA measurements were acquired on an Olympus BH2 microscope equipped with Nomarski optics.
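The thresholding scheme lends itself to a short sketch. The following Python code is an illustrative reconstruction, not the authors' pipeline; the synthetic control parameters are chosen so the resulting thresholds roughly reproduce the reported 1-month cutoffs (796 and 1785 μm²).

```python
import numpy as np

def classify_fiber_sizes(csa_control, csa_sample):
    """Label fibers small/normal/large using thresholds three standard
    deviations from the control-group mean CSA, as described above."""
    csa_control = np.asarray(csa_control, dtype=float)
    csa_sample = np.asarray(csa_sample, dtype=float)
    mu, sd = csa_control.mean(), csa_control.std(ddof=1)
    lo, hi = mu - 3 * sd, mu + 3 * sd
    labels = np.where(csa_sample < lo, "small",
                      np.where(csa_sample > hi, "large", "normal"))
    return labels, (lo, hi)

# Synthetic control values with mean ~1290 and SD ~165 um^2 give thresholds
# near the reported 1-month cutoffs.
rng = np.random.default_rng(0)
labels, thresholds = classify_fiber_sizes(rng.normal(1290, 165, 500),
                                          rng.normal(1200, 400, 500))
print(thresholds, dict(zip(*np.unique(labels, return_counts=True))))
```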
RNA isolation and quantitative PCR
Cell cultures were washed with Dulbecco's phosphate-buffered saline (DPBS, Sigma-Aldrich #5652) cooled to 4 °C and the RNA was isolated in TRIzol Reagent (Ambion #15596018) according to the manufacturer's protocol. The isolated RNA was further cleaned and concentrated using an RNA Clean and Concentrator-5 kit (Zymo Research #R1014). The RNA was quantified, reverse transcribed to cDNA, and used for qPCR as previously described (18,77). We followed established guidelines for experimental design, data normalization and data analysis (78–80). Primer sequences used for qPCR are listed in Table 1.

Table 1. Primer sequences used for qPCR.
Gene | Forward primer | Reverse primer
– | GCTCAGCCAGATGCAGTTAAC | CTCTCTCTTGAGCTTGGTGAC
Muscle macrophage isolation
Skeletal muscles from male and female, 1-month-old mdx mice were minced in 1.25 mg/ml collagenase types IA and IV (Sigma-Aldrich #C9891, #C5138) in Dulbecco's Modified Eagle medium (Sigma #D1152) and digested at 37 °C for 1 h with gentle trituration every 15 min. The digestate was diluted with DPBS, filtered through 70 μm mesh filters and the liberated cells were collected by centrifugation. The cells were resuspended in DPBS, overlaid on Histopaque-1077 (Sigma-Aldrich #1077-1) and centrifuged at 400 × g for 30 min at RT. Macrophages were collected from the DPBS-Histopaque interface and RNA was isolated from the cells as described above. QPCR was performed using tpt1 and hprt1 as housekeeping genes. Muscle macrophages were collected from five WT/mdx and three LIF/mdx mice.
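Normalization against the two housekeeping genes can be sketched as a standard delta-Ct calculation. This is a generic illustration under an assumed 100% amplification efficiency, not the authors' exact pipeline, which follows the cited guidelines (78–80).

```python
import numpy as np

def relative_expression(ct_target, ct_tpt1, ct_hprt1, efficiency=2.0):
    """Delta-Ct quantification against two reference genes (tpt1, hprt1).
    Averaging Ct values corresponds to a geometric mean on the linear scale."""
    ct_ref = (np.asarray(ct_tpt1, float) + np.asarray(ct_hprt1, float)) / 2.0
    delta_ct = np.asarray(ct_target, float) - ct_ref
    return efficiency ** (-delta_ct)

# Hypothetical LIF Ct values for three macrophage preparations.
print(relative_expression([22.1, 21.8, 22.5],
                          [18.0, 17.9, 18.2],
                          [19.1, 19.0, 19.3]))
```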
ELISA analysis of CCL2 in BMDM conditioned media
CCL2 secretion by BMDMs was measured as previously described (1). Briefly, BMDMs from wild-type, WT/mdx and LIF/mdx mice were generated as described above. BMCs from two male mice of each genotype were pooled to generate the BMDMs. On the sixth day of culture, the BMDMs were switched to DMEM containing 0.25% HI-FBS, 1% Pen/Strep and 10 ng/ml M-CSF, with or without 10 ng/ml recombinant mouse LIF (eBioscience #14-8521). After 24 h of stimulation, the conditioned media were collected, briefly centrifuged to remove particulates, and analyzed for BMDM-secreted CCL2 (DuoSet ELISA, R&D Systems #DY479) according to the manufacturer's instructions.
Cytotoxicity assay
Macrophage-mediated cytotoxicity was assessed using co-cultures of BMDMs and C2C12 muscle cells. BMDMs from one female mouse of each genotype (WT/mdx and LIF/mdx) were generated as described above with the following modifications. Freshly isolated BMCs were plated at 5 × 10⁶ cells per 10-cm, low-adherence dish (Eisco #CH0372C) in macrophage growth medium for 6 days. Adherent cells were activated to a cytotoxic, M1-biased phenotype using activation medium containing Th1 cytokines for 24 h. Unstimulated BMDMs were cultured in activation medium without Th1 cytokines. Following activation, the BMDMs were washed with DPBS and detached from the dishes using Cellstripper (Corning #25-056-Cl) for 10 min. The detached BMDMs were centrifuged at 526 × g for 5 min, resuspended in DPBS, and total cell numbers were calculated using a hemocytometer. BMDMs were resuspended in cytotoxicity assay medium (Hank's balanced salt solution (HBSS; Sigma-Aldrich #H1387), 0.25% HI-FBS, 400 μM L-arginine).
One day prior to co-culture, 12-well plates were prepared by adding 8-mm glass coverslips coated with 2% gelatin to each well. C2C12 muscle cells were plated in the 12-well plates at 5.94 × 10⁴ cells per well in growth medium (DMEM (Sigma-Aldrich #D1152), 10% FBS, 1% Pen/Strep) for 24 h to allow the cells to reach 70% confluency and attach to the glass coverslips. The muscle cells were then washed with DPBS and fluorescently labeled with CFDA-SE (Accurate Chemical #14456) to allow visual differentiation from unlabeled BMDMs. The muscle cells were incubated in labeling medium (HBSS, 0.1% BSA, 5 μM CFDA-SE) for 10 min at 37 °C in 5% CO₂. CFDA-SE is a cell membrane-permeable dye that does not cause cytotoxicity at the concentration used. Intracellular CFDA-SE is cleaved by endogenous esterases to form cell membrane-impermeable CFSE. CFSE is a fluorescent molecule (488 nm emission) that binds intracellular proteins, permanently labeling cells. The cells were washed with growth medium to remove residual CFDA-SE from each well. The cells were then incubated in growth medium for 5 min at 37 °C in 5% CO₂ to allow unreacted CFDA-SE to flow out of the cells and avoid labeling BMDMs. The labeled cells received a final wash using HBSS to remove residual growth medium.
The BMDMs were added to the muscle cultures at 1.3 × 10⁶ BMDMs/well in cytotoxicity assay medium. Following 6 h of co-culture at 37 °C in 5% CO₂, each co-culture well was washed with DPBS. GelRed (Biotium #41003-1) diluted in cytotoxicity assay medium (1:2500 dilution) was added to each well for 10 min at 37 °C in 5% CO₂ to label permeabilized muscle cells. GelRed is a cell membrane-impermeable, fluorescent dye (593 nm emission) that binds to nucleic acids. Following a final DPBS wash, the glass coverslips were removed from each well and mounted onto glass microscope slides using Fluoro-Gel (Electron Microscopy Sciences #17985-10).
Fluorescence microscopy with an Olympus BH2 microscope was used to collect cytotoxicity data based on the following criteria: BMDMs were CFSE-GelRed-, non-permeabilized muscle cells were CFSE+GelRed- and permeabilized muscle cells were CFSE+GelRed+. Data were expressed as the proportion of permeabilized muscle cells out of total C2C12 cells ([CFSE+GelRed+ cells]/[total CFSE+ cells]) on each coverslip. Three coverslips were included per group. The proportion of permeabilized muscle cells was quantified from 15 randomly chosen fields per coverslip. The average proportion of permeabilized muscle cells per coverslip was calculated and used as a single datum to calculate the mean and SEM for each group. The data were normalized to a muscle cell-only control group. Data were verified by repeating the experiment in triplicate.
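The scoring rule above reduces to simple proportions; a short Python sketch with hypothetical field counts could look like this (names and numbers are illustrative assumptions):

```python
import numpy as np

def coverslip_lysis_fraction(field_counts):
    """Average per-field proportion of permeabilized muscle cells on one
    coverslip. field_counts: (CFSE+GelRed+ count, total CFSE+ count) per field."""
    return float(np.mean([lysed / total for lysed, total in field_counts]))

def group_mean_sem(coverslip_values, control_mean):
    """One datum per coverslip, normalized to the muscle-cell-only control."""
    x = np.asarray(coverslip_values, float) / control_mean
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

# Example: one coverslip scored over three of its fields (synthetic counts).
print(coverslip_lysis_fraction([(12, 100), (9, 85), (15, 110)]))
```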
In a separate experiment, we verified the sensitivity of this assay by testing the influence of increasing numbers of Th1-stimulated BMDMs on muscle cell lysis. The experiment was repeated as described above. The muscle cell cultures were co-cultured with no BMDMs, low numbers of BMDMs (6.55 × 10⁵ cells), medium numbers of BMDMs (1.30 × 10⁶ cells) or high numbers of BMDMs (2.60 × 10⁶ cells). Because the wells containing high numbers of BMDMs prevented accurate counts of total muscle cells, data were expressed as GelRed+ cells/mm².
Chemotaxis assay
BMDMs were isolated from two male mice of each genotype (WT/mdx and LIF/mdx) using the following strategy. BMCs were aseptically flushed from the femurs and tibiae as described earlier. The BMCs were plated at 1.0 × 10⁷ cells per 6-cm, ultra-low attachment dish (Corning #3261) in macrophage growth medium containing Th1 or Th2 cytokines for 24 h at 37 °C in 5% CO₂. Unpolarized BMDMs were cultured in macrophage growth medium without additional cytokines. The cells were washed with DPBS and adherent cells were detached using Cellstripper as described previously. The cells were collected and BMDMs were purified using a Histopaque-1077 gradient (Sigma-Aldrich #10771) according to the manufacturer's instructions. The BMDMs were resuspended in chemotaxis medium (RPMI-1640, 1% Pen/Strep, 1% BSA).
We tested the chemotactic response of the BMDMs to CCL2 using a chemotaxis chamber (Neuro Probe #AP48) following the manufacturer's protocol. We used 10 ng/ml of CCL2 (R&D Systems #479-JE/CF) in chemotaxis medium to measure chemotaxis. Spontaneous migration was measured using chemotaxis medium without CCL2. Cells in the chemotaxis chamber were incubated for 2 h at 37 °C in 5% CO₂.
Three wells were included in each group. The numbers of migratory cells were quantified in five randomly chosen fields per well. The average number of migratory cells per field in each well was calculated and used as a single datum to calculate the mean and SEM for each group. Data were verified by repeating the experiment in triplicate. Data were collected using an Olympus BX50 microscope equipped with Nomarski optics.
Statistical analysis
All data are presented as mean ± SEM. Statistical significance was calculated using an unpaired Student's t-test, one-way analysis of variance (ANOVA) with Tukey's multiple comparisons test, or two-way ANOVA with Tukey's multiple comparisons test using Prism 7 (GraphPad). Differences with a P-value < 0.05 were considered statistically significant.
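For readers reproducing the analysis outside Prism, the same tests are available in standard Python libraries; the data below are synthetic stand-ins, not values from the paper.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Unpaired two-tailed t-test, as used for most pairwise genotype comparisons.
wt = rng.normal(1.0, 0.2, 5)   # synthetic WT/mdx values
tg = rng.normal(1.6, 0.2, 5)   # synthetic LIF/mdx values
print(stats.ttest_ind(wt, tg))

# Two-way ANOVA (genotype x stimulation) with Tukey's multiple comparisons,
# mirroring the analysis applied to the BMDM polarization data.
df = pd.DataFrame({
    "lif": rng.normal(1.0, 0.2, 18),
    "genotype": np.repeat(["WT", "Tg"], 9),
    "stim": np.tile(np.repeat(["Th0", "Th1", "Th2"], 3), 2),
})
model = ols("lif ~ C(genotype) * C(stim)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["lif"], df["genotype"] + "_" + df["stim"]))
```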
Supplementary Material
Supplementary Material is available at HMG online. | 2021-08-17T06:22:19.405Z | 2021-08-14T00:00:00.000 | {
"year": 2021,
"sha1": "146370f68c08c01e5a0c52d89d0d2054787d5638",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/hmg/advance-article-pdf/doi/10.1093/hmg/ddab230/40482379/ddab230.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b16e90b4ce5d35e08bac2af492bd96df39b7c78",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17127921 | pes2o/s2orc | v3-fos-license | MiR-222 in Cardiovascular Diseases: Physiology and Pathology
MicroRNAs (miRNAs and miRs) are endogenous 19–22 nucleotide, small noncoding RNAs with highly conserved and tissue-specific expression. They can negatively modulate target gene expression through decreasing transcription or posttranscriptionally inducing mRNA decay. Increasing evidence suggests that deregulated miRNAs play an important role in the genesis of cardiovascular diseases. Additionally, circulating miRNAs can serve as biomarkers for cardiovascular diseases. MiR-222 has been reported to play important roles in a variety of physiological and pathological processes in the heart. Here we review recent studies about the roles of miR-222 in cardiovascular diseases. MiR-222 may be a potential cardiovascular biomarker and a new therapeutic target in cardiovascular diseases.
Introduction
Cardiovascular disease is a predominant cause of morbidity and mortality in the world [1]. The number of patients suffering from cardiovascular disease is growing. The major categories of cardiovascular disease include diseases of the blood vessels and the myocardium. The contemporary view is that most cardiovascular diseases result from a complex dysregulation of genetic and environmental factors. Many molecular components participate in this process, including noncoding RNAs.
MicroRNAs (miRNAs and miRs) are endogenous 19-22 nucleotide, small noncoding RNAs with highly conserved and tissue-specific expression. miRNAs can modulate mRNA levels through decreasing transcription or posttranscriptionally induced mRNA decay [2]. Since the first discovery of miRNAs in 1993, they have been found in many species and participate in various physiological and pathological processes [3–6]. So far, over 1000 miRNAs have been identified, among which at least 200 are consistently expressed in the cardiovascular system [7]. miRNAs can regulate cardiomyocyte hypertrophy, senescence, apoptosis, autophagy, and metabolism. Changes in miRNAs have been found to participate in the genesis of many diseases, including cardiovascular diseases [8].
miR-222, first discovered in human umbilical vein endothelial cells (HUVECs), has been reported to play important roles in epithelial tumors, as evidenced by its frequently increased expression in these tumors [9]. Reduction of miR-222 could inhibit cell proliferation and induce mitochondria-mediated apoptosis through directly targeting the p53 upregulated modulator of apoptosis (PUMA) in breast cancer [10]. Its function in proliferation has also been confirmed in glioblastomas, thyroid papillary cancer, breast cancer, pancreatic cancer, hepatocellular carcinoma, and lung cancer [11–15]. On the other hand, miR-222 can play tumor-suppressive roles through the downregulation of c-kit in erythroleukemia cells. Apart from its role in cancer progression, miR-222 has been found to participate in many physiological and pathological processes in the cardiovascular system (Table 1). Here we review recent studies about the roles of miR-222 in cardiovascular diseases. MiR-222 may be a potential cardiovascular biomarker and a new therapeutic target in cardiovascular diseases.

Unlike pathological hypertrophy, which is related to myocardial structural disorder and cardiac dysfunction, physiological hypertrophy is characterized by normal cardiac structure and normal or improved cardiac function [28]. MiR-222 expression levels were found to be commonly increased in two distinct models of exercise, namely, voluntary wheel running and a ramp swimming exercise model, as well as during exercise rehabilitation after heart failure in humans. MiR-222 was able to promote cardiomyocyte hypertrophy, proliferation, and survival through directly targeting p27, HIPK-1, HIPK-2, and HMBOX1 [17].
MiR-222 Regulates Physiological Function in Cardiac Stem Cells.

The heart has limited regenerative capacity, which might be based on cardiomyocyte division and cardiac stem and progenitor cell activation [29]. Cardiac stem cells (CSCs) are self-renewing, clonogenic, and multipotent; they can differentiate into mature cardiomyocytes and improve the function and regeneration of the cardiovascular system [30]. CSCs can be activated by physical exercise training [18]. Interestingly, it has been found that the upregulation of miR-222 induced by coculturing human embryonic stem cell-derived cardiomyocytes (m/hESC-CMs) with endothelial cells could promote CSC transformation into cardiomyocytes [18].
MiR-222 Regulates Physiological Function in Human Umbilical Vein Endothelial Cells.

Human umbilical vein endothelial cells (HUVECs) have a unique ability to form capillary-like structures in response to certain stimuli. MiR-222 has been reported to modulate HUVEC angiogenic activity by targeting c-Kit [31,32].
Sex-Specific Expression of miR-222.
There are differences between men and women in the incidence of cardiovascular disease; studies show that males are more likely to suffer from heart attacks than females [33,34]. MiR-222 is encoded on the X chromosome in mouse, rat, and human and shows sex-specific expression. Studies have indicated that miR-222 is specifically decreased in mature female mouse hearts as compared with male mouse hearts [31,35].
MiR-222 Regulates Pathological Function
Unraveling the role of miR-222 in regulating cardiac pathological function may reveal new therapeutic targets for cardiovascular diseases (Figure 2).
Cardiac Ischemia Reperfusion Injury.
Myocardial ischemia reperfusion is a complex process involving numerous mechanisms, including reactive oxygen species (ROS) overload, inflammation, calcium overload, energy metabolism dysfunction, and mitochondrial permeability transition pore (mPTP) opening [36–38]. MiR-222 has been reported to protect against cardiac dysfunction after ischemic injury. MiR-222 can promote cardiomyocyte proliferation and reduce cardiomyocyte apoptosis through p27. In addition, miR-222-overexpressing mice have well-preserved cardiac function and reduced cardiac fibrosis when subjected to cardiac ischemia reperfusion [17].
Heart Failure.

Heart failure is the terminal outcome of the majority of cardiovascular diseases, and it seriously reduces quality of life. A significant inhibition of autophagy in Tg-miR-222 mice after heart failure was observed, acting through mTOR, a negative regulator of autophagy [19]. Inhibition of autophagy induced by miR-222 may cause protein accumulation and organelle injury, and even impairment of cardiac function. Angiogenesis has been proposed as a promising therapy for ischemic heart disease and heart failure. The miR-221/222 family seems to inhibit angiogenesis [21]. MiR-222 was significantly decreased in endothelial cells (ECs) cultured for 24 h with HDL from chronic heart failure (CHF) patients compared to healthy controls. The downregulation of miR-222 may be a compensatory mechanism of ECs to counteract cardiovascular adverse events [39].
Viral Myocarditis.
Cardiac inflammation is an important cause of dilated cardiomyopathy and heart failure; in young healthy adults, it can cause sudden death. Viral myocarditis is one such cardiac inflammatory disease. MiR-222 has been reported to orchestrate the antiviral and anti-inflammatory response through downregulation of IRF-2 [23]. Inhibition of miR-222 would increase the risk of cardiac injury. HIV-associated cardiomyopathy is another inflammatory disease [22,40]. MiR-222 can regulate translation of the cell adhesion molecule ICAM-1 directly or indirectly (through IFN-γ) to inhibit inflammation [22,41].
Congenital Heart Disease.

Tetralogy of Fallot (TOF) is one of the most common congenital heart malformations in children [42]. miR-222 was found to display a high expression level in right ventricular outflow tract (RVOT) tissues compared with controls. Cardiomyocyte proliferation and differentiation is a key event in heart development. Further functional analysis showed that overexpression of miR-222 promoted cell proliferation and regulated cell differentiation by inhibiting the expression of cardiomyocyte marker genes during cardiomyogenic differentiation [25]. In another congenital heart disease, ventricular septal defect, the decreased expression of miR-222 also indicated its important role in heart development [20].
MiR-222 Regulates Pathological Function in Blood Vessels
Atherosclerosis. During the genesis of atherosclerosis, various molecular and cellular components can render atherosclerotic plaques vulnerable and even cause rupture [43]. Many studies show that miRNAs also participate in this process [44]. MiR-222 derived from ECs may play its protective role by blocking intraplaque neovascularization and suppressing the inflammatory activation of ECs, without enhancing the proliferation of ECs [45,46].
Peripheral Arterial Disease. Smooth muscle cells (SMCs) constitute the medial layer of arteries and regulate vascular tone via their contractile apparatus [27]. MiR-222 was reported to take part in the development of neointima, promoting neointima formation after vascular injury by enhancing the proliferation of SMCs. Furthermore, in peripheral artery disease (PAD) caused by atherosclerosis or inflammation of the peripheral arteries, studies have shown that miR-222 inhibited vascular smooth muscle cell proliferation by targeting p27 [45] to stabilize the plaque [24] and promoted skeletal muscle regeneration after ischemia. Besides that, under the administration of superoxide dismutase-2 (SOD-2), miR-222 plays a protective role against peripheral artery disease by regulating p57 expression [26], rather than p27.
Conclusions
In conclusion, miR-222 controls many cardiac physiological functions, and its deregulation has been implicated in many cardiovascular diseases. Targeting miR-222 might be a promising therapeutic strategy for cardiovascular diseases. | 2018-04-03T01:35:09.457Z | 2017-01-03T00:00:00.000 | {
"year": 2017,
"sha1": "18a390088629e80ce67da8dc285c04da06f2da0e",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2017/4962426.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa65f433481600e5b9bdf6e88ccf4bad3a9d8329",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
237899520 | pes2o/s2orc | v3-fos-license | Recurrent Neural Network Based Short-Term Load Forecast with Spline Bases and Real-Time Adaptation
Short-term load forecast (STLF) plays an important role in power system operations. This paper proposes a spline bases-assisted Recurrent Neural Network (RNN) for STLF with a semi-parametric model being adopted to determine the suitable spline bases for constructing the RNN model. To reduce the exposure to real-time uncertainties, interpolation is achieved by an adapted mean adjustment and exponentially weighted moving average (EWMA) scheme for finer time interval forecast adjustment. To circumvent the effects of forecasted apparent temperature bias, the forecasted temperatures issued by the weather bureau are adjusted using the average of the forecast errors over the preceding 28 days. The proposed RNN model is trained using 15-min interval load data from the Taiwan Power Company (TPC) and has been used by system operators since 2019. Forecast results show that the spline bases-assisted RNN-STLF method accurately predicts the short-term variations in power demand over the studied time period. The proposed real-time short-term load calibration scheme can help accommodate unexpected changes in load patterns and shows great potential for real-time applications.
Introduction
Short Term Load Forecasting (STLF) can be used to obtain the most economical way to commit power generation sources while fulfilling policy requirements, ensuring reliability and meeting the security, environmental, and equipment constraints of the power system [1].
The daily load profile generally follows cyclic and seasonal patterns related to both the climate and human activities, and is intrinsically a univariate time series. Many general forecasting methods based on regression or time-series models can be used for load forecasting (e.g., a semi-parametric additive model [2] or an autoregressive integrated moving average (ARIMA) [3]). These methods assume, however, a linear relationship between the observed and future time series. This assumption makes them less effective for time series with significant nonlinear characteristics, such as those associated with energy demand. Chen et al. [4] considers a more complicated time series model with a functional trend curve to improve the forecast results.
Due to their nonlinear fitting ability, machine learning techniques have been applied to many forecasting problems. The Artificial Neural Network (ANN) [5] is a typical machine learning method. ANNs learn regularities and patterns automatically from past recorded data and produce generalized results with the ability to be self-adaptive. Feed-forward Multilayer Perceptron (MLP) [6,7] and Generalized Regression Neural Network (GRNN) models are examples that have been applied to load forecasting.
Methodology
The proposed RNN-based STLF procedure, with selected bases through a semiparametric model and a real-time load forecast adjustment scheme, is shown in Figure 1. In the first stage, forecasts of daily load patterns up to the next seven days are obtained using the apparent temperatures predicted by the Taiwan Central Weather Bureau (TCWB).
In the second stage, based on past forecast results, real-time adapted forecasting load sequences are generated through interpolation using real-time adaptation and an exponentially weighted moving average (EWMA). Figure 1 presents the data flow of the proposed RNN-based RNN_adp model.
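The exact adaptation rule is developed later in the paper; as a rough illustration of the EWMA idea only, the Python sketch below tracks an exponentially weighted forecast error over the intervals observed so far today and shifts the remaining finer-interval forecasts by it. The smoothing factor `alpha` and all names are assumptions.

```python
import numpy as np

def ewma_adjust(forecast, actuals_so_far, alpha=0.3):
    """Shift the remaining 15-min forecasts by the EWMA of today's errors."""
    err = 0.0
    for y_hat, y in zip(forecast, actuals_so_far):
        err = alpha * (y - y_hat) + (1 - alpha) * err
    k = len(actuals_so_far)
    return np.concatenate([forecast[:k], forecast[k:] + err])

# Example: 96-point daily forecast, first 40 intervals already observed.
rng = np.random.default_rng(0)
f = 25000 + 3000 * np.sin(np.linspace(0, 2 * np.pi, 96))
print(ewma_adjust(f, f[:40] + rng.normal(200, 50, 40))[40:43])
```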
Apparent Temperature
The system net-loads are greatly affected by external factors such as temperature, humidity, wind speed and seasonal events that change over time. The apparent temperature index [21], equivalent to the temperature felt, is used to evaluate its effect upon the load. The apparent temperature T_a is defined as:

T_a = T_c + 0.33e − 0.70V − 4.00, (1)

where T_c is the temperature in Celsius, e = 6.105 × (RH/100) × exp(17.27 T_c / (237.7 + T_c)) is the vapor pressure in hPa, V is the wind speed in m/s, and RH is the relative humidity in percent. The apparent temperature for the next 48 h (with a 3 h resolution) is provided by the TCWB. Throughout this work, references to "temperature" refer to the apparent temperature.
Different regions have different weather patterns. The total system load is a combination of the loads from the north, central, south and east regions of Taiwan (whose average load proportions are 38%, 28%, 33% and 1%, respectively). Since the goal is to forecast the total system load, the temperatures of the four regions are merged into one value by taking the weighted average of their temperatures with weights equal to their load proportions; namely:

T_a = Σ_{i=1}^{4} ℓ_i T_{a,i}, (2)

where i = 1, ..., 4 corresponds to the north, central, south and east regions respectively, T_{a,i} is the temperature in the ith region and ℓ_i is the proportion of regional load in the system total energy demand.
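Equations (1) and (2) translate directly into code. The sketch below assumes the reconstructed form of Eq. (1) shown above; the regional inputs are illustrative.

```python
import numpy as np

def apparent_temperature(t_c, rh, v):
    """Apparent temperature per Eq. (1), with vapor pressure e (hPa) computed
    from relative humidity RH (%) and air temperature T_c (Celsius); V in m/s."""
    e = 6.105 * (rh / 100.0) * np.exp(17.27 * t_c / (237.7 + t_c))
    return t_c + 0.33 * e - 0.70 * v - 4.00

# Regional merge per Eq. (2): load-proportion-weighted average of the four
# regions (north, central, south, east), with example regional inputs.
weights = np.array([0.38, 0.28, 0.33, 0.01])
t_a = apparent_temperature(np.array([30.0, 31.0, 32.0, 28.0]),
                           np.array([70.0, 75.0, 80.0, 65.0]),
                           np.array([3.0, 2.0, 2.5, 4.0]))
print(float(weights @ t_a))
```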
Spline Basis Functions
To capture the general behavior of the daily load patterns, we consider the class of multi-resolution basis functions proposed by Tzeng and Huang [19], which are ordered in the direction of increasing resolution detail, with the number of bases, K, chosen to be large enough to represent the general 24-h patterns. On the other hand, because the daily load patterns may change rapidly between the peak and off-peak periods, we also include the cubic B-spline basis functions to accommodate load patterns that change substantially within relatively short periods of time. Two sets of basis functions are used: (1) the multi-resolution bases {f_1, ..., f_n} defined on n control points {s_1, ..., s_n}, and (2) the B-spline bases of order d, B_{i,d}, i = 1, ..., n, with knots at {s_1, ..., s_n}. Details about the spline basis functions can be found in [19].
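A cubic B-spline design matrix with knots at the control points can be built with SciPy (1.8+ for `design_matrix`). The open (repeated) boundary-knot convention below is a standard choice that the paper does not spell out, and the multi-resolution bases of [19] are not reproduced here.

```python
import numpy as np
from scipy.interpolate import BSpline

def cubic_bspline_basis(x, knots):
    """Evaluate cubic (order-4) B-spline bases with knots at the control points;
    boundary knots are repeated to form an open knot vector."""
    k = 3
    t = np.r_[[knots[0]] * k, knots, [knots[-1]] * k]
    return BSpline.design_matrix(x, t, k).toarray()

s = np.arange(1.0, 97.0)                      # 96 control points (15-min grid)
B = cubic_bspline_basis(np.linspace(1.0, 96.0, 96), s)
print(B.shape)                                # (96, 98) under this convention
```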
Semi-Parametric Model
A semi-parametric (SPM) model is adopted for STLF under the framework of additive models [22], using a suitable combination of the two aforementioned sets of spline basis functions together with a nonlinear function of the temperature. Let the sequence of daily load random vectors at time t be y_t = (y(t, 1), ..., y(t, n))′, t = 1, ..., T, with y(t, s) denoting the load at time t and local time grid location (control point) s, s = 1, ..., n.
The model for STLF is assumed to have the following form:

log y(t,s) = µ(t,s) + ε(t,s),

where µ(t,s) represents the mean function of the logarithm of the daily load y(t,s), and ε(t,s) is the corresponding random error at time t = 1, ..., T and period s = 1, ..., n, assuming that ε_t = (ε(t,1), ..., ε(t,n))' ∼ N(0, Σ_n(t)), with Σ_n(t) being the covariance matrix of the random error vector ε_t at time t. In the case of an independent error at time t and assuming Σ_n(t) = σ²I_n, the time series error sequence ε_t, t = 1, ..., T, may yield a different covariance matrix estimate. Specific patterns of the response variable (i.e., the system net load) of the model for forecasts are described below, under the following explanatory variables.
1. It is observed that there are patterns depicted by the intra-daily, intra-weekly, peak and off-peak effects to be modeled by the multi-resolution and cubic B-spline bases.
2. It is clear that the temperature significantly affects the load pattern. The weighted average of the temperature at different periods each day, and similarly the daily highest and lowest and the weighted average of temperatures in the different regions, are included as important predictors.
3. The interaction effects of the period with the day type within each week are also crucial.
To accommodate these three effects, and also the temperature effect, let the mean function at time t and point s be given by:

µ(t,s) = D(t,s) + W(t,s) + DW(t,s) + T(t,s),

with: (1) intra-daily effect D(t,s); (2) intra-weekly effect W(t,s); (3) interaction effect among the intra-daily and intra-weekly DW(t,s); (4) apparent temperature effect T(t,s).

To model the intra-daily effect, we use a combination of the first 24 multi-resolution bases {f_1, ..., f_24} for 96 control points with s_i = i, i = 1, ..., 96, and 96 cubic B-spline bases B^d_{j,4}, j = 1, ..., 96, with knots at s_j, j = 1, ..., 96. Similarly, the intra-weekly effect is modeled by 7 cubic B-spline bases B^w_k with knots at w_k, k = 1, ..., 7.
The interaction effects among the intra-daily and intra-weekly components are modeled by the products of the corresponding intra-daily and intra-weekly basis functions, namely products of the form f_j(s) · B^w_k(w) and B^d_{j,4}(s) · B^w_k(w).
Model Bases Selection
Many standard estimators can be improved by shrinkage methods, such as ridge regression [23] and Lasso regression. This study adopts Lasso regression to obtain sparse solutions for the model bases selection.

In a Lasso regression, the value of the tuning parameter λ controls both the size and the number of nonzero coefficients. Cross-validation is a resampling technique which can find a parameter value that ensures a proper balance between bias and variance. In this case, cross-validation considers the best tuning parameter value to be the one that minimizes the estimated test error rate of the forecasting results. More details about the Lasso estimate and adaptive Lasso can be found in [17,18].
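As an illustration of this selection step, the following minimal Python sketch selects a tuning parameter by cross-validation; it uses scikit-learn's LassoCV (an assumption for illustration, since the paper does not state which Lasso implementation was used), a hypothetical design matrix X of basis-function values and a load vector y.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))       # hypothetical basis-function design matrix
    beta = np.zeros(50)
    beta[:5] = 1.0                       # only a few bases truly matter
    y = X @ beta + 0.1 * rng.normal(size=200)

    # 5-fold cross-validation over a grid of tuning parameter values.
    model = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(model.coef_)  # indices of the retained bases
    print(model.alpha_, selected)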
Temperature Forecast Adjustment
Currently, TCWB provides three-hour temperature forecasts for the present day (D-day), the next day (D + 1), and the maximum and minimum temperature forecasts for the days (D + 2) to (D + 7).
1. Calibration of temperature forecasts
To calibrate the day-ahead temperature forecasts at the eight time points s_1, ..., s_8 provided by TCWB before D-day (the first day that the temperature forecasts are to be calibrated), the errors between the historical apparent temperatures T_a(t,s) and the recorded day-ahead temperature forecasts of the 28 days before D-day, T̂_a(t,s), t = D−1, ..., D−28, s = s_1, ..., s_8, are used for the calibration. Define the historical and the forecasted mean temperatures of the 28 days before D-day as

T̄(D,s) = (1/28) Σ_{t=D−28}^{D−1} T_a(t,s)  and  T̄̂(D,s) = (1/28) Σ_{t=D−28}^{D−1} T̂_a(t,s),

respectively, where s = s_1, ..., s_8. For D-day, let the error between the mean temperatures and the calibrated temperature forecasts be, respectively,

e(D,s) = T̄(D,s) − T̄̂(D,s)  and  T̃_a(D,s) = T̂_a(D,s) + e(D,s).

The calibrations at (D + 1)-day's eight temperature forecast time points can be found similarly. Note that samples from historical days with unusual temperature patterns are treated as outliers and thus deleted beforehand.
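Under the assumption that the reconstruction above reflects the intended procedure (a 28-day mean-error bias correction per time point), a minimal numpy sketch might look as follows.

    import numpy as np

    # hist: actual apparent temperatures; fcst: recorded day-ahead forecasts,
    # both of shape (28 days, 8 time points), for the 28 days before D-day.
    def calibrate(hist, fcst, fcst_dday):
        bias = hist.mean(axis=0) - fcst.mean(axis=0)  # e(D, s) per time point
        return fcst_dday + bias                       # calibrated D-day forecast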
2. Refined temperature forecasts

As we need to make load forecasts at 15-min intervals, we first interpolate the provided three-hour forecast data into a 15-min resolution, which will lead to smaller biases versus the real 15-min interval temperatures. We use the well-established Cubic Hermite interpolation method [20] and present that interpolation formula below.
Let the sequence {u_i}_{i=1}^{k} be a partition u_1 < u_2 < ... < u_k of the interval [u_1, u_k], and let {T_i}, T_i = h(u_i), be the corresponding data points. The local grid spacing is Δu_{i+0.5} = u_{i+1} − u_i, and the slope of the piecewise linear interpolant between the data points is δ_{i+0.5} = (T_{i+1} − T_i)/Δu_{i+0.5}. The cubic Hermite interpolant polynomial defined for u_i < x < u_{i+1} is

h(x) = T_i H_1(x) + T_{i+1} H_2(x) + d_i H_3(x) + d_{i+1} H_4(x),

where H_1, ..., H_4 are the standard cubic Hermite basis functions on [u_i, u_{i+1}] and d_i, d_{i+1} are the endpoint derivative estimates. Then the interpolant method produces as its output a sequence of temperature forecasts at 15-min intervals.
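As a sketch of this refinement step, the following Python code interpolates a day of three-hour forecasts to a 15-min grid; it uses scipy's PCHIP interpolator (a shape-preserving cubic Hermite variant) as a stand-in, since the paper does not specify how the endpoint derivatives d_i were chosen, and the temperature values are hypothetical.

    import numpy as np
    from scipy.interpolate import PchipInterpolator

    hours = np.arange(0, 24, 3)  # the eight 3-hour forecast time points
    temps = np.array([22.0, 21.5, 23.0, 27.5, 30.0, 29.0, 26.0, 24.0])  # placeholder values

    interp = PchipInterpolator(hours, temps)   # piecewise cubic Hermite interpolant
    grid_15min = np.arange(0, 21.01, 0.25)     # 15-min resolution within the range
    temps_15min = interp(grid_15min)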
3. Transformed temperature forecasts

It is noted that the effect of temperature on the load is nonlinear, as shown in Figure 2, and upon examination it is observed that the load is approximately linearly related to the logistic sigmoid transformation of the temperature through

g(T) = 1 / (1 + exp(−(T − c_0)/c_1)),

where c_0 = 30 represents the location of the reflection from concave upward to concave downward, and c_1 = 0.85 represents the scale parameter controlling slope changes.
Recurrent Neural Network with Selected Bases
An RNN introduces loops in the network and allows internal connections among hidden units to enable exploration of the temporal relationships among the data [24]. The RNN structure with selected model bases taken from the resulting SPM described above is presented in the following.
1. General Structure of RNN

Each of the RNN layers uses a loop to iterate over the time steps of the sequence. An RNN with a single hidden layer is illustrated below.

The input training data x(t), t = 1, ..., T, are given by the adaptive lasso selected effects. The mapping to the output o(t) can be represented as

o(t) = V h(t) + b_0,

where o(0) = 0, t = 1, ..., T, and the hidden state h(t), t = 1, ..., T, can be obtained through the following equation:

h(t) = φ(u x(t) + w h(t−1)),

where φ are the activation functions, u is the input weight for x(t) and w is the weight for h(t−1), both being the same for all time points, V is the output weight matrix, and b_0 is a parameter in the model representing the bias of the hidden layer and the output layer. With suitable choices for the parameters, such as the number of layers k, number of neurons in each layer m, and time steps of a sequence T, the RNN is expected to perform better than a more general model structure considering time effects in the neural network framework for STLF problems. We build an RNN model with k multilayer perceptrons in Python using the tensorflow library.
2. Configuration Architecture

The RNN training process is heavily influenced by the choice of hyperparameters: sequence size, number of hidden layers and number of nodes per hidden layer. Efforts were made to search the hyperparameter space to test the parameter combinations most suitable for the TPC system. The experiment was conducted using a standard RNN network to provide a best set of hyperparameters. The results shown in Table 1 indicate that the best configuration is 14 units, 3 layers, and 4 time steps.
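A minimal TensorFlow/Keras sketch of such a configuration (three stacked SimpleRNN layers of 14 units over sequences of 4 time steps, per Table 1) might look as follows; the feature count n_features, the layer type, the optimizer and the training call are assumptions for illustration, as the paper does not report them.

    import tensorflow as tf

    n_features = 32  # hypothetical number of selected basis/temperature features
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4, n_features)),              # 4 time steps per sequence
        tf.keras.layers.SimpleRNN(14, return_sequences=True),  # hidden layer 1
        tf.keras.layers.SimpleRNN(14, return_sequences=True),  # hidden layer 2
        tf.keras.layers.SimpleRNN(14),                         # hidden layer 3
        tf.keras.layers.Dense(1),                              # load forecast output
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(X_train, y_train, epochs=50), with X_train of shape (N, 4, n_features)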
Real-Time Adapted Forecasting
The increasing use of renewable power sources has produced an increase in intermittency and a ramping in the net load profile that requires additional control efforts to maintain frequency quality. For a complete treatment for STLF, we also provide a real-time adaptive STLF procedure to help system operators with a detailed view into the real-time power system condition, so as to aid in their decision making. A quasi-real time RNN-based forecasting model (RNN adp ) with the objective of providing short-term load forecasts is described below.
Load Forecasts Interpolation
In the first stage, the STLF results are interpolated to be a sequence with values in every 5-min period. The Cubic Hermite interpolation method is used to produce the load forecasts at 5-min intervals. This real time load data at 5-min intervals is then used in the second step to adaptively adjust the forecasting results.
Adaptive Load Forecasting
The correction value is the average difference between the actual and the forecasted load values in the past 15-min interval. In other words, it is the average of the three differences of the actual and forecasted load values calculated at 5-min intervals.
Exponentially Weighted Average
Finally, we use the EWMA to smooth the correction result. The exponential smoothing is given by the formula

s_i = α y_i + (1 − α) s_{i−1},

where y_i is the ith corrected value, s_i is the smoothed value, and 0 < α ≤ 1 is the smoothing factor.
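A compact Python sketch of the two-step adjustment (average 5-min error over the past 15 min, then EWMA smoothing) is given below; the smoothing factor alpha is a placeholder, as its value is not reported in the text.

    import numpy as np

    def adaptive_correction(actual_5min, forecast_5min, alpha=0.3, s_prev=0.0):
        """actual_5min, forecast_5min: the three most recent 5-min load values."""
        # Average of the three 5-min differences over the past 15-min interval.
        y = np.mean(np.asarray(actual_5min) - np.asarray(forecast_5min))
        # EWMA smoothing of the correction value.
        return alpha * y + (1 - alpha) * s_prev

    # Example: add the smoothed correction to the upcoming forecast values.
    # next_forecast_adjusted = next_forecast + adaptive_correction(a, f)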
Test Results
TPC system load data from January 2012 to December 2019 are used for testing the proposed method. The load data used here is the net load (the power served by all generators minus the TPC's pumped storage load). The days in each year are divided into two classes: general days and special days. Special days refer to exceptional days that have their own load patterns (e.g., holidays, days experiencing a typhoon, etc.). General days refer to either typical working days or weekends. The main goal of this study is to provide an STLF method for general days.
Training Data Selection
For the future day loads to be predicted, the training samples are chosen from historical days with a similar load pattern. The input-target pairs are the historical temperature (predicted and actual) and load data recorded during the corresponding days in the previous 28 days (4 weeks), together with the 6 weeks around the same period of the previous year, and the predicted temperatures of the future days from TCWB. To select a subset of model bases as predictors for estimating the future day loads, a training set that has 70 daily loads and temperature data corresponding to the time period shown in Figure 4 is used. The forecasting process begins every morning at 9:00 a.m. to forecast demand up to the next 7 days with 15-min resolution. The test results obtained when applying the method to forecast the load in years 2018-2019 are presented. STLF performance indices, such as the mean absolute error (MAE), root mean square error (RMSE), absolute percentage error (APE) and mean absolute percentage error (MAPE), are used to evaluate the forecasting accuracy of the model used [25].
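For reference, the three headline indices can be computed as in the short numpy sketch below; APE is the per-point absolute percentage error, of which MAPE is the mean.

    import numpy as np

    def indices(actual, forecast):
        actual = np.asarray(actual, float)
        forecast = np.asarray(forecast, float)
        err = forecast - actual
        mae = np.mean(np.abs(err))
        rmse = np.sqrt(np.mean(err ** 2))
        ape = 100 * np.abs(err) / actual   # absolute percentage error per point
        mape = np.mean(ape)
        return mae, rmse, mape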
Comparison of Test Results Obtained from the Semi-Parametric Model and the RNN Model
The MAE, RMSE and MAPE of the accuracies of the load forecasts for every month from 2018-2019 are provided. The forecasting accuracies for the two models based on the historical temperatures serve as a baseline for comparison. In Table 2 the corresponding (D + 1)-day monthly MAE, RMSE and MAPE of the two models with historical temperatures are given in detail. It can be seen that with the actual temperature, both models have good accuracies on the (D + 1)-day forecasts, and the performance of the RNN model is especially outstanding in most of the months from 2018-2019, with annual average MAPEs of 2.03 and 1.70 for the SPM and RNN models, respectively.
Forecasts with Temperature Calibration
The actual temperature data indicate that temperature forecasting biases increase rapidly with large values (around 3 degrees). Figure 5 presents the MAPE time plots of two STLF models using original and adjusted temperature forecasts as inputs. From Figure 5 and Table 3, it can be seen that, after calibrating the forecasted temperature through bias correction based on the previous 28 days' temperature forecasting biases, the forecasting accuracies are significantly improved.
Figure 5. The mean absolute percentage errors (MAPEs) of the semi-parametric (SPM) and RNN models with the historical, forecasted and calibrated temperatures for the (D + 1)-day daily load patterns.
Real-Time Forecast Performance
The monthly performance comparisons for 2018 on the MAPE of the real-time D-day forecast for the next 6 h are given in Table 4. As the table shows, the annual averages of the MAPE for the RNN-based RNN adp model are below 1%. Figure 6 shows the performance of the model for a typical day of the studied period, June 22. As can be seen, the real time load pattern and the forecasting load pattern of the RNN model have similar trend patterns, but the forecasted curve is much lower. With the adjusted model, RNN adp , however, the forecast accuracy improves significantly, obtaining an average error of 0.567% across the entire day.
Comparison of ANN, MIX, SPM and RNN Model Performance
In this sub-section, the performance of different STLF methods is compared, including a two-stage Artificial Neural Network (ANN) model, an STLF model developed previously by TPC with special attention given to the adjustments of the peak and nadir load forecasts [26], and a mixed model (MIX) formed as a weighted average of the ANN and our basic RNN without Lasso variable selection or temperature calibration, where the weights are inversely proportional to the MAPEs of the previous day.
The performances of the next (D + 1)-day monthly MAPEs for these four forecasting models are presented in Table 5, for the year 2018. As the table shows, each model has its own advantages and disadvantages in daily load profile and max/min load forecasts. For example, the RNN model has the best overall yearly average performance for monthly MAPEs: 2.34 for the daily loads and 2.23 for daily peak loads. The SPM model, however, performs the best for nadir loads, with an average monthly MAPE of 2.18. The RNN performance is the best in the spring seasons and fairly good in the winter, when daily load patterns are stable. The SPM performance is the best in the summer season when the weather varies more. The MIX model has its best accuracy in the winter season. The ANN does very well in February and August, when there is the most uncertainty in the load pattern. Both the ANN and MIX models have large biases in June, however, especially for days adjacent to special days.
In more closely examining the daily MAPEs of the four models, we find that there are only 2 days for which all four models have MAPEs greater than 4, thus failing to catch the real load patterns: the day before Chinese New Year's Eve in February and the "nine in one" election day in November. Figure 7 shows the actual and (D + 1)-day forecasts for the four models made on the previous D-day of these two days. All four models have similar forecasts with large biases to the real load on these two days. This indicates that these two days should be considered as special days in the future, so as to avoid large biases being included in the training samples for future forecasts, thereby helping to improve the biases, particularly in February, for the SPM and RNN models. Figure 8 presents the boxplots of daily MAPEs in 2018, after deleting the two days mentioned above. The performances of SPM and RNN are shown to be generally more robust, with fewer extremely large MAPEs. One of those days is 13 June 2018, where the MIX and ANN have similar daily forecasts, while the SPM and RNN perform reasonably well. The other day is 11 June 2018, where the MIX model is slightly better than the ANN, with smaller biases.
Another indication that the four models complement each other well is that only about 5% of forecasting days have MAPEs greater than 2.5 for all four models. This 5% compares to the 18% of days where the MAPEs of both ANN and MIX are greater than 2.5, and the 14% of days where this is true for both SPM and RNN, a reduction of more than 9% in these cases. Among those days where all four models had large errors, about two thirds had explainable unexpected circumstances that caused the forecasting errors, such as TPC executing electric demand bidding, extreme weather conditions, or special events.
A new model can, in fact, be created by using an optimal weighted average of the ANN, SPM and RNN forecasts, as such a hybrid model might further reduce the forecasting errors with proper time-varying weightings. How to choose appropriate optimal weights is a topic worthy of further investigation.
Performances of the (D + 2) to (D + 7) Day Forecasting Accuracies of the RNN Models

Figure 10 presents the monthly averages of the (D + 2)-day to (D + 7)-day forecasting MAPEs of the RNN models with forecasting temperatures from 2018-2019. Note that the seasonal patterns appear in both years, and, as they are the (D + 2)-day to (D + 7)-day forecasts, the higher MAPEs due to longer-range forecasts are to be expected.
Conclusions
An STLF method using a semi-parametric model and an RNN with selected bases is presented. This tool has been adopted by the TPC Operation Department for daily operation purposes since 2019. Due to the weather characteristics in Taiwan, test results indicate that STLF is especially challenging at season transitions: from spring to summer and from summer to autumn. The main advantage of the RNN-based STLF proposed here is that, with the calibrated forecasted temperatures and features extracted from the load series through ensembles of B-spline and multi-resolution bases after a statistical variable selection approach, it can avoid overfitting problems in the deep learning stage and adapt to these changing weather patterns earlier than other methods. Noticeable improvements of the MAPEs in 2019 for the RNN model with calibrated temperatures, as compared with other methods, are observed. However, the intra-day load forecasts are sometimes far off due to unexpected meteorologic factors. The real-time adaptive procedure provides load forecasts for the next hour at 5-min intervals and helps the system operator to adjust the ancillary service requirement to meet electricity demand changes. In [15,16], both have used LSTM as the deep learning methodology; we have also tried an LSTM model, where results for the STLF show that the improvements in forecast accuracy are limited, while the model is more complicated and takes much more time to compute, which is not feasible for daily use in practice. In [16], a similar-days selection procedure is adopted, which is worthy of more study to see the advantages and shortcomings of this approach for our dataset with fast-changing weather patterns. Besides the techniques presented, an optimal model averaging of various load forecasting models is a topic for further investigation, as is how to extend the training samples to special day load forecasts.
Conflicts of Interest:
The authors declare no conflict of interest.
Effect of Speed and Surface Type on Individual Rein and Combined Left–Right Circle Movement Asymmetry in Horses on the Lunge
Differences in movement asymmetry between surfaces and with increasing speed increase the complexity of incorporating gait analysis measurements from lunging into clinical decision making. This observational study sets out to quantify by means of quantitative gait analysis the influence of surface and speed on individual-rein movement asymmetry measurements and their averages across reins (average-rein measurements). Head, withers, and pelvic movement asymmetry was quantified in 27 horses, identified previously as presenting with considerable movement asymmetries on the straight, during trot in hand and on the lunge on two surfaces at two speeds. Mixed linear models (p < 0.05) with horse as the random factor and surface and speed category (and direction) as fixed factors analyzed the effects on 11 individual-rein and average-rein asymmetry measures. Limits of agreement quantified differences between individual-rein and average-rein measurements. A higher number of individual-rein asymmetry variables—particularly when the limb that contributed to movement asymmetry on the straight was on the inside of the circle—were affected by speed (nine variables, all p ≤ 0.047) and surface (three variables, all p ≤ 0.037) compared with average-rein asymmetry variables (two for speed, all p ≤ 0.003; two for surface, all p ≤ 0.046). Six variables were significantly different between straight-line and average-rein assessments (all p ≤ 0.031), and asymmetry values were smaller for average-rein assessments. Limits of agreement bias varied between +0.4 and +4.0 mm with standard deviations between 3.2 and 12.9 mm. Fewer average-rein variables were affected by speed highlighting the benefit of comparing left and right rein measurements. Only one asymmetry variable showed a surface difference for individual-rein and average-rein data, emphasizing the benefit of assessing surface differences on each rein individually. Variability in straight-line vs. average-rein measurements across horses and exercise conditions highlight the potential for average-rein measurements during the diagnostic process; further studies after diagnostic analgesia are needed.
INTRODUCTION
In addition to presenting a horse in hand, on the straight, in the symmetrical gait of trot (1), horse movement is commonly assessed on the lunge during the equine lameness or poor performance examination (2,3). The need to exert centripetal force toward the center of the circle leads to the horse leaning into the circle and the limb on the inside of the circle having a more acute angle relative to the ground compared with the limb on the outside of the circle (4). In addition, body lean angle increases with higher speed and smaller circle radius (5). This behavior can be predicted from the increasing centripetal force (F_centri = mv²/r; m = mass, v = forward velocity, r = circle radius) and the assumption that the horse aims at minimizing extrasagittal joint torques.
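To make the speed and radius dependence concrete, a minimal sketch (not taken from the cited studies) of the lean angle implied by balancing centripetal and gravitational acceleration is shown below; it assumes the idealized relation θ = arctan(v²/(g·r)).

    import math

    def lean_angle_deg(v, r, g=9.81):
        """Idealized body lean angle (degrees) for speed v (m/s) on a circle of radius r (m)."""
        return math.degrees(math.atan(v ** 2 / (g * r)))

    # e.g., trotting at 3.5 m/s on a 15-m-diameter (7.5 m radius) lunge circle:
    print(round(lean_angle_deg(3.5, 7.5), 1))  # roughly 9-10 degrees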
There is also an association between movement asymmetry measures (commonly used in the context of quantitative assessment of lameness) and body lean angle: increasing movement asymmetry is measured with increasing body lean (5). In non-lame or mildly lame horses, visual assessment appears to be affected little by the measurable increase in movement asymmetry on the circle (6). However, referred or compensatory movements (7) may contribute to confusion during visual observation of horses on the lunge (8).
The increase in movement asymmetry on the lunge as a function of body lean (5) currently presents a challenge for integrating quantitative movement asymmetry measurements into the lameness examination. The recently shown effect that, after successful diagnostic analgesia of limb-related lameness, body lean angle becomes more similar between lunging directions (9) highlights the potential of investigating methods that combine the measurements obtained on left and right rein into one combined outcome parameter.
Here, we explore the influence of speed and surface on a movement asymmetry outcome parameter combining left and right rein measurement into one value: the average of the individual rein values. It is hypothesized that on the lunge, in horses with preexisting movement asymmetries measured during in hand trot on the straight: (1) Averages of movement asymmetry measures across reins ("average-rein measurements") are less affected by speed than the individual rein measurements since at similar speeds between reins, opposite directions of body lean will result in circle-induced movement asymmetries of similar magnitude, which will cancel out in the average value. (2) Differences between surfaces will be consistently apparent in both the individual rein asymmetry values as well as in the combined average-rein measurements. (3) Average-rein measurements will be more exacerbated in comparison with straight-line asymmetry. Due to the increased limb angulation of, in particular the inside limb, asymmetry values will increase considerably on one rein. Due to limb angulation being more similar to straight-line locomotion for the outside limb, asymmetry measurements will be more similar to the straight-line measurement on the opposite rein. Hence, the average-rein movement asymmetry would be expected to exceed the corresponding straightline measurement.
Horses
Data collection was part of a study aiming to assess the effect of oral administration of non-steroidal anti-inflammatory drugs (NSAID) on upper body movement asymmetries (10) approved by the Ethical Committee for Animal Experiments, Uppsala, Sweden, application number C 48/13 and C 92/15. Informed written consent was obtained from all horse owners. For that study, the effect of meloxicam administration was assessed in N = 66 horses (out of a total of 140 horses initially screened) with preexisting movement asymmetries [>6 mm for head movement asymmetry; >3 mm for pelvic movement asymmetry; (11)] in a placebo-controlled, crossover study.
In the present study, the effect of NSAID administration was not assessed. However, a subset of N = 27 horses [out of the 66 with preexisting movement asymmetries, i.e., at least one asymmetry parameter had been found outside "normal limits"; see (10) for more details] for which successful data collection had been achieved during in-hand exercise as well as on the lunge with an additional five-sensor inertial measurement unit (IMU) gait analysis system was included. Horses were only included if at least one successful in-hand assessment on the straight and one successful lunge assessment (on both reins) had been performed. Mean absolute asymmetry values (characterizing the amount of asymmetry independent of its direction) varied between 10 and 18 mm for head movement, between 5 and 10 mm for withers movement, and between 5 and 8 mm for pelvic movement (please refer to the Data Processing section and Table 1 for details about the measured parameters). Horse details including age, gender, breed, height, body mass, discipline, and level of the N = 27 horses used here can be found in Supplementary Table 1.
Gait Analysis System
Each horse was equipped with a wireless five-sensor IMU gait analysis system consisting of five MTw wireless IMU sensors (first generation, Xsens, Enschede, The Netherlands, tri-axial accelerometer ±16× gravity, tri-axial gyroscope ±2,000 deg/s, tri-axial magnetometer ±1.9 mGauss). Sensors were attached in custom-made neoprene pouches over poll (highest point of head in center between ears), withers (over thoracic vertebrae 6), sacrum (in the center between the tubera sacrale), and each tuber coxae (cranio-dorsal aspect). All sensors were synchronized to a wireless transceiver station (Awinda, Xsens) and transmitted synchronized orientation data (Euler angles) and calibrated accelerations at a rate of 100 samples per second to a nearby laptop computer running MTManager (Xsens). Data collection was manually started and stopped by the operator aiming at collection of a minimum of 25-30 strides of steady-state locomotion per assessment condition.
Assessment Conditions
Each horse was assessed during trot in hand and on the lunge (15-m circle) on both left and right rein. Horses were assessed on two surfaces: a "hard" (gravel based) surface and a "soft" arena surface, at two different trotting speeds: "slow" and "fast." This resulted in a maximum of 12 assessments per horse for a complete set of conditions (three directions × two surfaces × two speeds).
Data Processing
From Sensor Data to Movement Asymmetry

IMU data were processed following published protocols (12,13). Tri-axial sensor acceleration was rotated into a right-handed, horse- and gravity-based reference frame (x: positive forward in the direction of travel; z: positive upward aligned with gravity; y: perpendicular to x and z, i.e., to the left of the horse) and double integrated to vertical displacement. Displacement data were segmented into individual strides (14) and differences between minima, maxima, and upward amplitudes extracted from each stride for poll, withers, and sacrum sensors (15). Hip hike difference (difference between upward movement amplitude of left tuber coxae during right-hind stance and of right tuber coxae during left-hind stance) and range of motion difference (difference between range of motion of left tuber coxae and right tuber coxae) were also calculated. This resulted in 11 asymmetry values: three for vertical head displacement (HDmin, HDmax, and HDup), three for withers displacement (WDmin, WDmax, and WDup), three for pelvic displacement (PDmin, PDmax, and PDup), and two for differential tuber coxae movement [hip hike difference (HHD), range of motion difference (RD)]. Median values for each of the 11 movement asymmetry parameters across all strides were tabulated for each assessment condition together with stride time [an output parameter of the stride segmentation process (14)], surface (hard, soft) and speed (slow, fast) category, as well as movement direction (straight, left, and right).
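As an illustration of how such stride-level asymmetry indices are typically formed (a simplified sketch consistent with the cited protocols, not the authors' exact code), MinDiff/MaxDiff-type values can be computed per stride from the two halves of the vertical displacement signal.

    import numpy as np

    def stride_asymmetry(disp):
        """disp: vertical displacement samples of one stride (two steps).
        Returns (min_diff, max_diff, up_diff) in the units of disp."""
        disp = np.asarray(disp, float)
        half = len(disp) // 2
        s1, s2 = disp[:half], disp[half:]   # the two halves of the stride
        min_diff = s1.min() - s2.min()      # difference between the two minima
        max_diff = s1.max() - s2.max()      # difference between the two maxima
        up_diff = (s1.max() - s1.min()) - (s2.max() - s2.min())  # upward amplitudes
        return min_diff, max_diff, up_diff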
Data Normalization
In order to optimize the use of the data of N = 27 horses, a data normalization procedure was implemented for each of the 11 movement asymmetry parameters. First, this procedure aimed at expressing movement asymmetries in relation to "preexisting" movement asymmetries (positive values: same direction as the "preexisting" asymmetry; negative values: opposite direction to the "preexisting" asymmetry). Second, instead of labeling movement direction as "left" or "right" rein, movement direction was expressed as "inside" or "outside" rein, again in relation to the "preexisting" movement asymmetries.
Normalization with respect to preexisting straight-line asymmetries: The implemented normalization necessitates the identification of "preexisting movement asymmetry." It was decided to base this decision for each movement asymmetry parameter on the respective value obtained during straightline assessment. If for a given horse the preexisting asymmetry value was found to be negative, ALL values for this asymmetry parameter were inverted for this horse. For example, for a horse with a negative value for HDmin obtained during the straight line assessment, all HDmin values were inverted. For horses with more than one straight-line measurement (obtained on different surfaces and at different speeds), the average value across all straight-line measurements was used for the categorization of the "preexisting" asymmetry. In essence, this means that the more positive a value gets, the more exacerbated the preexisting asymmetry would be. Negative values, on the other hand, would indicate that the horse has "switched limbs" and is in that particular instance showing a movement asymmetry that is opposite to the preexisting asymmetry. This normalization was implemented for each movement parameter independently, i.e., for a horse with a positive HDmin and a negative HDmax during straight-line assessment, HDmax values would be inverted, but not HDmin values.
Normalization of movement direction: The direction label of "inside" was attributed to the left rein for horses with "left-sided" preexisting asymmetry and the label "outside" to the right rein (vice versa for horses with "right-sided" preexisting movement asymmetry). The preexisting asymmetries were categorized as left or right asymmetrical based on published associations between force and movement asymmetry (16,17) in relation to the sign of each asymmetry parameter.
Combining Inside and Outside Rein Data
Finally, in order to address the research questions concerning individual-rein vs. average-rein movement asymmetry, mean asymmetry values were calculated across left and right rein for each normalized outcome parameter for each exercise condition. For each horse, only exercise conditions for which both left and right rein data had been collected were entered into the final data set. In that case, normalized individual and average-rein movement symmetry measures were then tabulated together with surface (hard/soft) and speed (slow/fast) category as well as movement direction (straight/inside/outside/average-rein).
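A condensed sketch of the normalization and averaging logic described above follows; the variable names are hypothetical, and for simplicity it assumes that a positive baseline value denotes a left-sided preexisting asymmetry (the study's actual side attribution relies on published sign conventions per parameter).

    import numpy as np

    def normalize_and_average(straight_vals, left_val, right_val):
        """straight_vals: straight-line values of one asymmetry parameter for a horse.
        Returns (inside, outside, average_rein) normalized to the preexisting asymmetry."""
        baseline = np.mean(straight_vals)
        sign = 1.0 if baseline >= 0 else -1.0      # invert all values if baseline negative
        left, right = sign * left_val, sign * right_val
        # Assumption: left rein is "inside" for a left-sided preexisting asymmetry.
        inside, outside = (left, right) if baseline >= 0 else (right, left)
        return inside, outside, (inside + outside) / 2.0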
Statistical Testing
All statistical testing was implemented in SPSS (version 26, SPSS Inc.), and the level of significance was set at p < 0.05 throughout. Note: Instead of applying the Bonferroni correction to the significance level, alpha, this study reports the Bonferroni-adjusted p-values (p-values based on Fisher's least significant difference multiplied by the number of comparisons done). This allows assessment of significance with reference to the traditional alpha of 5%, without increasing type II errors.
Preexisting Movement Asymmetries
Basic descriptive statistics (mean, standard deviation, minimum, and maximum) are being provided to characterize the preexisting movement asymmetries observed in the study sample of N = 27 horses. In addition, the numbers of horses categorized with left-or right-sided "preexisting" asymmetries are given for each asymmetry parameter.
In order to illustrate the consistency of "preexisting" asymmetry across different speed and surface categories on the straight-the basis of the implemented data normalization procedure-intra-horse variation across straight-line surface + speed combinations was assessed. First, an average value was calculated for each horse and each asymmetry parameter across all available straight-line condition mean values. If at least two straight-line conditions had been measured for a horse, the differences between this horse's straight-line mean value and each individual straight-line value were calculated and the standard deviation (SD) of these differences calculated as an indicator of intra-horse variation (intra-horse SD) for each asymmetry parameter.
Influence of Speed
No direct speed measurement was obtained during data collection. As a consequence, it was investigated whether stride time could be used as a reliable proxy for speed. Since an increase in speed has been shown previously as leading to a decrease in stride time within a particular gait (18), it was hypothesized that differences in stride time would be measurable between the data that had been subjectively categorized as "slow" and "fast" during data collection. A mixed linear model was implemented with horse number as random factor, surface category, movement direction, and speed category as fixed factors, and stride time as outcome parameter. A Bonferroni correction for multiple comparisons was implemented for pairwise comparisons between movement directions and estimated marginal means were investigated to study which conditions showed reduced/increased stride times.
Associations and Differences Between Straight-Line and Average Rein Movement Asymmetry
Scatter plots of straight-line asymmetry (x-axis) vs. average-rein asymmetry (y-axis) values were created, linear trend lines fitted, and R 2 values for the trend line calculated. Slope values close to a value of 1 indicate that average-rein asymmetry values are similar to straight-line asymmetry, with values smaller than 1 indicating reduced average-rein asymmetry during lunging and values exceeding 1 indicating increased average-rein asymmetry on the lunge compared with the straight line.
In addition, to illustrate any differences between inside rein, outside rein, and average-rein data, scatter plots and trend lines were added to the same plots with straight-line asymmetry on the x-axis and matching individual-rein asymmetry on the y-axis.
Bland and Altman style limits of agreement (19) were calculated between straight-line and matching average-rein asymmetry values. Differences were calculated between matching assessment conditions (e.g., soft, slow, and straight line compared with soft, slow, average rein) for the normalized asymmetry parameters. Mean and SD of these differences were calculated across all matching conditions for which data were available to express limits of agreement.
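For readers unfamiliar with the procedure, a minimal sketch of a limits-of-agreement computation on paired condition values is given below; the paper reports the bias and SD of the differences, and the conventional ±1.96 SD limits follow from these.

    import numpy as np

    def limits_of_agreement(a, b):
        """a, b: paired asymmetry values (e.g., straight-line vs. average-rein).
        Returns (bias, lower, upper) as mean difference +/- 1.96 SD."""
        d = np.asarray(a, float) - np.asarray(b, float)
        bias, sd = d.mean(), d.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd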
Mixed linear models with normalized asymmetry values, with horse as random factor, surface, speed, and direction (straight line and average rein) as fixed factors were implemented for each of the 11 asymmetry parameters. Estimated marginal means (hard, soft; fast, slow; straight-line, average-rein) were calculated to illustrate the size and direction of any significant effects.
Effect of Surface and Speed
A further 22 mixed linear models with normalized asymmetry parameters, with horse as random factor, surface, and speed category as fixed factors were implemented: two models per asymmetry parameter; one based on the data gathered from the inside rein exercise, one based on the data gathered from the outside rein exercise. Estimated marginal means (hard, soft; fast, slow) were calculated to illustrate the size and direction of any significant effects.
Histograms of model residuals were inspected visually for each model, and all residuals were considered to follow a normal distribution.
"Preexisting" Straight-Line Movement Asymmetries
Asymmetry values of head, withers, sacrum, and differential tuber coxae displacement derived from N = 97 straight-line and in-hand gait assessments in N = 27 horses are presented in Table 1. Mean and standard deviation as well as minimum and maximum values of the original asymmetry values are presented together with the number of horses presenting with left-sided (#L) or right-sided values for each parameter. Also given is a value illustrating intra-horse variation (Davg) indicating the interval of asymmetry values (i.e., ±Davg) representing 68% of intra-horse straight-line asymmetry values across the assessed surfaces and speeds in the study horses.
Influence of Speed
The implemented mixed model with stride time as outcome parameter resulted in p-values <0.001 for speed category as well as for surface type and movement direction. The grand mean for stride time was found to be 772 ms, estimated marginal means were 753 ms for fast speed, and 791 ms for slow speed, 765 ms for hard surface and 779 ms for soft surface, 747 ms for straight line trot, 786 ms for left rein, and 783 ms for right rein. Pairwise Bonferroni-corrected comparisons identified significant differences between straight line and left rein (p < 0.001) and between straight line and right rein (p < 0.001) but not between left rein and right rein (p = 1.0).
The identified significant influence of speed category on stride time meant that for further statistical modeling, speed category (fast, slow) was used.
Associations and Differences Between Straight-Line and Average Rein Movement Asymmetry
Slope values of linear trend lines fitted to scatter plots of asymmetry values for pairs of straight-line and average-rein asymmetry parameters of matching exercise conditions, i.e., straight-line, hard surface, slow speed and average-rein, hard surface, slow speed, showed values between 0.390 and 0.900 (see Table 2). All slope values were found to be <1, indicative of higher amounts of movement asymmetry on the straight line compared with the matching average-rein exercise. Values closest to one were found for asymmetry measures derived from vertical displacement of the withers (0.659-0.900), followed by head movement (0.486-0.809) and pelvic movement (0.390-0.631). The smallest slope values were found for movement parameters derived from tuber coxae movement and for pelvic upward movement asymmetry (values ≤ 0.5). R2 values range from 0.123 for PDup (slope 0.39) to 0.604 for WDup (slope 0.9), indicating a fair amount of variation between horses and exercise conditions. Slope values (Table 2 and Figures 1, 2) for inside rein asymmetry data (magenta) ranged from 0.267 (HDmax) to 1.03 (HDmin) and for outside rein asymmetry (cyan) from 0.139 (PDup) to 0.974 (WDup). For six variables (HDmin, HDup, PDmin, PDup, HHD, and RD), inside slope values were higher than outside slope values, and the opposite was found for the remaining five variables (HDmax, PDmax, WDmin, WDmax, and WDup). The only asymmetry parameters for which outside rein values are consistently higher than inside rein values (cyan line sitting on top of magenta line, Figures 1, 2) are related to the displacement maxima (HDmax, WDmax, and PDmax). For HDmin and HDup, the lines of best fit for inside and outside rein data are crossing, indicating that the relationship between inside rein and outside rein asymmetry can be different dependent on the straight-line value (crossing point for HDmin at around 18-mm straight-line asymmetry, for HDup at >50-mm straight-line asymmetry).
Linear mixed models investigating straight-line and average-rein measurements ( Table 4) showed that two movement parameters were significantly affected by surface with one (HDmin, p = 0.046) showing marginally increased asymmetry on the soft surface and the other (HHD, p = 0.021) the opposite effect. Differences between estimated marginal means were small (below 2 mm) for both parameters. Two movement parameters were significantly affected by speed with both showing reduced asymmetry at the slower speed (WDmax, p = 0.008; WDup, p = 0.003). Again, differences between estimated marginal means were below 2 mm.
Six movement parameters (HDmax, PDmin, PDmax, PDup, HHD, and RD) were found to be significantly different between the straight-line and the average-rein condition (p < 0.001 to p = 0.031). Five of the six affected parameters are related to pelvic movement, either to vertical movement of the sacrum or to the vertical movement difference between left and right tuber coxae. All six parameters showed increased asymmetry on the straight-line compared with the average-rein values. Differences between estimated marginal means were small and ranged from just below 1 mm (PDmax) to 4.1 mm (HDmax) with tuber coxae derived differences (HHD, RD) in the order of 2.6-2.7 mm and sacrum-derived differences ranging from 0.95 mm for PDmax to 2.6 mm for PDup. None of the asymmetry parameters derived from vertical withers movement were significantly different between straight-line and average-rein conditions.

Table 4 note: Provided are values for fixed factors of surface (hard, soft), speed (fast, slow), and direction (straight, circle). All estimated marginal mean values (EMM) and difference values (|diff|) are in mm. Significant p-values, the corresponding EMM values and their differences are highlighted in bold red.
Effect of Surface
Linear mixed models for the individual rein data (Table 5) identified significant effects of surface for three asymmetry parameters (PDmax, WDmax, and HHD), all on the inside rein, with increased levels of asymmetry measured on the hard surface.
Effect of Speed
Nine of the individual rein asymmetry data models ( Table 5) showed significant effects of speed. For all nine affected parameters, increased amounts of asymmetry were measured at the faster speed. Six movement asymmetry parameters were affected by speed with the horse trotting with the limb attributed to the baseline asymmetry on the inside of the circle (PDmin, PDup, WDmin, WDup, HHD, and RD), and all six showed movement asymmetries in the same direction as the "baseline" asymmetry measured on the straight-line. Three movement asymmetry parameters were affected by speed on the outside rein (PDmin, WDmin, and WDmax) with the two parameters related to weight bearing asymmetry (WDmin and PDmin) indicating movement asymmetries in the opposite direction of the "baseline" straight-line measurement.
All nine asymmetry parameters affected by speed were either derived from pelvic or withers movement, and none of the head asymmetry parameters were affected by speed.
DISCUSSION
In this study, we have investigated upper body movement asymmetry parameters of horses trotting in hand on the straight as well as on the lunge on both reins. A particular area of interest was the combination of movement symmetry measures obtained on the individual reins into a common parameter, here termed "average-rein" measurement. Rhodin et al. (20) showed that average-rein measurements can be useful to reduce the circle-dependent asymmetries created by increased body lean angle for some symmetry variables but the effect of speed and surface was not evaluated in that study. The interest in using "average-rein" measurement was further fueled by the potential to reduce the influence of speed-with increasing speed on the circle associated with increasing body lean angle and increasing movement asymmetry (5)-when left-rein and right-rein speed effects may cancel out. It was also hoped that differences in asymmetry measures between surfaces (21) would be preserved by this operation, and hence, presenting average-rein data could be useful in reducing the complexity of interpreting gait analysis data in clinically lame horses: being faced with too much information may lead to suboptimal decisions (22).
The study population of horses consisted of a subset of horses from a larger scale, placebo-controlled, crossover investigation into the effects of meloxicam on movement asymmetry (10). As such, all horses had been identified previously as showing movement asymmetry values outside threshold values commonly employed during clinical lameness investigations. However, not all horses showed the same type of asymmetry with reference to the subset of asymmetry variables outside threshold values. While mean absolute values of head and pelvic asymmetry of the 27 horses used here were (in some cases just) outside threshold values, there is a large spread of asymmetry values across horses (Table 1), highlighting the inhomogeneous nature of the movement asymmetries shown. The overarching study had not identified a significant effect of meloxicam (10). Hence, we cannot easily draw conclusions about whether these horses showed movement asymmetry in reaction to musculoskeletal pain, i.e., we cannot be sure whether the movement asymmetries were simply expressions of biological variation or motor laterality, or were related to a non-response to the specific treatment administered here. The fact is that the asymmetry values vary greatly between horses, showing values of up to 45 mm for head movement and up to 20 mm for pelvic movement. Each of the 27 horses showed at least one asymmetry parameter exceeding 8 mm for head movement or exceeding 5 mm for pelvic movement (original threshold values of 6 mm for head movement and 3 mm for pelvic movement (11), adjusted using published correction equations (23); Supplementary Table 1). Thirteen (48%) of the horses also exceeded at least one of the higher threshold values for head asymmetry (HDmin 14 mm; HDmax 16 mm) or pelvic movement asymmetry (PDmin 11 mm; PDmax 9 mm) previously shown to be representative of intervals containing 90% of daily repeat gait assessments in Thoroughbred racehorses in training (24), obtained with a gait analysis system identical to the one in the present study. It hence appears unlikely that these values are a result of daily variation. The baseline mean absolute values of pelvic movement asymmetry in the present study (Table 1) are also higher than the values in 37 clinically hind limb lame horses that showed a significant reduction in movement asymmetry after diagnostic analgesia (25), further supporting the assumption that these asymmetries might not be a result of daily variation.
As reported previously within different gaits as a function of increasing speed (18), stride time was found to decrease between the subjectively defined speed categories (slow, fast) confirming that this subjective classification had been successful. Stride time was found to be increased on the soft surface compared with the hard surface, which is in contrast to a previous study reporting no significant difference between asphalt and a sandfiber-based surface (24). Our findings with regard to stride time are, however, in agreement with another study reporting reduced stride times on the straight compared with on the lunge (21). Importantly for our investigation into combining asymmetry measures between reins, no significant difference in stride time was identified between the two reins. This indicates that the speed-related increase in asymmetry on the circle [related to increasing body lean angle (5)] should cancel out between reins. Consequently, only 2 of the 11 average-rein asymmetry parameters were found to be affected by speed. In contrast, six asymmetry parameters were affected by speed on the inside rein and three on the outside rein. This supports our hypothesis that average-rein measurements are less affected by changes in speed, of course, with the caveat that similar speeds are used on the two reins and also with keeping in mind that more similar body lean angle between reins has been observed after successful diagnostic analgesia (9). Future studies should consider a direct speed measurement, for example, via GPS or calculating speed from the number of circles trotted (determined from inertial sensor heading data) and the circle radius so as to avoid using subjectively defined speed categories.
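As a concrete version of the suggested circle-count alternative (a sketch, not a procedure used in this study): with n complete circles of radius r trotted in time t, the average speed is v = 2πrn/t; for example, five circles of 7.5 m radius completed in a hypothetical 75 s would give v = 2π × 7.5 × 5/75 ≈ 3.1 m/s.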
One contributing factor to the higher number of asymmetry parameters influenced by speed on the inside rein (six) compared with the outside rein (three) may be the increased angulation of the limb on the inside of a non-banked circle (4). This angulation may further exacerbate stresses related to the production of combined vertical and centripetal force on the circle (25), and this effect may be more obvious on a hard surface where higher transversal forces and moments are produced (24) and where the hoof cannot rotate "into the surface." Interestingly, none of the head asymmetry parameters were found to be affected by speed, neither on the inside rein nor on the outside rein. This may be related to the generally more inconsistent direction of head movement asymmetry reported previously between horses and reins (20) and/or to the variation in baseline asymmetry values in the study population with head movement asymmetry varying between −46 and +38 mm. Further studies with horses undergoing diagnostic analgesia during clinical lameness investigations may be warranted to enhance our understanding about whether this may be related to changes in asymmetry as a function of speed and surface for specific orthopedic deficits.
In this context, it also seems noteworthy that two of the parameters affected by speed for the outside rein models (PDmin, WDmin) showed negative movement asymmetry values, i.e., an asymmetry pattern that is opposite to the one observed during straight-line trot. This is also apparent from the lines of best fit in Figures 1, 2. For these two parameters, the line of best fit for the outside rein (cyan line) is either completely below the x-axis for the range shown here (PDmin) or is crossing the x-axis into the positive at a value of ∼16 mm straightline asymmetry (WDmin). This indicates that the circle effect, which makes these horses appear to be increasingly inside hind limb asymmetrical with increasing speed and decreasing circle radius (5), is outweighing the "baseline" movement asymmetry measured on the straight line, which should make these horses appear outside hind limb asymmetrical. So, for example, a horse with a left hind PDmin type asymmetry on the straight line would typically show an LH asymmetry on the left rein and an RH asymmetry on the right rein (even for more exacerbated straight-line asymmetries of up to 15 mm, Figure 2, PDmin).
Only three individual-rein movement asymmetry parameters were found to be significantly affected by the surface the horses were lunged on; all three showed an effect for the inside rein movement asymmetry. This might indicate that the increased angulation of the inside limb toward the ground surface (4) is involved in these surface differences, both through the increased transversal forces identified on a hard asphalt surface (24) and, in particular, through the ability of the inside hoof to sink into the ground asymmetrically on a softer surface and, hence, preserve a better alignment of the distal limb [which shows increased angulation (4)]. In all three affected parameters (PDmax, WDmax, and HHD), increased levels of asymmetry were found on the hard surface.
Two average-rein asymmetry parameters (HDmin and HHD) were found to show significant differences as a function of surface. Only one parameter (HHD) was affected by surface for both the inside rein condition and the average-rein condition. For this parameter, similar to the individual rein condition, a higher asymmetry value was found for the hard surface; the difference between hard and soft surfaces was, however, smaller for the average-rein measurement. As a result, we can only partly support our second hypothesis that differences between surfaces are consistently apparent for individual rein and average-rein measurements. The lack of significant surface-related differences on the outside rein suggests that, in clinically lame horses, it is more important to lunge horses on two different surfaces with the "suspected lame" limb on the inside of the circle rather than on the outside. Again, further studies with clinically lame horses after diagnostic analgesia may identify specific conditions for which this is not the case. For example, horses with proximal suspensory desmitis have been reported to show more accentuated lameness with the lame limb on the outside of the circle on soft ground (26), and this has been proposed to be related to an increased loading rate (24).
The slopes of the linear trend lines fitted to straight-line data plotted vs. matching average-rein asymmetry values (Figure 1 and Table 2) all show values <1. This indicates that across the two reins, movement asymmetry is reduced rather than increased, a finding that goes against our third hypothesis. When interpreting this finding, it should be noted that stride time was at its lowest for the straight-line condition, which may suggest that the horses in this study, many of which showed considerable movement asymmetries on the straight, chose to trot at a lower stride rate on the circle compared with the straight line. Assuming a lower stride frequency is related to reduced speed (18), the increase in body lean angle on the circle may only be small, and hence, the hypothesized increase in movement asymmetry in response to the circular movement may only be small (5). Enhancing upper body movement symmetry measurements with speed estimates (27) may provide further insights into this complex topic. At least for the pelvic asymmetry parameters, where four of the five parameters showed higher values for the inside rein slope compared with the outside rein slope (Table 2 and Figure 2), there seems to be some supporting evidence that the increased limb angulation of the inside limb (4) may play a role here, increasingly "amplifying" the straight-line asymmetry with increasing baseline values, in particular with a slope value of just above 1 for PDmin (one of only two slope values >1).
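As an illustration of the trend-line analysis described here, the following sketch fits a least-squares line to paired straight-line and average-rein asymmetry values; the numbers are fabricated examples, not the study's data. A fitted slope below 1 would indicate that averaging across reins attenuates, rather than amplifies, the straight-line asymmetry.

```python
# Illustrative only: fit a trend line to paired straight-line and
# average-rein asymmetry values (mm); example numbers are made up.
import numpy as np

straight = np.array([-10.0, -4.0, 0.0, 3.0, 8.0, 15.0])   # straight-line asymmetry
avg_rein = np.array([-7.5, -2.0, 0.5, 2.0, 6.0, 10.5])    # mean of left/right rein

slope, intercept = np.polyfit(straight, avg_rein, 1)
print(f"slope = {slope:.2f}, intercept = {intercept:.2f} mm")
# slope < 1 -> movement asymmetry is reduced across the two reins
```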
The amount of spread around the trend lines fitted to straight-line vs. average-rein data, which is obvious for all asymmetry parameters in Figure 1, indicates a considerable amount of variation in average-rein asymmetry that needs to be investigated further. The administration of an oral NSAID in the complete group of horses (N = 66), of which a subset of 27 horses was investigated here, was not related to consistent changes in movement asymmetry (10). Hence, further studies may best be conducted in clinical cases with the use of diagnostic analgesia.
It is encouraging to note that even in this non-homogeneous sample, there are several significant differences between straight-line and average-rein asymmetry values. While there are significant differences for head and pelvic movement asymmetry, withers asymmetry shows little variation across straight-line and average-rein measurements: none of the asymmetry parameters of the withers were significantly different between straight-line and average-rein conditions (Table 4), and mean difference values were small (≤1 mm, Table 3, mean) with comparatively tight standard deviation values (≤5 mm, Table 3, SD). This indicates that the relationship between head and withers movement asymmetry and between pelvis and withers movement asymmetry may change consistently between the straight line and the average rein. Since withers movement has been identified as a good differentiator between "true" and "compensatory" head nod (28, 29), studying the relationship between head, withers, and pelvic movement asymmetry and their relative timing (30) in horses on the lunge may lead to further insights into the mechanics of trotting on the circle and/or improve the value of measurements of movement asymmetry during lunge exercise for differentiating between different causes of lameness. This could be undertaken very elegantly in straight-line and lunge measurements in clinically lame horses undergoing diagnostic analgesia.
CONCLUSIONS
In this study, we have investigated the effect of surface and speed on individual-rein and average-rein movement symmetry in horses trotting on the lunge.
In contrast to nine movement symmetry parameters being affected by speed when investigating data from each individual rein, only 2 (of 11) average-rein asymmetry parameters were found to be affected by speed. Presenting average-rein data may, hence, be helpful for simplifying the interpretation of lunge movement asymmetry data in clinically lame horses.
Only one movement symmetry parameter (HHD) showed similar surface-related effects for individual rein data (inside rein) and for average-rein data, with increased asymmetry on the hard surface. Consequently, when interested in surface-related differences, average-rein movement symmetry data are unlikely to provide sufficient evidence, and individual rein data should be consulted.
In contrast to our hypothesis predicting increased average-rein movement asymmetry compared with straight-line asymmetry, average-rein asymmetry values were all found to be smaller than condition-matched straight-line movement symmetry values. The consequences of this for clinical lameness exams should be further investigated, for example, by quantifying average-rein asymmetry before and after diagnostic analgesia.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by the Ethical Committee for Animal Experiments, SLU, Uppsala, Sweden (application numbers C 48/13 and C 92/15). Data collection was performed during 2013–2016. Written informed consent was obtained from the owners for the participation of their animals in this study.
AUTHOR CONTRIBUTIONS
MR and TP designed this study. EP-S, MR, and EH contributed to the data collection. TP, HG, and OO prepared the initial draft of the manuscript. All authors contributed to the data analysis, interpretation, manuscript revision, and have given their final approval.
"year": 2021,
"sha1": "d10e7a111a15be3a5554586e7638ff139acdf171",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2021.692031/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d10e7a111a15be3a5554586e7638ff139acdf171",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A prospective study on the prevalence of MASLD in people with type‐2 diabetes in the community. Cost effectiveness of screening strategies
As screening for liver disease and risk-stratification pathways are not established in patients with type-2 diabetes mellitus (T2DM), we evaluated the diagnostic performance and the cost-utility of different screening strategies for MASLD in the community. In the cost-utility analysis, ICER was £2480/QALY for TE, £2541.24/QALY for ELF and £2059.98/QALY for FIB-4.

Conclusion:
Screening for MASLD in the diabetic population in primary care is cost-effective and should become part of a holistic assessment. However, traditional screening strategies, including FIB-4 and ELF, underestimate the presence of significant liver disease in this setting.

KEYWORDS
liver fibrosis, metabolic dysfunction-associated steatotic liver disease, primary care, screening

Lay summary
Metabolic dysfunction-associated steatotic liver disease (MASLD), a common condition in which excessive fat accumulates in the liver and may result in cirrhosis and heart attacks, is a highly prevalent, yet largely underappreciated liver condition that is closely associated with metabolic disease and type 2 diabetes mellitus (T2DM). Yet, a strategy to understand who is at risk of developing this disease and suffering liver damage is lacking. In this study, we describe the prevalence of advanced liver disease in diabetics in primary care and we define an easy way to screen diabetics for MASLD. We demonstrate that, among diabetics, education level is associated with a greater risk of having liver disease. Moreover, we demonstrate that screening for fatty liver in primary care using non-invasive markers of fibrosis is cost-effective and should be offered to all diabetics in the community.
| INTRODUCTION
Metabolic dysfunction-associated steatotic liver disease (MASLD) is the most common cause of abnormal liver function tests (LFTs) worldwide, with a global estimated prevalence of 30%. 1 MASLD is also expected to become the leading cause of end-stage liver disease in the coming decades. 2 Histologically, MASLD encompasses a spectrum of disorders from steatosis with or without hepatocellular injury and/or inflammation (metabolic dysfunction-associated steatohepatitis, MASH) and a variable degree of fibrosis through to cirrhosis. 3 Fibrosis stage represents the strongest predictor of clinical outcomes, liver and non-liver-related, in these patients. 4 From a clinical perspective, the presence of type 2 diabetes mellitus (T2DM) is an independent predictor of advanced fibrosis in patients with MASLD, 5 with a greater prevalence of advanced disease in diabetic compared to non-diabetic individuals, especially at younger ages. 6 Given the high prevalence, estimated at 55.5%, 7 and severity of MASLD in the diabetic population, there is a major interest in early detection of the disease, especially in primary care, 8 where diagnosing MASLD is perceived as a clinical challenge, with specific concerns about performing risk stratification among patients. 9 Both the AASLD 10 and the EASL guidelines 11 recommend screening for MASLD in high-risk groups (i.e., patients with metabolic syndrome) following a 2-tier system. Specifically, patients should be stratified using non-invasive markers of fibrosis such as the Fibrosis-4 score (FIB-4) and/or the NAFLD fibrosis score in primary care, followed by the Enhanced Liver Fibrosis test (ELF) and/or transient elastography (TE) in a specialist setting. Such a strategy relies heavily on scores which were derived in a tertiary care setting and whose diagnostic accuracy in primary care is still unclear. 12 Furthermore, there is still a debate on whether screening might be cost-effective in this population. 8 Finally, despite the most recent diabetes management guidelines suggesting to screen diabetics for MASLD, 13 the overall awareness among diabetologists remains low. 14

In this study, we aimed to establish the prevalence of MASLD in patients with T2DM in primary care. Moreover, we tested the performance of non-invasive markers and further developed a risk-stratification pathway. We also built a Markov model simulating MASLD screening and assessed the cost-utility of different screening strategies for MASLD in the diabetic community.
| Study population
This single-centre, cross-sectional study prospectively recruited consecutive patients with T2DM in primary care and community clinics from the North-West London general practitioner (GP) network. Inclusion criteria were the ability to give informed consent, age >18 years and the presence of T2DM, as defined by medical history or a recent 2-h post-challenge plasma glucose ≥11.1 mmol/L. Patients were excluded if they had known liver disease.
| Screening for liver disease and MASLD
All the patients were screened for liver disease and the presence of MASLD with blood tests (full liver screen), imaging ultrasound (US) and TE, with liver stiffness measurement (LSM) and controlled attenuation parameter (CAP) score. Moderate-to-high cardiovascular risk was defined as a QRISK2 score ≥20%. Further details on screening procedures are given in Supplementary material.
| Screening strategies and identification rates
In this study, five screening strategies were compared against the standard of care: (1) US plus LFTs, (2) FIB-4, (3) NAFLD fibrosis score, (4) ELF and (5) TE. Standard of care was derived from previously published economic evaluations of MASLD screening, where this entailed LFTs or no screening 15,16 (Supplementary Table S1). As part of the standard of care, abnormal LFTs were assumed to prompt referral to the hospital, with a 65% specificity and 35% sensitivity for liver fibrosis. 17

In the first tier of each strategy, patients were divided into two groups: no disease/MASLD without significant fibrosis versus MASLD with significant and advanced fibrosis. No disease and MASLD without significant fibrosis were considered the same group for this analysis, as the management would be similar and would not trigger a referral to secondary care, in contrast to MASLD with significant and advanced fibrosis, which triggers a referral to specialist care. 11 In strategy 1 (US plus LFTs), MASLD with significant fibrosis was defined as evidence of steatosis and features of chronic liver disease on ultrasound, plus elevated LFTs. In strategy 2, significant fibrosis was defined as FIB-4 >1.3 and in strategy 3, as NAFLD fibrosis score >−1.45. In strategy 4 (ELF), significant fibrosis was defined as ELF ≥9.8 and in strategy 5 (TE), as LSM ≥8.1 kPa. 11
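As an illustration of how such a first-tier triage could be operationalised, the sketch below (not the study's code) applies the cut-offs quoted above. The FIB-4 formula shown is the standard published one; all patient values and function names are hypothetical.

```python
# A minimal sketch of the first-tier risk stratification described above.
# Cut-offs are those quoted in the text; example values are hypothetical.
import math

def fib4(age_years: float, ast_iu_l: float, alt_iu_l: float,
         platelets_10e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * math.sqrt(alt_iu_l))

def first_tier_referral(strategy: str, **m) -> bool:
    """True if the patient is flagged as 'MASLD with significant fibrosis'
    and referred to secondary care; False otherwise."""
    if strategy == "fib4":
        return fib4(m["age"], m["ast"], m["alt"], m["platelets"]) > 1.3
    if strategy == "nfs":
        return m["nfs_score"] > -1.45      # NAFLD fibrosis score, precomputed
    if strategy == "elf":
        return m["elf"] >= 9.8
    if strategy == "te":
        return m["lsm_kpa"] >= 8.1         # transient elastography LSM
    if strategy == "us_lfts":
        return m["us_steatosis"] and m["lfts_abnormal"]
    raise ValueError(f"unknown strategy: {strategy}")

# Example: a 62-year-old with AST 41 IU/L, ALT 38 IU/L, platelets 210 x 10^9/L
print(round(fib4(62, 41, 38, 210), 2))   # ~1.96 -> above 1.3, refer
print(first_tier_referral("fib4", age=62, ast=41, alt=38, platelets=210))
```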
| Decision-analytic model
We developed a decision tree to characterise the risk stratification and diagnostic performance of each of the primary care screening strategies evaluated. We adapted previously published Markov models 15,17 to characterise the subsequent health states and disease pathways of patients based on their initial primary care screening risk stratification (Figure 1). Patients with advanced stages of liver disease could progress to health states that reflect end-stage liver disease, including decompensated cirrhosis (DC), hepatocellular carcinoma (HCC), liver transplant (LT) and death. 15 With a diagnosis present (a status of significant liver disease and/or compensated cirrhosis), there is a probability that the management of the patient is modified to reduce the risk of progression to CC, decompensation or death. In this case, progression rates of the SLD and CC groups were assumed to be slower compared to those whose diagnosis was missed at screening (false negatives). 15 Details on model input parameters are given in Supplementary material.
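The following sketch illustrates the general mechanics of such a Markov cohort model using the health states named in Figure 1. All transition probabilities and the initial distribution are illustrative placeholders, not the study's calibrated inputs (which are in the Supplementary material); in the full model, diagnosed (true positive) patients would progress under a second, slower matrix, omitted here for brevity.

```python
# A minimal Markov cohort sketch of the model structure described above.
# All numbers are made-up placeholders for illustration only.
import numpy as np

states = ["MLD", "SLD", "CC", "DC", "HCC", "LT", "Death"]

# Annual transition matrix P[i][j] = P(move from state i to state j);
# each row sums to 1. Hypothetical values.
P = np.array([
    # MLD   SLD    CC     DC     HCC    LT     Death
    [0.96, 0.02,  0.00,  0.00,  0.00,  0.00,  0.02],  # MLD
    [0.00, 0.93,  0.04,  0.00,  0.01,  0.00,  0.02],  # SLD
    [0.00, 0.00,  0.90,  0.05,  0.02,  0.00,  0.03],  # CC
    [0.00, 0.00,  0.00,  0.75,  0.03,  0.04,  0.18],  # DC
    [0.00, 0.00,  0.00,  0.00,  0.60,  0.05,  0.35],  # HCC
    [0.00, 0.00,  0.00,  0.00,  0.00,  0.90,  0.10],  # LT
    [0.00, 0.00,  0.00,  0.00,  0.00,  0.00,  1.00],  # Death (absorbing)
])

cohort = np.array([0.84, 0.13, 0.03, 0, 0, 0, 0])  # illustrative start

for year in range(40):          # lifetime horizon used in the base case
    cohort = cohort @ P          # propagate the cohort one annual cycle

print(dict(zip(states, cohort.round(3))))
```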
| Model outcomes
The cost-utility analysis in the base-case was conducted over a lifetime horizon and generated the cost per quality-adjusted life year (QALY) gained. A discount rate of 3.5% per year was applied to outcomes and costs, as recommended by the NICE guidelines. 18 We calculated the average cost-effectiveness and the incremental cost-effectiveness ratio (ICER) compared to the standard of care. 15,17 Life expectancy, lifetime costs and the number of correct diagnoses were also estimated. According to NICE guidelines, a cost-effectiveness threshold (CET) of £20 000/QALY gained was set for the base-case analysis, as per previous studies. 19 Key input parameters with the highest level of uncertainty (i.e., transition probabilities, utility values, costs and screening ratios) were varied to determine the impact of their variability on cost-effectiveness results. Sensitivity analysis ranges and probabilistic distributions were derived from previous literature and are reported in detail in Supplementary Tables S4-S8.
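A brief sketch of the core calculation follows: per-patient costs and QALYs accrued each year are discounted at 3.5%, and the ICER is the ratio of incremental discounted cost to incremental discounted QALYs versus the standard of care. The yearly streams below are made-up placeholders, not the study's inputs or results.

```python
# Illustrative ICER calculation with 3.5% annual discounting.
def discounted_total(values_per_year, rate=0.035):
    return sum(v / (1 + rate) ** t for t, v in enumerate(values_per_year))

years = 40  # lifetime horizon
# Hypothetical per-patient annual costs (GBP) and QALYs
cost_screen = [150.0] + [100.0] * (years - 1)   # screening strategy
cost_soc    = [40.0] + [95.0] * (years - 1)     # standard of care
qaly_screen = [0.80] * years
qaly_soc    = [0.78] * years

d_cost = discounted_total(cost_screen) - discounted_total(cost_soc)
d_qaly = discounted_total(qaly_screen) - discounted_total(qaly_soc)
icer = d_cost / d_qaly
print(f"ICER = £{icer:,.0f} per QALY gained")  # compare against the £20 000 CET
```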
| Study population
Between April 2019 and January 2021, a total of 300 consecutive patients with T2DM were enrolled from the North-West London GP network. Overall, 287 patients underwent the whole screening procedure, while 13 did not complete the screening and were excluded (Supplementary Figure S1). The study population was diverse in terms of ethnic background, severity of T2DM and anti-diabetic treatments (Tables 1 and 2). The success rate for performing TE in this population was 99% (286/287).
Of note, 42% of the patients with 8.1 kPa ≤ LSM ≤ 12.1 kPa and 38% of patients with LSM ≥12.1 kPa had normal LFTs.
When the CAP score was used to define steatosis, the overall prevalence of MASLD was 67% (195/287); the prevalence of significant liver disease (LSM ≥8.1 kPa) was 16% (48/287), while the prevalence of advanced fibrosis (LSM ≥12.1 kPa) was 11% (33/287) in the whole population. Of note, 3% (9/287) had an elevated CAP score but no evidence of steatosis on US. However, only those with a positive ultrasound were considered as having steatosis.
| Prevalence of cirrhosis
The prevalence of newly diagnosed cirrhosis secondary to MASLD was 3% (8/287; 6 with a clinical diagnosis and 2 based on histology) in the whole diabetic population and 5% (8/184) in the MASLD subgroup (Supplementary Figure S1). The number needed to treat/screen (NNT) in this population was 4.56 (3.38-7). Due to COVID-19-related restrictions, only 11 patients among those with elevated LSM underwent a liver biopsy (as per standard of care): all the biopsied cases had liver fibrosis stage ≥2 according to the CRN scoring system.
| Advanced liver disease is more prevalent in the deprived population
In terms of socio-economic status, those with MASLD and significant fibrosis lived in more deprived neighbourhoods according to their median education rank (18 789 vs. 23 148, p = .03) (Supplementary Table S9). Similarly, those with MASLD and advanced fibrosis lived in more deprived neighbourhoods according to their median education rank (18 793 vs. 23 162, p = .05). Conversely, there was no difference in terms of the other deprivation scores: income, employment, health deprivation and disability, barriers to housing and services, and crime.
TABLE 1 Characteristics of the study population and differences between patients with and without MASLD.
There was no difference in terms of the prevalence of MASLD, significant or advanced liver fibrosis.
When compared to age-matched men, menopausal women showed smaller waist (103 vs. 108 cm, p = .03) and hip circumference (111 vs. 113 cm, p = .018). Menopausal women also showed significantly lower ALT (26 vs. 33 IU/L, p = .003) and CAP score (293 vs. 313 dB/m, p = .013) compared to age-matched men. There was no difference in terms of the prevalence of MASLD, significant or advanced fibrosis.
| Cost-effectiveness analysis
The cost-effectiveness analysis was based on the performance characteristics (positive predictive values and negative predictive values) for the identification of patients with significant and advanced fibrosis from the study population (Supplementary Figures S2-S6).
Overall, screening for MASLD by any of the strategies analysed improved the rate of diagnosis by 8%-15%. All screening strategies were associated with QALY gains, ranging from 121 to 149 QALYs, with TE (148.73 QALYs) resulting in the most substantial gains, followed by FIB-4 (134.07), ELF (131.68) and NAFLD fibrosis score (121.25). The ICER of TE compared to the standard of care was £2480 per QALY gained (Table 4).
The ICER was most sensitive to variations in progression rates (the effect of early diagnosis on disease progression), screening test sensitivity and specificity, and model time horizon. Nevertheless, when transition probabilities, utilities, screening treatment effect and cost inputs were modified, we found a >99% probability of MASLD screening tests being cost-effective compared to standard of care in all evaluated scenarios (Figure 2, Supplementary Tables S4-S8). When the sensitivity and specificity of each screening test were varied in a range between 20% and 100%, the ICER remained below £3260, and thus cost-effective, in all scenarios (Supplementary Tables S4-S8). Although all screening strategies were found to be cost-effective compared to standard of care in the base-case, when the time horizon was decreased from 40 years (lifetime) to 5 years, only FIB-4 remained cost-effective within the NICE cost-effectiveness threshold criteria.
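To illustrate how a probabilistic sensitivity analysis feeds a cost-effectiveness acceptability curve of the kind shown in Figure 2, the sketch below samples incremental costs and QALYs from assumed distributions and computes, at each willingness-to-pay threshold, the probability that screening has positive net monetary benefit. All distributions are made-up placeholders, not the study's inputs.

```python
# Illustrative probabilistic sensitivity analysis / acceptability curve.
import numpy as np

rng = np.random.default_rng(0)
n_draws = 10_000

# Hypothetical incremental cost (GBP) and incremental QALYs per patient
d_cost = rng.normal(loc=1100.0, scale=300.0, size=n_draws)
d_qaly = rng.normal(loc=0.45, scale=0.15, size=n_draws)

thresholds = np.arange(0, 20_001, 1_000)  # £/QALY willingness to pay
for lam in thresholds[::5].tolist():
    # Screening is cost-effective when net monetary benefit exceeds zero
    p_ce = np.mean(lam * d_qaly - d_cost > 0)
    print(f"threshold £{lam:>6,}: P(cost-effective) = {p_ce:.2f}")
```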
| DISCUSSION
Non-alcoholic fatty liver disease has now become the leading cause of chronic liver disease in Western countries and the fastest-growing indication for liver transplantation in the United States. 20 Defining and implementing models of care has been identified as an area of priority for tackling MASLD worldwide. 21 Specifically, there is a need for clearly defined, pragmatic referral management pathways, which are based on clinical context and shared with local primary care providers. Being a high-risk group for advanced liver disease, 6,22 patients with T2DM represent an ideal target for MASLD screening in primary care.

TABLE 3 Predictive factors for the presence of significant liver disease in the whole diabetic population.
In this study, we studied a cohort of patients with diabetes who were screened for MASLD and other liver diseases in primary care, without any a priori selection.This cohort includes patients with a wide range of antidiabetic treatments, comorbidities, ranges of glycaemic control and length of disease.Furthermore, conducting this study in North-West London, provided us with a very diverse in terms of ethnic and social background, which is a bonus compared to other studies in the field.Overall, the prevalence of MASLD based on the US was 64%, while the prevalence of significant liver disease was 17%, advanced liver disease 11% and cirrhosis 3% in the whole cohort.In a recently published work, in diabetic patients over 50 years old in the community and endocrinology clinics, their results are similar to our cohort with the prevalence of MASLD, advanced fibrosis and cirrhosis at 65%, 14% and 6%. 23 our cohort, visceral obesity, education attainment and AST were the main clinical predictors for the presence of significant and advanced fibrosis in primary care.Overall, despite education level being a well-known risk factor for other chronic liver diseases, 24 this is the first work demonstrating clearly that education level is an important determinant of liver disease in the general diabetic TA B L E 4 Base-case cost-effectiveness analysis of MASLD screening strategies versus standard of care (baseline screening).According to the latest published EASL guidelines, patients with T2DM should be screened for MASLD using a two-tier system, that is, FIB-4 and/or ELF in primary care, followed by TE in a specialist setting. 11Nevertheless, standard of care for diagnosing MASLD among GPs still relies on ultrasound and LFTs, possibly due to limited awareness on the disease and/or screening policies. 25,26In this cohort, despite AST being a predictive factor against the liver disease, up to 42% of the patients with 8.1 kPa ≤LSM≤12.1 kPa and 38% of the patients with LSM ≥12.1 kPa had normal LFTs at screening.
Risk stratification should not rely on LFTs, as they both under- and over-estimate the severity of liver disease in MASLD. 26 Moreover, according to the results from this study, applying FIB-4 with a cut-off of 1.3 in this population would miss up to 38% of the patients with significant liver disease, and these would mainly be younger patients with normal LFTs. These results are in line with recently published data and highlight the limitations of the use of FIB-4 in primary care. 27 Similarly, when applying a cut-off of ELF ≥9.8, up to 59% of those with significant liver disease would be missed at screening. Despite recent literature highlighting gender-related differences in MASLD phenotypes, 28 there was no difference in false negative rates between men and women in this population.
Of note, recent evidence has raised the concern that currently used non-invasive markers may underestimate liver disease in diabetics and that more evidence in primary care is needed. 10,29 Nevertheless, it is worth noting that when FIB-4 and ELF were used as standards, LSM underestimated the presence of significant fibrosis in 28% and 62% of the patients, respectively.
Though cost-effectiveness data on screening for MASLD in patients with T2DM in the community are emerging, 8 there is still a debate about the appropriate screening strategy. It is of great importance to identify patients at high risk of progressive disease, as this would lead to a reduction in progression rates to end-stage liver disease and the associated healthcare burden. Moreover, not only could lifestyle intervention delay or reverse fibrosis progression, 30,31 but pharmacotherapies will also soon be available. Furthermore, as more severe forms of SLD are also associated with the greatest additional risk of cardiovascular events, 32 screening tools which identify advanced SLD may by proxy also identify those at higher risk for acute cardiovascular events, further extending the utility of screening within this scenario.
In this study, we present a cost-effectiveness analysis of screening for MASLD based on a real-life population of patients with T2DM in primary care. MASLD screening in people with T2DM improved diagnostic outcomes and was cost-effective in all evaluated scenarios under a CET of £20 000. Overall, TE was the screening strategy associated with the greatest clinical gains (148.73 QALYs). These results are in line with published work 27 and emphasise that screening for MASLD is cost-effective compared to standard of care defined as abnormal LFTs or even the combination of US and LFTs. 15,17,33,34 Nevertheless, previous ...

FIGURE 1 Markov model for the cost-effectiveness analysis. CC, compensated cirrhosis (clinical diagnosis of cirrhosis); DC, decompensated cirrhosis; FN, false negatives (MASLD with LSM ≥8.1 kPa who were false negatives at screening); HCC, hepatocellular carcinoma; MLD, mild liver disease (no MASLD or MASLD with LSM ≤8 kPa); MASLD, metabolic dysfunction-associated steatotic liver disease; SLD, significant liver disease (MASLD with LSM ≥8.1 kPa who were true positives at screening); T2DM, type 2 diabetes mellitus; +, diagnosed; −, undiagnosed.
FIGURE 2 Cost-effectiveness acceptability curve. Red and blue lines represent the cost-effectiveness acceptability of MASLD screening strategies 1-5 and standard of care, respectively. Strategy 1: ultrasound plus liver function tests; strategy 2: FIB-4; strategy 3: NAFLD fibrosis score; strategy 4: ELF; strategy 5: transient elastography. Each dot on the graph shows the probability of each of the strategies (1-5) being cost-effective (Y-axis) at a given cost-effectiveness threshold (X-axis). (1) Point A shows the point at which both strategies have a 50% probability of being cost-effective; (2) point B shows the point at which scenario 1 has a 100% probability of being cost-effective.
"year": 2023,
"sha1": "1fb280a4ad4c9ec745f5629756d7218f5804549c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/liv.15730",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "1f91b6f06d30d0029c5329f65b2ce409211791db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Public involvement in research about environmental change and health: A case study
Involving and engaging the public are crucial for effective prioritisation, dissemination and implementation of research about the complex interactions between environments and health. Involvement is also important to funders and policy makers who often see it as vital for building trust and justifying the investment of public money. In public health research, 'the public' can seem an amorphous target for researchers to engage with, and the short-term nature of research projects can be a challenge. Technocratic and pedagogical approaches have frequently met with resistance, so public involvement needs to be seen in the context of a history which includes contested truths, power inequalities and political activism. It is therefore vital for researchers and policy makers, as well as public contributors, to share best practice and to explore the challenges encountered in public involvement and engagement. This article presents a theoretically informed case study of the contributions made by the Health and Environment Public Engagement Group to the work of the National Institute for Health Research (NIHR) Health Protection Research Unit in Environmental Change and Health (HPRU-ECH). We describe how the Health and Environment Public Engagement Group has provided researchers in the HPRU-ECH with a vehicle to support access to public views on multiple aspects of the research work across three workshops, discussion of ongoing research issues at meetings and supporting dissemination to local government partners, as well as public representation on the HPRU-ECH Advisory Board. We conclude that institutional support for standing public involvement groups can provide conduits for connecting public with policy makers and academic institutions. This can enable public involvement and engagement, which would be difficult, if not impossible, to achieve in individual short-term and unconnected research projects.
Introduction
Institutions and funders increasingly require demonstrable public involvement in public health research, implementation and practice, not just in health services research. This can be difficult for researchers to implement in individual projects (Brunton et al., 2017), particularly if these are policy-driven, as projects about interactions between environments and health often are, because policy-driven projects tend to be short term and immediate. Public engagement and involvement can then seem excessively time-consuming. In addition, while more clinically focused health service researchers are often able to identify particular patient or service user groups who they see as obvious candidates for involvement in their work, researchers looking at the complex interactions between environments and public health concerns can find it difficult to conceptualise the public or 'publics' with whom they need to engage. Furthermore, where very broad and universal topics are under consideration, researchers may be anxious about a perceived need to involve a statistically representative public group, generally an unrealistic aspiration which can preclude achievable involvement activities and any benefits these could deliver (Maguire and Britten, 2017).
Proposals to develop functional plans or policies need to be informed by the views and experiences of the people who will use them or benefit from them. The ultimate goal of research about the interactions between environment and public health is to produce knowledge which can be effectively translated into public policy and which can influence systems, organisations and individual lifestyle choices. There have been many successful public health programmes, including the impact of the UK Clean Air Acts of 1956, 1968 and 1993; yet, there is also a long history of public resistance to public health, and perhaps particularly environmental, interventions when implemented in ways perceived as top down, technocratic and pedagogical. Examples range from the cholera riots of the early 19th century (Burrell and Gill, 2005) to current campaigns against fluoridation and pollution control: 'Forced-fluoridation Freedom Fighters 1' and 'Coal Rolling' (Schlenger, 2014; Tabuchi, 2016). It is important to recognise that research about relationships between the environment and public health takes place in a context which includes contested truths, political activism and power inequality (Contandriopoulos, 2004).
The way health issues are reported in the popular press has sometimes added to public confusion and doubt. For instance, two headlines in the same newspaper just over a month apart read 'Bowel and gullet cancer: Just two beers or glasses of wine "raises your risk"' and 'Wine is KEY to a longer life: Daily drink can slash the risk of an early death' (Daily Express, 2017; Reynolds, 2017). Newspaper coverage of 'superfoods' has also raised concerns that the promotion of research may sometimes be overly influenced by commercial concerns (Weitkamp and Eidsvaag, 2014). These examples demonstrate that media messages about health risks and benefits which confront people when making day-to-day choices are often confusing or misleading.
These issues have led institutions to identify a perceived public deficit in scientific understanding, but it has also been argued that the culture of science-policy institutions could be a contributory cause of public mistrust (Wynne, 2006). While five types of barriers to the effective involvement of communities by statutory organisations were identified by Picking et al. (2002), only one of these refers to a potential lack of capacity to engage on the part of the community. The other four barriers identified were a lack of organisational staff skills and competencies; dominance of professional cultures; unsupportive organisational ethos; and local or national political dynamics. As research about the environment and public health increasingly consists of multi-sector collaborations which may include statutory, academic and private sector partners, some of these institutional issues may become amplified within individual research programmes and projects.
Drivers of public involvement in environmental and public health interventions can be broadly categorised as based on either a 'utilitarian' perspective (i.e. focused on achieving specific health, informational or service delivery outcomes) or as motivated by 'social justice' and the redistribution of power and knowledge (Brunton et al., 2017). Those responding to these drivers have been characterised as either 'pragmatists' or 'activists' (Morgan, 2001). These categories can serve as a useful heuristic, to encourage thought and discussion about the aims and hoped for outcomes of public involvement processes when planning a specific research project or programme. In practice, however, there are likely to be a range of internal and external influences on any particular public involvement activities, introduced through differing individual, organisational and community interests (Oliver et al., 2015). It is also possible to imagine a middle ground where these perspectives converge, for instance, if it were believed that improved health outcomes follow from the more effective use of information and resources by empowered individuals and communities, or that support for the public funding of research is promoted by the democratisation of knowledge creation.
There are different definitions of public involvement and engagement utilised in this context. For example, the Research Councils UK (RCUK, 2014) uses 'Public Engagement' as an overarching term for activities in which the public are provided with access to knowledge generated through research and/or opportunities to influence research agendas. However, the National Institute for Health Research (NIHR) INVOLVE makes clear distinctions between (a) involvement: the active involvement of the public in projects or organisations; (b) engagement: the provision of information and knowledge about research; and (c) participation: where people are recruited to take part in research, including trials, focus groups or completing questionnaires (INVOLVE, 2018a).
Many of the activities classed as 'public engagement' by the NIHR INVOLVE definition may be external to particular research projects, or something which follows from their production of knowledge. Examples include presenting at public science fairs; participating in popular media programmes; holding institutional open days or public debates; and disseminating findings in lay terms to research participants and wider public interest groups. However, 'public involvement' implies a much closer working relationship and the ceding of some control to the involved public. Examples of public involvement include acting as advisors or steering group members, taking part in research prioritisation, advising on or co-producing the content and presentation of questionnaires or research documents, informing the structure of mathematical models, gathering data, contributing to data analysis and disseminating findings with or for the research team. Thus, the role of members of the public in public involvement is as advisors, co-applicants and co-authors and co-creators; colleagues rather than subjects, clients or audiences.
One way of conceptualising the complex range of activities that make up public involvement, drawing on sociological literature, is to see it as creating 'knowledge spaces' (Elliott and Williams, 2008;Gibson et al., 2012). This is a metaphor for structures which bring together people with different sorts of knowledge, understandings and experiences of a topic or phenomenon. Within these spaces, people act as co-contributors to what Jasanoff (2000) described as 'civic epistemology', processes which evaluate and utilise knowledge in societal decision-making.
In the context of research about the environment and public health, public involvement knowledge spaces bring together people who may approach issues from different perspectives: individual personal and relational, rational scientific, and often also political and/or public policy. This means these types of spaces are seen as existing somewhere in between the more formal institutional spaces and everyday informal relationships (Maguire and Britten, 2018). This liminal quality of knowledge spaces helps to explain why they can be difficult to establish by individual researchers running short-term projects. It takes significant investment of time and effort to build contacts, trust and ways of working together. In addition, the drivers of public involvement are often experienced at the local level, rooted in place, with communities helping shape the factors impacting on their health, and frequently involving a range of voluntary and community sector organisations. These drivers have led a number of research institutions to found standing public involvement groups.
This article uses our theoretical understanding and practical experience of public involvement in the work of The European Centre for Environment and Human Health (the Centre) to explore challenges and opportunities for creating a flexible knowledge space to enable effective involvement in public health research. In particular, we use the example of involving people in a specific set of our research activities, with the NIHR Health Protection Research Unit in Environmental Change and Health (HPRU-ECH), as a case study to demonstrate the added value brought to the Centre by our standing public engagement group the Health and Environment Public Engagement Group (HEPE).
Context
The Centre is part of the University of Exeter College of Medicine and Health, based at the Truro campus in Cornwall (ECEHH, 2018). Our research encompasses both emerging threats to health and well-being posed by environmental change, and the health and well-being benefits the natural environment can provide. This has covered a broad range of research topics, including the risk of infection from seawater (Leonard et al., 2018); relationships between pro-environmental attitudes and household behaviours (Alcock et al., 2017); health and well-being benefits of biodiverse environments (Lovell et al., 2014); and the relationship between coastal proximity and physical activity (White et al., 2014). This has led us to develop a truly inter-disciplinary team that crosses traditional disciplinary boundaries to include epidemiology, sociology, geography, policy analysis, systematic reviews, health economics, psychology, anthropology and microbiology (Phoenix et al., 2013).
The Centre was launched in 2011 with support from Convergence, the European economic regeneration programme for Cornwall and the Isles of Scilly. So, from the outset, the Centre has aimed to produce research that contributes to social and economic wellbeing in the South West England, as well as having impacts on national and international policy. This has led to the development of ongoing and close working relationships with local government, businesses and voluntary organisations, as well as with a range of national and international research partners.
In 2013, R.G. secured seed funding from the RCUK Catalyst project 2 at University of Exeter to found a standing group of local people to work with the Centre. Practical support for this was provided by the expert Patient and Public Involvement team from NIHR Collaboration for Leadership in Applied Health Research and Care, Southwest Peninsula (PenCLAHRC). 3 Invitations to introductory information events at the Centre were circulated through local radio, social media and through direct contact with community and environmental interest groups from across Cornwall.
From these events a standing group of 12 people were recruited, and they chose the name 'Health and Environment Public Engagement' (HEPE). 4 The intention was for the group to act as interested individuals and critical friends to researchers working in the Centre. They were not intended to act as representatives of any particular group or community. Over time, the interests that have led people to join HEPE have varied a great deal. They include public access to academic institutions; environmental sustainability; telehealth and low-carbon futures; links between food and mental health; and collaborating with others to find out more about the interaction between environment and health:

'I love being part of the HEPE group. I feel that our time and opinions are truly valued and taken account of. It's good to meet the people who are doing the research and feel able to help them. My fellow HEPE colleagues have also enriched the work I do as a Volunteer Coordinator for Cornwall Butterfly Conservation.' (J.P., HEPE member)

Since the autumn of 2013, HEPE has met quarterly. There have also been additional workshops and email consultations about specific research projects between meetings. Our policy, based on those of the PenCLAHRC team, is to reimburse members' travel costs in cash on the day and to offer a small honorarium in recognition of their contribution, paid into their bank account. The HEPE mailing list is reviewed annually, and any member who has not been in contact for more than 6 months is contacted to check whether they wish to continue their involvement. New members are recruited through outreach events and activities as well as by word of mouth and social media.
At the time of writing, six of the original HEPE members remain among the 25 people on our current mailing list, and through HEPE, more than 60 individuals have contributed to over 40 research projects in the intervening 4 years. HEPE members have contributed across the broad spectrum of research taking place in the Centre including work exploring human exposure to antibiotic resistant bacteria in coastal waters (Leonard et al., 2015); older people's sensory experience of the natural world (Orr et al., 2016); and potential benefits to public health and well-being from Europe's blue spaces (Grellier et al., 2017). These contributions have ranged from prioritising research questions and assessing and revising new plans for research to providing feedback on questionnaires, lay summaries and presentations which communicate research findings. HEPE members have also presented on public involvement at conferences and contributed to teaching at undergraduate and post-graduate levels.
This work has been sustained through the commitment of Centre staff with continuing support, in terms of both funding and staff time, from PenCLAHRC, as well as from sponsorship of individual workshops and activities from the budgets of specific projects which have used the resources of the group. R.G. has also received personal funding for work with HEPE as a RCUK Catalyst Public Engagement Champion. As well as involving HEPE in research projects identified by researchers, the Centre has also accessed funding to pursue interests arising within the group, provided training and supported networking opportunities.
Health Protection Research Unit in Environmental Change and Health
In 2014, NIHR founded 11 Health Protection Research Units (HPRUs) as partnerships between academic institutions and Public Health England (PHE). These are intended to act as multi-disciplinary centres of excellence for health protection research.
The HPRU-ECH 5 is a partnership between the London School of Hygiene and Tropical Medicine (LSHTM), PHE, the University of Exeter, the Met Office and University College London (UCL). It focuses on the health and well-being impacts of climate and other environmental change, and how these can be responded to by local, regional and national public health decision-makers. Research within this group aims to develop knowledge and tools that can facilitate adaptation and interventions which can mitigate negative impacts, and also promote benefits from changes in climate, land use and ecosystem services. These activities are intended to support PHE and other government bodies in developing and fulfilling public health policy requirements on adaptation to climate and other environmental change and on environmentally sustainable development.
The research of this partnership is organised in three interconnected themes: (1) Climate Resilience, led by PHE and the LSHTM; (2) Healthy Sustainable Cities, led by PHE and UCL; and (3) Public Health and the Natural Environment, led by PHE and the University of Exeter at the Centre for the Environment and Human Health. It is this third theme in which HEPE has been most closely involved. This theme explores the role of green/blue space (areas of vegetation and/or water) and the natural environment in improving mental and physical health by linking relevant information from large data sets, as well as by exploring the effects of environmental change (including changes in climate and land use) on the transmission of infectious and vector-borne diseases, and on non-communicable diseases through changes in aero-allergens (such as pollen and harmful algal blooms).
Design and conduct of involvement
Once the HPRU-ECH partnership was established, L.E.F., director of the Centre, attended a HEPE meeting where she described the research priorities that had been set by the funders and the structure of the partnership. The group then discussed what aspects of the research they found particularly interesting and how they could most effectively influence the research agenda. The health impacts of access to the natural environment are a subject that HEPE had a long-standing interest in pursuing. An issue that interested the group was uncertainty about the exact questions which would be used to interrogate large data sets in order to explore the interaction between individuals' access to green/blue space within their local environment and their health. So, it was agreed to run annual workshops with Centre researchers working on the HPRU-ECH to explore these issues and the questions that arise from them. In between these workshops, HEPE would be kept informed of the work of the HPRU-ECH, and the group also offered to provide representatives for the HPRU-ECH Advisory Board.
Because this is public involvement in which members of the public are acting as special advisors and activity is constituted of consultation, collaboration and co-production of the research, as opposed to data gathering, these activities did not require review by a research ethics committee (INVOLVE, 2018b).
The first workshop -2015
The first workshop took place in November 2015. The HPRU-ECH research team from the Centre worked with K.M., an expert in public involvement from PenCLAHRC. They discussed the requirements outlined in the HPRU-ECH funding agreement, and identified uncertainties and issues which needed to be prioritised in order to plan their ongoing research programme about the health and well-being implications of access to green and blue space in the local environment. From this, they developed a workshop plan which included a series of structured activities to enable focused discussions on these topics with members of the public.
Six members of the research team and fourteen members of the public attended the workshop (six of these were HEPE members and eight were from their wider networks including community, environment and wildlife groups). Information about the national HPRU programme, the remit of the HPRU-ECH and the role of the Centre were circulated in advance.
The workshop began with an icebreaker exercise where people were invited to distribute 'coin' stickers between public health and clinical services. This was intended to get people mingling, to open discussions about their own priorities for health funding in general and to explore the distinction between public health initiatives and clinical services.
The second activity was a series of small group discussions about the impact that perceptions of environmental risk have on access to nature. People moved between tables, each with a researcher facilitating discussion of a slightly different question. These explored what information sources people access; how this influences their access to green space in the local environment; and how information could be improved. Everyone was encouraged to jot their ideas onto the tablecloths. There is not space here to present all these notes, but an area of broad agreement was the prioritisation of information on how to mitigate risks rather than simply identifying them. For example, too many 'danger' signs were seen as making the outdoors seem intrinsically risky, leading people to greater health risks through avoidance of these spaces and resultant inactivity.
The research team then presented a brief explanation of the data resources they intended to access in their upcoming work on how access green space in the local environment might be related to health. They delivered a series of individual 'pitches' for particular health research issues which could be prioritised for investigation using that data. These included Physical Activity; Obesity; Type 2 Diabetes; Respiratory Health; Mental Health; Social Relations; and Costs of Disease and Cost-benefit Analysis. In an initial vote, taken immediately following the pitches, Mental Health gained the most votes with Physical Activity and Social Relations coming in at joint second. There were then roundtable discussions about why these decisions had been made, followed by a second vote. In this second round, Mental Health still came out top, but Physical Activity was pushed in to third place by Social Relations.
Interestingly, discussions and comments posted on the board suggested that people had not voted for Costs of Disease and Cost-benefit Analysis because they believed that these should be an integral part of any study, rather than a topic in themselves. This is a good example of why it is important to have multiple ways of capturing information from workshop activities: had we only recorded the votes rather than the discussions that underpinned them, we might not have recognised that Costs of Disease and Cost-benefit Analysis was seen as a high priority.
Following this event (and also following all subsequent events), feedback was circulated to the workshop attendees, other HEPE members and the researcher partners in the HPRU-ECH. The priorities identified in the workshop became a framework for planning research activities within Theme 3 of the HPRU-ECH.
Second workshop -2016
A year later, HEPE members and people who had taken part in the first workshop were invited to a second event. This was intended to give feedback on the progress of the research and to enable discussion of issues that had arisen since the last meeting. Nine members of the public were able to attend. Seven were HEPE members, two of whom had joined the group after attending the first HPRU-ECH workshop. Three members of the public involved in this workshop had not attended the previous year. Five members of the research team also took part.
The workshop began with a presentation about the work which had already taken place. The research team were initially concerned that their progress fell short of what would be expected, particularly on the prioritised issues of the impacts of access to green space in local environments on Mental Health and Social Relations. This was because the team had needed to address a number of time-consuming administrative and data security issues in order to link the different data sets they wanted to work with. In the event, the group were very supportive and actually welcomed the fact that data security and safeguarding of personal privacy were being taken so seriously by the team.
The bulk of the workshop was devoted to an exercise intended to take forward discussions from the 2015 meeting, about how to use the concepts of cost-effectiveness and value for money in the context of health impacts of access to green space in local environments. This involved discussion of how utility values are commonly calculated. The group were then introduced to the items covered by the General Health Questionnaire (GHQ-12; GL Assessment, 2017) and the 12-Item Short Form Health Survey (SF-12; Optium, 2018), as data gathered using both of these were available in the data sets being accessed.
Three groups, each including a researcher and three public contributors, were given bullet point lists of symptoms experienced by a fictional individual before and after an imagined 'green space' intervention. The interventions included regular organised bus trips to public parks or gardens; a scheme for sharing gardening skills with allotment holders and school students; and volunteering on a tree planting project. The group were asked to discuss whether they felt, based on these 'symptoms', the intervention had made a difference to the individual. If so, was this a difference likely to be important to that individual and would this be identified by their responses to the GHQ-12 and the SF-12 Health Survey?
The groups were then given a narrative about the same individuals, giving more of their personal and social background, and how the green space intervention interacted with other aspects of their lives. Workshop contributors were asked to review their previous answers in light of these insights. Researchers reported that these discussions were extremely helpful in highlighting the complexity of links between green space interventions, well-being and health. In particular, the workshop raised the magnified importance of what might appear to be quite small differences for people with multiple physical, mental health and social issues. Researchers were particularly interested in the scepticism public contributors voiced about the potential for these positive impacts to be identifiable in the data. Many of the group's comments echoed the prioritisation of Mental Health and Social Relations which had been identified by attendees at the previous workshop.
Discussions arising from this event were particularly influential in terms of how members of the research team subsequently approached the interrogation and interpretation of data.
Third workshop – 2017
The third workshop took place in November 2017. It involved 10 members of the public, 4 of whom had not attended either of the previous workshops and 2 of whom were not HEPE members. Five members of the research team also attended. Information from previous workshops was circulated in advance.
The first part of the workshop again provided an update on the Centre's research within the HPRU-ECH. The research team presented ongoing work on links between the natural environment and physical and mental health as well as work comparing the health and well-being impacts of exposure to nature with those of social connectedness.
The design of research into nature exposure and Social Relations, a topic which had been highlighted as an important issue in the two previous workshops, became the main focus of the afternoon. I.A. proposed to explore associations between residential area natural environments and the quality of people's relationships with their spouse/partner, using archived data available from the UK Household Longitudinal Survey (UKHLS, Understanding Society) as a possible measure of Social Relations. The relevant survey questions ask about how partners communicate; shared activities; and whether respondents had regretted or thought about ending the relationship.
The proposed research would examine statistical relationships between individual UKHLS participants' scores on the measures to be developed from responses to a multi-item questionnaire about people's marital relationships and their residential area exposure to natural environments. This could improve understanding of whether health is improved directly by exposure to natural environments, or whether this occurs indirectly because these environments promote better social relationships.
The objective of the workshop was to explore views about which, if any, of the UKHLS questions on partner relationships were most relevant to measuring the quality of these relationships. First, we discussed that, in this context, for a measure to be 'valid' in terms of the proposed research, it should be equally useful for all social groups; and for it to be considered 'relevant', it should really measure the quality of partner relationships, and not something else, like the different cultures, habits or expectations of particular groups or social classes.
We again used one of the narrative scenarios from the previous workshop, mapping an individual's social relations before and after a green space intervention. The scenario provided a focus for discussions about how useful a range of UKHLS questions would be in identifying differences in the quality of these relationships. Another aspect under discussion at the workshop was whether and how exposure to green space might affect an individual's answers to these questions.
While some of the questions seemed relevant, concerns were expressed about how the language used in others might be understood differently by different groups. For example, the idea of 'calmly discussing' was not seen as something which would be particularly valued by all groups or cultures; for some people, having more excitable conversations might not indicate anything negative about their partner relationships. Similarly, whether people had thought about divorce was seen as something likely to be heavily influenced by cultural and religious beliefs.
It was also felt that questions about whether partners shared interests could be misleading, as having diverse interests is often part of a good relationship. Yet the group believed there could be a plausible connection between the quality of individual personal relationships and green space exposure, suggesting that going for a walk together in the countryside or a park, for example, might immediately lead to people exchanging ideas, as being on the walk might give them the chance to discuss things then and there.
Ups and downs
We judge the involvement of HEPE in the HPRU-ECH to have been successful and impactful: it has helped to shape and prioritise the research agenda, provided insights which supported analysis and contributed to presentations. Yet, we would not want to paint it as having been without difficulties and misunderstandings. Not all the research team have been enthused and confident throughout, and not everything public contributors have wanted the project to achieve has been possible.
It has been challenging for some members of the research team to involve themselves in arenas that expose their scientific rigour to potential criticism from the perspective of 'lay' (perhaps here best understood as 'unsanctified' knowledge; Britten and Maguire, 2016). Also, as mentioned above, the requirement to return to the group a year after asking for their research priorities and admit that little progress had been made on some of these caused a degree of trepidation. In both cases, these concerns were successfully addressed through engaging in ways that were honest and focused. Open discussions have built trust and supported positive future engagement.
Not everyone who has taken part from a public perspective has been able to recognise the requirements of the research, sometimes seeking to raise issues which could not be addressed effectively in that arena. Having a place where personally important but currently tangential issues can be collected and reported in feedback has proved helpful. Sometimes, it has been possible to identify other research where these issues are relevant so we can signpost people to those projects or organisations. Sharing the agenda in advance, shaped around structured activities, has also helped people to remain focused. These are tools that group members use to remind each other why they are there, taking some control of the space and keeping the discussions on track for themselves.
Sometimes, HEPE members have been critical of the support they have received. For example, after a separate dissemination workshop we ran, which was largely aimed at local government employees with responsibilities for managing public open spaces, we received feedback from public contributors who expressed dissatisfaction with the level of briefing we had provided before the event. They felt their role had not been sufficiently clear, so they were left feeling uncertain about how valuable and impactful their presence at the event had been. This feedback was discussed by the research team, and lessons were drawn from it to inform similar work in the future, including a decision to offer a more detailed face-to-face pre-event briefing as well as sending information by email and/or post.
Public contributors, and indeed researchers, have sometimes been frustrated by institutional barriers and delays which impact on engagement. For example, when L.E.F. first discussed the HPRU-ECH's work with HEPE, the group proposed having two open seats on the Advisory Board, rather than nominating a representative. This would have given the group flexibility and ensured that representation was not dependent on the availability of an individual member. It would also have been an opportunity for members who had not previously been involved in this sort of work to be mentored by more experienced members. In the event, due to organisational and budgetary issues, they were only invited to send a single nominee to attend two meetings. The group has seen this as an opportunity missed.
The experience of involving the public in the HPRU-ECH has been described by researchers as 'playing catch-up', because funders' expectations and definitions of public involvement were not clearly understood by the whole team at the outset. HEPE members have suggested that funders of public health research could be more proactive in communicating information about public involvement and ensuring that realistic public involvement plans are in place before they award major grants. This would require accessible public involvement infrastructures and resources which could support effective planning for involvement at an early stage, as well as infrastructures able to include public contributors in these planning processes. Better reporting of public involvement activities and costs as well as involving the public more effectively in project monitoring and reporting have also been raised by HEPE members as drivers towards more embedded and effective involvement practices.
Discussion
Although the HPRU-ECH workshops have involved 20 different members of the public over 3 years, the continuity of engagement has been maintained through the distribution of electronic feedback to HEPE and discussions at their quarterly meetings. This has enabled contributions from many more people unable to attend workshops, thus broadening accessibility and supporting ongoing interest in the project; leading us to argue that the value provided to the research and the quality of the discussions taking place were not just a product of the workshops themselves. The regular cycle of HEPE meetings and the resultant creative and participatory information sharing have supported the creation of an open knowledge space connecting the Centre and the wider community.
This has been about building relationships over the long term which, as described above, is not to say that everything always runs smoothly. As in any relationship, there have been some disagreements and some difficult discussions. There would be little point in engaging if we always agreed, and it is important for 'critical friends' to be able to maintain that critical edge. The point is for us to be able to communicate disagreements, find a way of understanding our differences and move beyond them. To achieve this, researchers and members of the public need supported spaces to build their confidence and skills to engage effectively. A standing group like HEPE can act as a vehicle to enable this, but it can only be achieved through the investment of time and resources by funders and research institutions.
Research cycles do not run smoothly or evenly. Because of the way that funding rounds, ethics committee meetings and other researcher deadlines fall, there are times when there are many opportunities to be involved and other times when these opportunities seem scarce. A standing group can share out tasks when researchers are being demanding, particularly if people are offered different ways of contributing tailored to their existing skills and availability. Supporting the group to pursue their own interests, sharing information, skills and networking opportunities can help to fill the gaps. This sort of reciprocity helps to sustain relationships with, and within, the group (Mathie et al., 2018), as well as providing the welcoming research environment and support for group members as research partners which has been identified as vital in the context of patient/carer involvement in research (Black et al., 2018).
Working with J.P., a member of HEPE who also works as a volunteer co-ordinator for Butterfly Conservation, we have developed a conceptual model for the cycle of public engagement, with butterflies (Figure 1). It is intended to demonstrate that, provided with sufficient resources, engagement is an ongoing process which has the capacity to be self-renewing and productive. But this is not a trivial process: it is about changing the relationship between the public, public policy and academic science in order to enable the effective mobilisation of knowledge in multiple directions, and for the benefit of us all.
The sustained effort required is not efficiently supported by short-term grant funding; in this context, 3-5 years is still short term. In order to enable effective public involvement and engagement in public health research, funders and research institutions need to provide core institutional support for open-ended public involvement knowledge spaces. This means tackling institutional inertia, investing in skills and resources for ongoing public engagement and enabling cultures of publicly engaged research (Hinchliffe et al., 2018). As Pickin et al. (2002) have argued, developing a more participatory culture means questioning and re-evaluating dominant professional values; it also requires organisations to develop structures that support openness rather than risk-aversion. In order to achieve this, we need to ensure that we share our experiences of involvement and engagement, the activities we undertake, their impacts on our research and the difficulties we encounter in the process (Staley, 2015).
Involvement of the public in research is still facing many challenges. In the context of rural settings, Chambers (1983) argued that local people are often considered insufficiently knowledgeable to engage in scientific debates on the issues that might affect them. This statement is probably true of the way other populations are viewed, for example, people in other socio-economic situations, especially when the scientific debate involves 'hard science' such as mathematics. We argue that integration of participatory research with research methods such as traditional mathematical modelling can be highly beneficial in public health research. For example, the deliberative involvement of the public, patients and carers can help to shape the structure of mathematical models, and/ or selection of the most relevant parameters of the system (Grant et al., 2016;Scoones et al., 2017). Groups like HEPE can be involved in identifying desirable outputs from mathematical and other 'hard science' approaches, in assessing the assumptions often made in models and hypotheses, in critically reviewing the findings, in designing and evaluating software and web-based implementation of theoretical approaches, in producing lay language versions of scientific papers and in dissemination of research findings.
Conclusion
The development of productive public involvement in the work of HPRU-ECH's Theme 3 has been facilitated and sustained by the Centre's existing and continuing relationship with HEPE. In this, HEPE has acted as more than a standing group of critical friends; it has become an increasingly powerful, skilled and trusted, open-ended knowledge space, connecting researchers and students with their local communities. This has not been about creating a sounding board which will echo and broadcast institutional views. It is a space in which differing ideas and experiences can be exchanged and explored. Yet, it is important to recognise the constraints acting upon this.
The physicality of 'knowledge space' as a metaphor implies a position in relation to other spaces; this led Massey (1992) to argue that it is important for such spaces to be explicitly located in their broader social and political landscape. A location which has been proposed for an ideal public knowledge space in health research (Maguire and Britten, 2018) is one equipoised between what Habermas (1984, 1987) describes as the 'system' of rational scientific thinking (inherent in institutions) and the 'lifeworld' of relational values (associated with personal and familial relationships). Such a space would be equally owned by those taking part in it and would embody the criteria of the 'ideal speech situation' (Habermas, 2001) by including anyone able to make a relevant contribution; ensuring that each has equal opportunity to speak; and guaranteeing the absence of all deception or coercion.
HEPE does not achieve that ideal. The group was instigated by researchers in a university. It is resourced in terms of funding, meeting facilities and project work, through those researchers and that institution. Those resources are limited and only accessible to the group through the research team. This dependence can be seen as entrenching an already inequitable relationship between HEPE members and researchers, given that cultural and social capital is inherent in academic status. But these are not the only inequalities acting within this space. Differences of power and status within organisations introduced by career and management structures can constrain candour (O'Toole and Bennis, 2009), and this could inhibit the openness of researchers among themselves and with HEPE; the ambition to enable a socially and culturally diverse membership introduces potential inequalities between HEPE members in terms of race, class, gender, educational attainment and so on.
These issues imply that encounters within this space will always fall short of an 'ideal speech situation'. In practice, it may be more useful to see those criteria as aspirational goals rather than requirements (Gillespie et al., 2014). There is value in recognising that, within our limited resources and capacities, we cannot fully achieve equity. As Nancy Fraser (1990) has argued, 'participatory parity', a situation where 'ideal speech' criteria are met, is further undermined if we fail to remain attentive to existing inequalities. It is vital for us to identify those issues which arise from differences in power, capacity, culture, resources and skills between contributors in order for us to act in a way that addresses and mitigates them. What we have been able to create is a flexible vehicle for involvement which provides people with opportunities to contribute in different ways and at different times. Within this, the use of different skill sets and cultural resources are valued. Support and training are also available where there is an appetite for these. The playing field is still uneven, but recognising this enables some mitigation of inequalities.
Mitigating inequalities is and will remain an ongoing task of engagement; it is a process of identifying differences, developing understandings and building relationships. In this article, we have outlined in detail how we have approached this by designing a range of activities that support members of the public to engage with the research of the HPRU-ECH and the broader work of the Centre. These activities support researchers to engage with people who view their work from different perspectives. This is not a matter of diluting the quality of research or undermining the skills brought to bear on research questions. Engaging with the public adds to the knowledge informing the research and ensures its findings are more effectively communicated to those able to apply them in practice. Interdisciplinary research into the interactions between environments and human health is a particularly interesting field to develop this type of publicly engaged research. Our work on the HPRU-ECH already combines different types of knowledge, understanding, skills and methodologies; deals with complex and interacting systems; and requires translations between distinct discourses. In addition, research about environmental change and health encompasses a range of topics which are often of immediate personal interest to members of the public. This provides us with an opportunity to build strong relationships between research institutions and communities, which we hope will act as robust and ongoing connective knowledge spaces. Members of the public, researchers and policy makers will arrive with different agendas, skills and requirements. No single knowledge space will be able to contain all of the things people bring to them or require of them; this means it is important to support networking and signposting to other opportunities and resources. Ongoing structural support, resources and facilitation are needed in order to sustain engagement. What we have outlined in this article is not a single transferrable model which will work for all research, all institutions and all communities. It is a flexible approach to maintaining public involvement knowledge spaces which can enable us to gain different understandings of others and, through them, of ourselves.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article:
"year": 2019,
"sha1": "ef6aaad176d391c32829d2d095a45d352907c03d",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1363459318809405",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "17f7ffef60960d88cf5629cfa3adc22dfea4d24d",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
Observations of the Outer Heliosphere, Heliosheath, and Interstellar Medium
The Voyager spacecraft have left the heliosphere and entered the interstellar medium, making the first observations of the termination shock, heliosheath, and heliopause. New Horizons is observing the solar wind in the outer heliosphere and making the first direct observations of solar wind pickup ions. This paper reviews the observations of the solar wind plasma and magnetic fields throughout the heliosphere and in the interstellar medium.
Fig. 1 (a) A schematic diagram of the interaction of the heliosphere and LISM, looking down at the solar equatorial plane. (b) The trajectories of V1, V2, and New Horizons and the locations of the V1 and V2 TS and HP crossings
Fig. 2 Plasma speed, density, and temperature from 1 AU into the LISM. Data inside the HP are 25-day running averages from the V2 plasma science (PLS) instrument; the densities outside the HP are from the V1 and V2 plasma wave subsystem (PWS). The radial speed varies between 300 and 850 km/s; variations smooth out at larger distances as the streams interact (Elliott et al. 2019)
At 20 AU, V2 observed solar minimum at low latitudes with slow, dense, cool solar wind. After solar minimum, a roughly 1.3-year oscillation was observed at V2 and at spacecraft throughout the heliosphere (Richardson et al. 1994; Gazis et al. 1995). A similar period was observed in cosmic rays, auroras, and convection speeds on the Sun. At about 50 AU, V2 observed the 1996 solar minimum when it was at 20°S heliolatitude. At these higher latitudes a significant fraction of fast wind was mixed in with the slower equatorial wind, giving a rise in speed to over 500 km/s and an increase in temperature.
Outside of 50 AU the solar wind is slowed down by pickup ions (PUIs); speeds are consistently below 400 km/s at both V2 and NH even though the solar cycle activity was quite different (Richardson et al. 2008a; Elliott et al. 2019). PUIs originate from LISM neutrals that are ionized and accelerated in the solar wind. The PUIs initially have thermal energies comparable to solar wind flow energies and by 15 AU dominate the plasma thermal pressure. The PUI fraction was ∼17% at the TS, giving an 18% slowdown of the solar wind, so roughly 1/3 of the solar wind flow energy is transferred to the PUIs before the TS.
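As a quick arithmetic check on that last figure (a sketch only; the 18% slowdown is the value quoted above, and the flow energy simply scales as the square of the speed):

```python
# Fraction of solar wind flow (kinetic) energy transferred to the PUIs
# for a given fractional slowdown: flow energy scales as v**2, so an
# 18% speed decrease removes 1 - (1 - 0.18)**2 of the original energy.
slowdown = 0.18                                   # fractional speed decrease (from the text)
energy_removed = 1.0 - (1.0 - slowdown) ** 2
print(f"flow energy transferred to PUIs: {energy_removed:.0%}")  # ~33%, i.e. roughly 1/3
```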
The solar wind and heliosphere vary with the solar cycle. At solar minimum the Sun's dipole tilt is small and the coronal holes accelerate solar wind at high latitudes. Corotating interaction regions (CIRs) form when high-speed solar wind streams overtake slower streams, with a forward-reverse shock pair forming at the interface; these are most prevalent in the declining phase of the solar cycle. The largest solar transients are coronal mass ejections (CMEs), which propagate through the heliosphere as interplanetary CMEs (ICMEs). ICMEs expand outward to about 15 AU, where they reach equilibrium with the surrounding solar wind (Richardson et al. 2012). ICMEs propagate through the heliosphere to the HP, where they can drive shocks in the VLISM (Kim et al. 2017; Liu et al. 2014; Gurnett et al. 2013). More ICMEs occur at solar maximum. These ICMEs merge, with faster ICMEs overtaking slower ones, to form merged interaction regions (MIRs). Shocks driven by ICMEs are observed in the solar wind out to the TS; the last large shock, observed at 78 AU (Richardson et al. 2008a), showed jumps in B, V_R, and density N, a large increase in the temperature T, and enhanced energetic particle fluxes.
The TS is described in detail below; at the TS the speed drops by a factor of 2, the density goes up by a similar factor, and T increases by an order of magnitude (Richardson et al. 2008b). V_R drops slowly across the heliosheath. N decreased after the TS at solar minimum, then increased at ∼105 AU when solar maximum reached the heliosheath, then was fairly constant until the HP. T decreased from 120,000°K after the TS to 50,000°K outside ∼90 AU. The MIRs driven by ICMEs are still observed in the heliosheath (see section below), with two examples at ∼100 and ∼109 AU. Outside the HP, the Voyager PWS instruments determine the plasma density when plasma oscillations are observed (Gurnett and Kurth 2019). These densities increase away from the HP. PLS data suggest the plasma T is high, 30,000°-50,000°K, after the HP; T must eventually decline to the LISM value of 6400°K.
The Energetic Particle Heliosphere
In addition to the plasma heliosphere, Voyager has also observed the energetic particle heliosphere. Figure 3 shows the distance and heliolatitude of V1 and V2 in panel (a), 140-220 keV ions in panel (b), >70 MeV GCRs in panel (c), and the sunspot number in panel (d). The keV particles peak near the Sun, where they are accelerated by shocks driven by ICMEs or corotating interaction regions (CIRs). Their intensities increase with distance, with occasional increases at large events such as the GMIR in Sept 1991. At 65-75 AU the intensities reach a minimum. Then, almost 10 AU before the TS crossing, the intensities increased due to particles accelerated at the TS leaking inwards into the supersonic wind. These particles are observed when field lines connect the TS and the spacecraft; the intensities increase up to the TS.
At the TS, where these particles are accelerated, the intensity jumps, then remains fairly steady in the heliosheath with a slow decrease as the spacecraft approach the HP. Note that even though V1 and V2 are separated by over 100 AU, the intensities are very similar. At the HP the intensities rapidly decrease to background levels (see HP section). The GCRs come from outside the heliosphere and their intensities generally increase with distance. They are modulated by the solar cycle, with less at solar maxima when magnetic fields are higher. At the TS the slope increases, then the intensity takes a final jump at the HP (for more details see Rankin et al. 2022).
The Solar Wind in the Outer Heliosphere
Many features of the evolution of the solar wind are discussed in previous reviews (Burlaga et al. 1996; Richardson and Stone 2009; von Steiger and Richardson 2006; Burlaga et al. 2021b); therefore, we provide only a high-level overview. New Horizons (NH) provides the most recent solar wind data from the outer heliosphere. NH does not have a magnetometer; however, the plasma instrument provides excellent H+ and He++ data and the first pickup ion measurements from the outer heliosphere. Figure 4 shows the V2 and NH solar-rotation-averaged plasma speeds, densities, temperatures, dynamic pressures, and thermal pressures versus distance. The bottom panel shows NH remains at low heliolatitudes whereas V2 moved south after the Neptune encounter at 30 AU. NH speeds are close to, but slightly below, the V2 speeds. The difference grows larger outside ∼43 AU; this increase is probably because V2 observed solar minimum conditions at these distances and, since it was at 20°S, it observed a mixture of fast and slow solar wind. Both spacecraft observed an R^-2 density fall-off, as expected. The NH densities are slightly higher than those at V2 inside 43 AU and significantly higher outside 43 AU, giving similar fluxes. The NH interstellar pickup ion (PUI) densities shown in purple are fairly constant, so the percentage of PUIs increases with distance; at 50 AU about 8% of the ions are PUIs (McComas et al. 2017). The solar wind thermal ion temperatures decrease out to 20-25 AU, then increase due to energy transfer from the PUIs (Smith et al. 2006). The NH PUI temperatures are fairly constant from 10-46 AU at near 4 × 10^6 K; the PUIs are also heated as they move outward (see Zirnstein et al. 2022). The NH thermal pressure plot shows that the PUI pressure dominates the thermal pressure outside ∼12 AU. The physics of the plasma changes when this occurs, with wave speeds determined by the PUIs and shocks preferentially heating the PUIs.
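The geometry of that argument can be made concrete with a minimal sketch. The 1 AU proton density and the constant PUI density below are illustrative round numbers chosen only to be consistent with the ~8% figure at 50 AU quoted above; they are not fitted NH values:

```python
import numpy as np

n0 = 5.0          # solar wind proton density at 1 AU [cm^-3] (illustrative)
n_pui = 1.7e-4    # interstellar PUI density, roughly constant [cm^-3] (illustrative)

for R in (10.0, 30.0, 50.0, 84.0):          # heliocentric distance [AU]
    n_sw = n0 / R**2                        # R^-2 fall-off of the thermal wind
    frac = n_pui / (n_pui + n_sw)           # PUI fraction of total ion density
    print(f"R = {R:5.1f} AU: n_sw = {n_sw:.2e} cm^-3, PUI fraction = {frac:.1%}")
```

With these numbers the fraction grows from well under 1% at 10 AU to ~8% at 50 AU and approaches ~20% near the TS, consistent with the values quoted in the text.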
Interstellar Pickup Ion Effects
The first observational evidence for interstellar PUIs was observations of pressure balance structures in the solar wind; outside of 20 AU a hot PUI component was required if the structures were in pressure equilibrium (Burlaga et al. 1994). The increase in temperature outside 20-25 AU discussed above is another effect of the PUIs; the initial PUI ring distributions are unstable and drive magnetic waves which heat the thermal plasma (Smith et al. 2006; Isenberg et al. 2005). The energy to heat the PUIs comes from the solar wind flow energy; slowing of the solar wind is first observed at about 30 AU (Wang and Richardson 2003; Elliott et al. 2019) and the slowdown is about 17% ahead of the TS (Richardson et al. 2008a). The solar wind slowdown is related to the percentage of PUIs by n_pui/n_sw = [2(2γ − 1)/(3γ − 1)](δv/v_0), where n_pui is the PUI density, n_sw is the thermal solar wind proton density, δv is the solar wind speed decrease, v_0 is the unperturbed solar wind speed, and γ is the ratio of specific heats (Lee 1995; Richardson et al. 1995). For γ = 5/3, n_pui/n_sw = (7/6)(δv/v_0), and a 17% slowdown implies that PUIs make up 20% of the solar wind (although we note below that γ may not be constant). Elliott et al. (2019) tested the equation by using pickup ion and solar wind densities measured at NH to calculate n_pui/n_sw and using the speed at NH together with speed measurements at 1 AU from ACE and STEREO to estimate the amount of slowing (δv/v_0). They found that when they used a constant polytropic index the two sides of the Richardson et al. (1995) equation did not equate. On investigation it became clear that the relationship between the solar wind density and solar wind temperature slowly and systematically evolved with distance. Elliott et al. (2019) fit plots of solar wind density and temperature measurements for given distance ranges using 3 methods. Method 1 fit a simple scatter plot with a line for a given distance range. Method 2 binned the temperature into density bins for a given distance range and fit the binned results. Method 3 fit sets of 8 adjacent points, binned the resulting polytropic indices by speed for a given distance range, and determined a single index, since the index did not vary much with speed. All 3 methods produced a solar wind polytropic index that decreased with increasing distance (Fig. 5). Theoretically, the polytropic index usually refers to the entire plasma and not only the solar wind portion. However, the physical effect encapsulated by the radially decreasing polytropic index determined for the solar wind portion of the data is that the solar wind temperature and density relationship slowly evolves with increasing distance as the addition of interstellar pickup ions heats and slows the solar wind.
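The relation can be evaluated directly; a minimal sketch, assuming the coefficient as reconstructed above (it reduces to 7/6 at γ = 5/3, matching the 17% slowdown → 20% PUI figure in the text):

```python
def pui_fraction(slowdown, gamma=5.0 / 3.0):
    """PUI-to-thermal-proton density ratio n_pui/n_sw implied by a fractional
    slowdown delta_v/v0, using the mass-loading relation quoted above;
    the coefficient 2*(2*gamma - 1)/(3*gamma - 1) reduces to 7/6 for gamma = 5/3."""
    return 2.0 * (2.0 * gamma - 1.0) / (3.0 * gamma - 1.0) * slowdown

# The ~17% slowdown observed ahead of the termination shock:
print(f"n_pui/n_sw = {pui_fraction(0.17):.2f}")   # ~0.20, i.e. PUIs ~20% of the ions
```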
When Elliott et al. (2019) substituted the formula for the polytropic index as a function of distance into the formula derived by Richardson et al. (1995), both sides of the equation agreed and the measured slowing was consistent with the measured PUI density (Fig. 6). Additional measurements are required to quantify how the solar wind is heated and slowed by picking up interstellar material. One needs to measure solar wind parcels as they move away from the Sun, the interstellar neutrals, and the pickup ions simultaneously. Although such measurements do not exist, the interaction between the solar wind and interstellar material can be simulated in a more self-consistent way than is done in the work of Richardson et al. (1995) and Elliott et al. (2019) in order to better understand how the solar wind is heated and slowed as interstellar material is picked up by the solar wind on its journey through the heliosphere.
Fig. 6 The average measured solar wind slowing with standard deviations (solid grey line and grey bar) and, in purple, the estimated amount of slowing determined from the NH measured n_pui/n_sw in the solar wind using the formula and the method 3 solar wind polytropic index radial variation with distance determined from solar wind observations (Fig. 15 from Elliott et al. 2019)
Solar Wind Magnetic Fields Between 7 and 87 AU
This section describes observations of B in the solar wind by Voyager 1 between 7 AU and 80 AU on scales from one to 128 days (Burlaga and Vinas 2005a). Relatively large clusters of strong magnetic fields, MIRs, move past a spacecraft during a fraction of a solar rotation. MIRs form near one AU and can persist through the heliosheath to the heliopause. During the declining phase of the solar cycle, MIRs often occur quasi-periodically, as shown in data from 1980 in Fig. 7(a). More commonly, MIRs occur sporadically, as illustrated in Fig. 7(d) (Burlaga and Vinas 2005a). MIRs can merge to form larger clusters of strong magnetic fields, called global merged interaction regions, GMIRs, that encircle the sun and extend to high latitudes. GMIRs typically form near ∼20-30 AU, and persist throughout the heliosphere and heliosheath as shown in Fig. 1(b).
All of these interaction regions have "filamentary structures" associated with jumps in B of various sizes over a wide range of scales. Jumps in B and filamentary structures, seen in all of the panels in Fig. 7, are fundamental features of the heliosphere and heliosheath, related to the fact that the solar wind is a driven, non-linear, non-equilibrium system. These features are characteristic of the multifractal structure of the solar wind and heliosheath magnetic field (Burlaga 1995, 2001, 2004). Boltzmann-Gibbs statistical mechanics, with an additive entropy function, cannot describe non-equilibrium physical systems with large variability and multifractal structure. Tsallis (1988, 1994, 2004a) introduced a generalization of Boltzmann-Gibbs statistical mechanics in which the entropy is non-additive (non-extensive), and he introduced a probability distribution function, the Tsallis distribution function, that can describe observations of the magnetic field in the heliosphere and heliosheath. The Tsallis distribution is proportional to the q-exponential function exp_q(−B_q E) = (1 + (q − 1)B_q E)^(−1/(q−1)). When q = 1 the q-exponential reduces to an ordinary exponential in E; since E is quadratic in the increments, this corresponds to a Gaussian distribution. The q-exponential function approaches the form of an exponential function for small E and a power-law function for large E. The 2nd moment of the q-exponential function is finite for q < 5/3 and diverges for 5/3 ≤ q ≤ 3. The kappa function, which has been used in plasma physics to model speed distribution functions (Olbert 1968; Vasyliunas 1968), is the q-exponential distribution with q = 1 + 1/κ.
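A minimal numerical sketch of the q-exponential and its limits (the parameter values are arbitrary and purely illustrative):

```python
import numpy as np

def q_exponential(x, q):
    """exp_q(x) = (1 + (1 - q) * x) ** (1 / (1 - q)); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

E = np.linspace(0.0, 10.0, 6)
Bq = 1.0
print(q_exponential(-Bq * E, 1.0))                 # ordinary exponential (q -> 1 limit)
print(q_exponential(-Bq * E, 1.5))                 # power-law tail for large E
kappa = 4.0
print(q_exponential(-Bq * E, 1.0 + 1.0 / kappa))   # kappa-function form, q = 1 + 1/kappa
```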
One can quantitatively describe the jumps and filamentary structure by studying the normalized increments of B, dB_n(t_i) = (B(t_i + t_n) − B(t_i))/⟨B(t_i)⟩, on scales t_n = 2^n days, where n = 0, 1, 2, 3, 4, 5, 6, and 7 and t_i is time. We consider scales from 1 to 128 days. The distribution of increments dB_n(t_i) is described by the Tsallis distribution R_q(x) = A(1 + (q − 1)B_q(dB_n)^2)^(−1/(q−1)), which is the fundamental distribution function in non-equilibrium statistical mechanics (Tsallis 1988). The non-extensivity parameter q for a Gaussian distribution is q = 1, and in general q > 1. Figure 8 shows the distribution of the increments dB_n(t_i) (shown by the plus signs) for each lag, in units of days, ranging from one day to 128 days, as shown on the right side of each panel. Each distribution function was fit with the Tsallis distribution, shown as solid curves, for each lag t_n = 2^n days, where n = 0, 1, 2, 3, 4, 5, 6, and 7.
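The analysis pipeline can be sketched as follows. This is a toy example on synthetic data: a Student-t step distribution is used as a stand-in for heavy-tailed daily changes, and the binning and fit details are illustrative, not those of the Voyager analyses:

```python
import numpy as np
from scipy.optimize import curve_fit

def increments(B, n):
    """Normalized increments dB_n(t_i) = (B(t_i + 2**n) - B(t_i)) / <B> for a daily series."""
    lag = 2 ** n
    return (B[lag:] - B[:-lag]) / B.mean()

def tsallis(x, A, q, Bq):
    """R_q(x) = A * (1 + (q - 1) * Bq * x**2) ** (-1 / (q - 1))."""
    return A * (1.0 + (q - 1.0) * Bq * x**2) ** (-1.0 / (q - 1.0))

# Synthetic daily field-strength series with heavy-tailed day-to-day changes.
rng = np.random.default_rng(1)
B = np.exp(np.cumsum(0.01 * rng.standard_t(df=3, size=4096)))

for n in range(8):                                  # lags of 1, 2, 4, ..., 128 days
    dB = increments(B, n)
    hist, edges = np.histogram(dB, bins=41, density=True)
    x = 0.5 * (edges[1:] + edges[:-1])
    popt, _ = curve_fit(tsallis, x, hist, p0=(hist.max(), 1.4, 10.0),
                        bounds=([0.0, 1.001, 1e-6], [np.inf, 2.99, np.inf]))
    print(f"lag {2**n:4d} days: q = {popt[1]:.2f}")
```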
During 1991, Voyager 1 was in the supersonic solar wind at 45 AU. The observed distributions of increments of B (shown by the plus signs) were highly non-Gaussian. (A Gaussian curve would appear as a parabola on the log scale in Fig. 8.) This result clearly demonstrates that the large-scale magnetic fluctuations of B in the supersonic wind cannot be described by Boltzmann statistics. But the distributions can be described by the Tsallis distribution of non-extensive statistical mechanics, shown by the solid curves. The right column of Fig. 8 shows observations made by Voyager 1 during 2002 near 87 AU, 7 AU before the termination shock crossing. Non-Gaussian (q > 1) Tsallis distributions are observed on scales from 1 to at least 4 days, but at larger scales the distributions are Gaussian. Figure 8 shows data from the CIRs in panel (a) and from a GMIR in panel (b) of Fig. 7. The very large jumps in B/⟨B⟩ are the cause of the non-Gaussian behavior. This behavior is common near 20-25 AU. However, at larger distances (during 2002, for example) jumps in B were present only at relatively small scales, as expressed by the non-Gaussian fits with a Tsallis distribution on scales from 1 to 4 days. Figure 9 shows the kurtosis for the observations (solid circles) and the Tsallis fits (the exes, x) closer to the Sun (panels (a) and (b)) and farther from the Sun (panels (c) and (d)). The solid curves are fits to the observations with an exponential growth curve. The values of the kurtosis were high during 1980 and 1991, when Voyager was within 45 AU of the Sun, decreasing from K = 15 to a plateau at a relatively large value, K = 7, for scales greater than 10 days. During 2001 and 2002, when Voyager 1 was at 85-87 AU, the kurtosis was much smaller, decreasing from K ∼ 6 at a lag of one day to K ∼ 3 (a Gaussian distribution) for lags of 8 to 128 days.
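The kurtosis values here follow the Pearson convention (K = 3 for a Gaussian), so the corresponding calculation can be sketched as follows, reusing the synthetic series B and increments() from the previous sketch:

```python
from scipy.stats import kurtosis

# Pearson kurtosis (K = 3 for a Gaussian, matching the convention in the text)
# of the normalized increments at each dyadic lag.
for n in range(8):
    K = kurtosis(increments(B, n), fisher=False)
    print(f"lag {2**n:4d} days: K = {K:.1f}")
```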
This striking change in the behavior of the kurtosis at larger distances was also observed for the non-extensivity parameter, q. The Tsallis distribution has a finite variance when q < 5/3 and a divergent second moment when q > 5/3. For the 1980 and 1991 data (near 8 AU and 45 AU, respectively), q > 5/3 at all scales from one day to 128 days, owing to large tails of the probability distribution functions (PDFs) caused by large fluctuations and jumps in B(t). In Tsallis statistical mechanics, the PDF describes the metastable or quasi-stationary state with the parameter q_stat. By contrast, for the 2001 and 2002 data q < 5/3, and q approached 1 (the PDF approached a Gaussian) at scales greater than the solar rotation period. The values of q < 5/3 are a consequence of the smaller tails of the PDFs (smaller jumps in B) during 2001 and 2002. During 2001 and 2002, q tends to approach unity, indicating that the distributions are becoming Gaussian. The transition from q > 5/3 at <45 AU to q < 5/3 at >80 AU suggests the possibility of a "phase transition" between 45 AU and 80 AU.
Fig. 9 The kurtosis of the observations (solid circles) and of the Tsallis fits (x's) closer to the Sun (panels (a) and (b)) and farther from the Sun (panels (c) and (d))
As shown in Fig. 8, the Voyager observations of B showed that the increments of the magnetic field strength, dB_n(t_i) = (B(t_i + t_n) − B(t_i))/⟨B(t_i)⟩, on scales t_n = 2^n days, where n = 0, 1, 2, 3, 4, 5, 6, and 7 and t_i is time, can be described by the Tsallis distribution from 1 AU to 87 AU.
An MHD model which includes pickup ions was used to calculate the radial evolution, from 1 AU to 80 AU, of the distribution of increments of the field strength B on scales from 1 to 64 days (Burlaga et al. 2007). Plasma and magnetic field data from 1 AU were input into a 1-D MHD model. Figure 10 (left panel) shows the predictions of the model at 80 AU. The distributions of increments dB_n of B are shown by the dots and the fits with a Tsallis distribution are given by the dashed curves. The model distributions are well-described by Tsallis distributions on scales t_n = 2^n days, where n = 0, 1, 2, 3, 4, 5, and 6, i.e., on scales from 1 to 64 days. The model correctly predicts that there is a significant deviation of the observations from the Tsallis distribution at the largest scale, 2^7 days = 128 days. Thus, the 1-D MHD model predicts that the increments of B at 80 AU are described by the Tsallis distribution on scales from 1-64 days.
The MHD model also predicts that the entropic index q decreases from q > 5/3 at R ∼ 45 AU to q < 5/3 at R ∼ 80 AU (Fig. 11a), indicating a change from a divergent to a convergent 2nd moment of the Tsallis distribution, as observed by Voyager 1. This result suggests the possibility of a "phase transition" or a relaxation effect from q > 5/3 at <45 AU to q < 5/3 at >80 AU. The model also predicts that the width w_n = 1/(B_q)^(1/2) of the Tsallis distribution of the increments dB_n increases linearly with increasing lag (Fig. 11b).
Fig. 10 Model predictions and observations of the magnetic field distributions
The Termination Shock
The solar wind is super-magnetosonic in the inner heliosphere, so as it approaches the heliopause a termination shock forms where the plasma slows down, is compressed, heated, and becomes subsonic. Voyager 1 crossed the TS in 2004 at 94 AU and V2 crossed in 2007 at 84 AU; these crossings revealed the size of our heliosphere. TS effects were observed upstream of the crossings. Voyager 1 discovered the termination foreshock region starting at 85 AU, in which the energetic particle intensities increase when magnetic field lines are connected to the TS. The particles energized at the TS flow inward along these field lines and populate the foreshock. Voyager 2 entered the foreshock region at 75 AU, about 9 AU before the TS crossing. Electron plasma oscillations were observed weeks prior to the shock, as would be expected in an electron foreshock associated with the TS (see Fig. 14). The foreshock is a dynamic region since changes in the plasma pressure and magnetic field direction change the pathlength between the spacecraft and TS and thus the particle intensities.
V1 crossed the TS during a tracking gap; therefore, it did not observe the actual shock, but several observations indicated that V1 had entered the heliosheath. The magnetic field strength increased by a factor of ∼3, consistent with expectations for the TS, and the variability of B increased dramatically (Burlaga et al. 2005). The low-energy particle intensities jumped at the TS, but the higher energy anomalous cosmic rays (ACRs) did not (Decker et al. 2005). The first and last V2 TS crossings occurred in data gaps, but the TS moved across V2 three times while V2 had DSN tracking (Fig. 12), providing the first and only in situ TS data. The first V2 termination shock precursors in the plasma data were the three step-like speed decreases, shown in Fig. 12, that started 90 days before the TS. V_R decreased from 400 to 300 km/s, removing ∼40% of the solar wind flow energy ahead of the shock. The last decrease, from 350 to 310 km/s, was coincident with a low-energy particle intensity increase; the inward pressure gradient from the particle increase is sufficient to produce the reduction in the solar wind speed (Florinski et al. 2009). Thus, the V2 TS is the first example of a particle-mediated shock, where the particles accelerated at the shock move upstream and change the shock structure.
The TS is dynamic and may reform on the scale of several hours. The TS looked like a classic super-critical, quasi-perpendicular shock in 2 of the 3 crossings (Burlaga et al. 2008); one is shown in Fig. 12. It has a foot region, formed by reflected ions, with larger B and lower V. The most rapid changes in the plasma and magnetic field were in the ramp region, which was followed by an overshoot of B and then fluctuations downstream. The high-resolution magnetic field data resolved the ramp structure, which had quasi-periodic fluctuations with length scales of about 1000 km (Burlaga et al. 2008). The other TS crossing was different (Fig. 12), with two ramp-like features that suggest the TS was reforming (Burlaga et al. 2008). The shock strength was 2-3 and the TS speed was 60-100 km/s, similar to the speeds of planetary bow shocks (Richardson et al. 2008a).

A surprise from the V2 TS was that the thermal plasma downstream of the shock was only heated to 10^5 K, compared with expectations of over 10^6 K (Fig. 13). Only 20% of the solar wind flow energy heated the thermal solar wind plasma; most of the energy heated the PUIs (Richardson et al. 2008a). The solar wind thermal ions gain so little energy that they remain supersonic in the heliosheath; the PUIs determine the sound speed, which is subsonic as required. At the TS, almost all the thermal solar wind ions pass directly through the shock potential (Zank et al. 1996), unlike at planetary bow shocks where up to 50% of the thermal ions are reflected (Richardson 1987). The PUI distribution is much broader in energy than that of the thermal ions and some of the PUIs are reflected. The flow energy of the solar wind goes mainly to the pickup ions: the PUIs gain about 65% of the energy, thermal solar wind ions gain 20%, and more energetic ions gain 15% (Decker et al. 2008). However, energetic ions were accelerated only to lower energies; ACRs were not created at either the V1 or V2 TS crossing.
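A rough consistency check on those numbers (a sketch using the strong-shock Rankine-Hugoniot limit for a γ = 5/3 proton gas; the upstream speed is an illustrative value near the observed pre-shock wind speed, and this is not the Voyager team's analysis):

```python
# Expected downstream proton temperature if the full flow energy went into
# the thermal protons: strong-shock Rankine-Hugoniot limit (gamma = 5/3),
# T2 = 3 * m_p * V1**2 / (16 * k_B).
m_p = 1.67e-27       # proton mass [kg]
k_B = 1.38e-23       # Boltzmann constant [J/K]
V1 = 320e3           # illustrative upstream flow speed at the TS [m/s]

T2 = 3.0 * m_p * V1 ** 2 / (16.0 * k_B)
print(f"Rankine-Hugoniot T2 ~ {T2:.1e} K")   # ~2e6 K, versus the ~1e5 K observed
```

The order-of-magnitude gap between this estimate and the observed ~10^5 K is the "missing heating" that the PUIs absorb.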
The TS is not symmetric in radial distance. The first evidence of asymmetry was the two crossings of the inner foreshock boundary: V1 entered the foreshock at 85 AU, and when V1 crossed the TS at 94 AU, V2 was entering the foreshock region at 75 AU. The V2 TS crossing was at 84 AU, also 10 AU closer than the V1 crossing (Burlaga et al. 2008; Decker et al. 2008; Richardson et al. 2008a; Stone et al. 2008). Changes in the solar wind dynamic pressure account for only 2-3 AU of this difference. Models can reproduce the asymmetry if the VLISM magnetic field is of order 3-5 nT, much larger than the ∼1 nT pre-Voyager estimates but comparable to the observed VLISM B. The observed VLISM B near the HP was larger at V2 than V1, consistent with B being more compressed in the south due to the draping of the magnetic field. The field must also be at the right orientation to the LIC flow (Izmodenov et al. 2005; Opher et al. 2007; Pogorelov et al. 2009) to produce the asymmetry. As of 2021 (35 AU beyond the TS), the V1 magnetic field remains solar-like and the pristine field direction is not known. IBEX observations show the pressure is highest south of the HP nose, consistent with a closer TS and HP in the south (McComas and Schwadron 2014).
Plasma Waves Associated with the Termination Shock
The Voyager PWS instruments observed plasma waves associated with the TS (Gurnett and Kurth 2005, 2008). Voyager 1 and 2 both observed electron plasma oscillations prior (upstream) to the TS crossing. At V1, these were observed up to ten months prior to the TS crossing, whereas Voyager 2 observed them only about one month prior to the TS. The reason for this variation is not definitively known, but could be due to variations in the relative motion of the TS, the energy of the electron beams underlying the generation of the waves, and the specific geometry of the magnetic field connecting the TS to the spacecraft at the time the waves were observed. For both spacecraft, the plasma oscillations were detected in the 311 Hz spectrum analyzer channel, which has a bandwidth of ±15%. The plasma oscillations occur at the electron plasma frequency f_pe = 8980(n_e)^(1/2), where f_pe is in Hz and the electron density n_e is in cm^-3. Hence, for 311 Hz, n_e is 1.2 × 10^-3 cm^-3, typical of plasma densities in the solar wind at 90 AU. An example of the Voyager 2 observations is given in Fig. 14 from Gurnett and Kurth (2008).
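A one-line check of that density (the relation is just the quoted f_pe formula inverted):

```python
# Electron density from f_pe = 8980 * sqrt(n_e), with f_pe in Hz and n_e in cm^-3.
f_pe = 311.0                       # Hz (the spectrum analyzer channel)
n_e = (f_pe / 8980.0) ** 2         # cm^-3
print(f"n_e = {n_e:.1e} cm^-3")    # ~1.2e-3 cm^-3, as quoted above
```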
While Voyager 1 crossed the TS during a data gap, Voyager 2 observed multiple crossings, allowing the detection of plasma waves associated with the shock. An example of the V2 TS plasma waves is shown in Fig. 15 (Gurnett and Kurth 2008). As at planetary bow shocks, the wave signature of the TS is a broad spectrum below the local electron plasma frequency. Voyager does not have a wave magnetic field sensor; however, by analogy with planetary bow shocks, it is likely this feature primarily comprises electrostatic waves. For two of the Voyager 2 TS crossings the broadband signatures corresponded well with the ramp of the shock, suggesting the currents in the ramp drove these waves.

Fig. 15 The broadband signature starting at 00:11 corresponds to the ramp of the shock and is similar to the spectra observed at planetary bow shocks (from Gurnett and Kurth 2008)

The Heliosheath

Investigating the global nature of the heliosheath, including its thickness and overall pressure balance, is fundamental to understanding the heliosphere as a whole. V1 and V2 found that the heliosheath pressure is dominated by suprathermal particles, which was verified by the Voyager Low Energy Charged Particle (LECP) and plasma subsystem (PLS) data, a combination of Voyager LECP particle data and Cassini/INCA energetic neutral atom (ENA) data, and recent MHD models (Opher et al. 2020). Rankin et al. (2019) calculate the total effective heliosheath pressure and compare it with IBEX observations to provide additional evidence that the heliosheath dynamics are driven by suprathermal energetic processes.

Voyager 1 moved through the heliosheath at northern heliolatitudes and Voyager 2 in the south. Although both were launched in 1977, they arrived at the termination shock at different times, 2004.95 and 2007.49, respectively. The magnetic field magnitudes and directions are shown in Fig. 16 (Burlaga et al. 2021a). The azimuthal angle, λ, of the magnetic field is scattered about 90° and 270°, and the elevation angle, δ, is scattered about 0°, as predicted by the Parker magnetic field model. A sector structure, with alternating inward and outward magnetic fields, was observed by both spacecraft, but Voyager 2 observed a long period of unipolar magnetic fields when it was below the heliospheric current sheet (HCS). The average B did not change until very close to the HP crossing at either V1 or V2. The magnetic field intensity was strongly influenced by solar activity. Voyager 1 observed more MIRs and GMIRs than Voyager 2; relatively few, weak MIRs were observed by Voyager 2 when it was at lower latitudes than the HCS. Relatively strong magnetic fields were observed at Voyager 1 and Voyager 2 for several months as the spacecraft approached the heliopause, when they moved through the magnetic barrier.

Fig. 16 Daily averages of the V1 (left) and V2 (right) magnetic field magnitude, azimuthal angle, and elevation angle in the heliosheath

Figure 17 shows the V2 plasma velocity, density, and temperature across the heliosheath. The average speed |V| is roughly constant at 145 km/s until just before the HP, with excursions between 120 and 180 km/s. V_R decreases slowly from 130 km/s near the TS to 80 km/s before the HP boundary region, where it decreases more rapidly and then goes to zero at the HP. V_T increases away from the TS; it becomes hard to measure with PLS when the flow angle exceeds 50° (beyond the field of view of the instrument), but the LECP data show it reaches 100 km/s near the HP. The flow angle in the RT plane increases to a plateau at about 55° in 2012 that extends to the HP. The flow angle out of the solar equatorial plane, ρ, changes from −10° near the TS to −25° near the HP, with a maximum flow angle of −30°. The density initially falls by a factor of 2 after the TS in 2009, then increases in 2011; this may be a solar cycle effect, since the lower density region corresponds to solar minimum in the outer heliosphere. The average density then stays fairly constant from 2011 until it increases before the HP. T drops from 150,000°K near the TS to 50,000°K by 90 AU and then stays, on average, constant to the HP. MIRs are observed at 99, 109, and 116 AU with increases in B, V_R, N, and T; these are driven by solar transients and propagate through the heliosheath. After the 116 AU MIR, a series of pressure pulses are observed with increases in N, T, and the energetic particle intensity. Figure 18 compares the changes in N, T, keV ions, MeV ions, and GCRs. The correlations are very good for all the parameters, suggesting the heliosheath is traversed by pressure waves with alternating compression and rarefaction regions. Before the HP there is a rise in B, N, and T and a decrease in V_R, as discussed in the HP section. These changes suggest that pressure waves move through the heliosheath, compressing magnetic flux tubes and the plasma and energetic ions within by similar amounts.

Fig. 18 A comparison of the changes of plasma density, temperature, keV and MeV heliosheath ions, and GCRs. For each quantity x, (x − ⟨x⟩)/⟨x⟩ is plotted, where ⟨x⟩ is the running one-year average. GCR changes are multiplied by 10; the other quantities have no normalization
Flow Speeds in the Heliosheath
Since V1 does not have a working plasma instrument, the plasma density and temperature are not known and speeds are derived from particle anisotropies using the Compton-Getting (CG) effect (Compton and Getting 1935; Gleeson and Axford 1968; Forman 1970; Ipavich 1974). The CG method uses energetic particle fluxes from different look directions to derive plasma flow speeds. Krimigis et al. (2011) report that V1 entered a stagnation region in 2010, about 8 AU before the HP, where V_R went to 0 and was sometimes negative (inward) and V_T and V_N were small. The V1 CRS instrument observed >0.5 MeV anisotropies during spacecraft rolls and derived V_R consistent with the LECP V_R, also showing a stagnation region near the HP. This stagnation region was unexpected; one possible explanation is that HP instabilities distort the flow field (Borovikov and Pogorelov 2014). An additional puzzle was that, despite the speed decrease, B did not increase and the magnetic flux was not conserved.

Fig. 17 The V2 plasma speed, radial speed, flow angles in the RT and RN planes, density and temperature in the heliosheath
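The idea behind a CG speed estimate can be sketched as follows, using the standard first-order, non-relativistic anisotropy relation ξ = 2(γ + 1)U/w for ions whose differential intensity follows j ∝ E^(-γ). The numbers below are illustrative only; the actual LECP and CRS analyses are considerably more involved:

```python
import math

def cg_speed(anisotropy, gamma, E_keV):
    """Plasma speed [km/s] implied by a first-order Compton-Getting anisotropy
    xi = 2 * (gamma + 1) * U / w for non-relativistic protons of energy E_keV
    whose differential intensity follows j ~ E**(-gamma)."""
    m_p = 1.67e-27                       # proton mass [kg]
    E = E_keV * 1.602e-16                # keV -> J
    w = math.sqrt(2.0 * E / m_p) / 1e3   # particle speed [km/s]
    return anisotropy * w / (2.0 * (gamma + 1.0))

# Illustrative: a 20% anisotropy in ~40 keV ions with spectral index ~1.5
print(f"U ~ {cg_speed(0.20, 1.5, 40.0):.0f} km/s")   # ~110 km/s, heliosheath-like
```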
The V2 PLS V_R profile was also surprising. V_R decreased only slowly until the HP crossing. The V2 CG V_R profile derived from LECP 28-53 keV ion anisotropies differs from the PLS V_R (see Fig. 19); PLS shows a slow, steady decrease while LECP shows large, >100 km/s, excursions in V_R. Starting in 2016, about 8 AU before the HP, the LECP CG V_R averaged about half that observed by PLS, and this slowdown region was suggested to be analogous to the V1 stagnation region. The V2 CRS V_R derived using >0.5 MeV ions is again consistent with the LECP V_R. One view is that the CG calculations are flawed since they disagree with the direct plasma observations; however, another possibility is that the PLS speeds are not correct. If the PLS V_R profiles are used to calculate the magnetic flux, a conserved quantity (Parker 1963), then the magnetic fluxes at 1 AU and at V1 and V2 are similar; however, if the CG V_R profiles are used, the magnetic flux in the heliosheath differs greatly from that at 1 AU. Similarly, if the V1 V_R profile were the same as that at V2, then magnetic flux conservation would hold at V1 as well. It is not understood why the different energy particle anisotropies from LECP and CRS give the same incorrect V_R values, but the V2 speeds and magnetic flux conservation argue against the presence of a stagnation region.

Fig. 19 Left: the top panel shows the V2 radial speed measured by PLS (black) and derived from the CG effect using LECP (blue) and CRS (red) data. The middle and bottom panels compare the magnetic flux at 1 AU (Omni data set) with that calculated at V2 using the PLS and LECP CG speeds, respectively. Right: the top panel shows the V1 radial speed derived from the CG effect using LECP (blue) and cosmic ray subsystem (CRS) (red) data. The middle and bottom panels compare the magnetic flux at 1 AU (Omni data set) with that calculated at V1 using the PLS speeds and using the V2 LECP CG profile, respectively
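The flux-conservation argument above can be sketched numerically. The 1 AU and heliosheath values below are round, illustrative numbers, not the actual Omni or V2 averages:

```python
AU = 1.496e8   # km

def parker_flux(V_kms, R_AU, BT_nT):
    """Proxy for the conserved Parker 'magnetic flux', V * R * B_T."""
    return V_kms * (R_AU * AU) * BT_nT   # [km^2 nT / s]

f_1au = parker_flux(440.0, 1.0, 3.0)     # illustrative 1 AU values
f_hs = parker_flux(100.0, 110.0, 0.12)   # illustrative V2 heliosheath values
print(f"heliosheath / 1 AU flux ratio = {f_hs / f_1au:.2f}")   # ~1 if flux is conserved
```

A ratio near 1 is what the PLS speeds yield; halving the heliosheath speed, as the CG profiles imply, halves the ratio and breaks the conservation.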
Reconnection in the Heliosheath
The compression of the heliosheath as it approaches the HP brings oppositely directed magnetic field lines together and could drive reconnection in the heliosheath (Lazarian and Opher 2009; Drake et al. 2010; Pogorelov et al. 2009, 2013), but the importance of this process is not clear. Opher et al. (2011) suggest that near the HP the heliosheath is comprised of magnetic bubbles formed by reconnection. Observations of the particle intensities in Fig. 20 show that in the unipolar (no reconnection) zone the V2 LECP electron fluxes drop out and the ion fluxes decrease (Hill et al. 2014). These fluxes are high in the sector zone, where oppositely directed magnetic fields may reconnect and create magnetic bubbles that trap particles and keep the intensities high. Drake et al. (2017) predict that B and N are correlated in the sector zone, but not in the unipolar zone, and the data substantiate this prediction. However, the agreement of the magnetic flux values at 1 AU and in the heliosheath and the paucity of observations of magnetic D-sheets (Burlaga and Ness 2010; Burlaga et al. 2021b) argue against significant reconnection. The role of reconnection in the heliosheath is still not well understood.

Fig. 20 The white regions are where V2 was in a unipolar magnetic field region (below the heliospheric current sheet) and the gray regions are where V2 was in the sector region and saw both toward and away magnetic field directions. V1 is always in the sector region (Hill et al. 2014)
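One simple way to phrase the Drake et al. (2017) test is as a zone-by-zone correlation check; this is a sketch only, and the mask and synthetic numbers below are purely illustrative:

```python
import numpy as np

def zone_correlation(B, N, in_sector):
    """Pearson correlation of B and N inside and outside the sector zone;
    in_sector is a boolean mask over matched daily averages."""
    r_sector = np.corrcoef(B[in_sector], N[in_sector])[0, 1]
    r_unipolar = np.corrcoef(B[~in_sector], N[~in_sector])[0, 1]
    return r_sector, r_unipolar

# Synthetic check: B tracks N in the first ("sector") half only.
rng = np.random.default_rng(2)
N = rng.lognormal(mean=-3.0, sigma=0.3, size=400)
mask = np.arange(400) < 200
B = np.where(mask, 0.1 * N / N.mean(), rng.normal(0.05, 0.01, size=400))
print(zone_correlation(B, N, mask))   # high r in the sector zone, ~0 in the unipolar zone
```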
Transients Reaching Beyond the Heliosphere
The Sun is very active and drives transient structures with effects that persist through the heliosphere and into the LISM. The largest of these events are coronal mass ejections (CMEs), which propagate through the heliosphere as interplanetary CMEs (ICMEs). ICMEs expand outward to about 15 AU until they reach equilibrium with the surrounding solar wind (von Steiger and Richardson 2006). ICMEs have been tracked through the heliosphere to the TS. These ICMEs can drive shocks which persist out to the HP (Kim et al. 2017; Liu et al. 2014; Gurnett et al. 2013). Corotating interaction regions (CIRs) form when high speed solar wind streams overtake slower streams, with a forward-reverse shock pair forming at the interface.
[Fig. 19: Left: the top panel shows the V2 radial speed measured by PLS (black) and derived from the CG effect using LECP (blue) and CRS (red) data. The middle and bottom panels compare the magnetic flux at 1 AU (Omni data set) with that calculated at V2 using the PLS and LECP CG speeds, respectively. Right: the top panel shows the V1 radial speed derived from the CG effect using LECP (blue) and cosmic ray subsystem (CRS) (red) data. The middle and bottom panels compare the magnetic flux at 1 AU (Omni data set) with that calculated at V1 using the PLS speeds and using the V2 LECP CG profile, respectively.]
[Figure caption: The white regions are where V2 was in a unipolar magnetic field region (below the heliospheric current sheet) and the gray regions are where V2 was in the sector region and saw both toward and away magnetic field directions. V1 is always in the sector region (Hill et al. 2014).]
Both ICMEs and CIRs accelerate particles; ICMEs are more likely to drive the large events, merged interaction regions (MIRs) and global merged interaction regions (GMIRs), that influence the outer heliosphere. More ICMEs occur at solar maximum. These ICMEs merge, with faster ICMEs overtaking slower ones, to form MIRs. These MIRs are quasi-periodic in the outer heliosphere near solar maximum, occurring roughly twice a year, with pressure enhancements of factors of 5-10. These MIRs run into the TS and move it outward, sending pressure pulses through the heliosheath. Figure 17 shows that these pressure pulses are characterized by highly correlated increases in the plasma density, pressure, and keV to MeV particle intensities as the whole region is compressed. The exception is B, whose magnitude is only weakly correlated with the pressure changes. These heliosheath pressure pulses eventually reach the HP, push the HP outward, and drive shocks into the LISM. These shocks generate electron beams which drive the plasma oscillations observed by PWS.
Thickness of the Heliosheath
The thickness of the heliosheath from the TS to the HP was 28 AU at V1 and 36 AU at V2. Steady state MHD simulations give widths of 55-65 AU (see Kleimann et al. 2022, this journal). This difference implies that time dependence is important or that pressure is missing from the heliosheath; mechanisms suggested to resolve this problem are solar cycle variations (Izmodenov et al. 2005, 2008), removal of hot heliosheath ions by charge exchange (Malama et al. 2006; Opher et al. 2020), inclusion of thermal conductivity (Izmodenov et al. 2014), and escape of ACRs across the HP (Guo et al. 2018). All produce insufficient heliosheath thinning, as discussed in Kleimann et al. (2022, this journal).
Heliosheath Magnetic Fields and Plasma: Solar Maximum
To contrast the magnetic observations in the heliosheath with those in the supersonic solar wind we discuss V2 observations from 2015, when V2 moved from 106.81 AU to 109.93 AU, latitude 30.9°S to 31.3°S and longitude 217.7° to 217.9°. The average B during 2015 was 0.126 nT, which is higher than the heliosheath average, because in 2015 Voyager 2 was observing solar maximum. At least 3 MIRs were observed during the first 140 days of the year and a GMIR was observed from day 260 to at least day 305. It is noteworthy that such strong interaction regions, with B exceeding 0.2 nT, survive out to distances approaching that of the heliopause.
In the GMIR in Fig. 21, the density and temperature were enhanced significantly and the speed increased, suggesting that the GMIR was still growing in strength. The GMIR was related to a large increase in the dynamic pressure, NV², that might have produced a shock in the VLISM. The GMIR produced a large decrease in >70 MeV/nuc GCRs, as typically observed through the heliosheath and heliosphere beyond 10 AU (Burlaga et al. 1985). Figure 22 shows that the distribution of the magnetic field strength during 2015 was lognormal, which is characteristic of the distribution of B that is observed when the sun is active and produces ejecta and magnetic clouds that interact with one another to produce the strong magnetic fields in MIRs and GMIRs. The distribution of the azimuthal angle λ has two nearly equal height peaks at λ = 90° and 270°, corresponding to the Parker spiral magnetic field sector directions. The two peaks of λ indicate that the heliospheric current sheet extended to large latitudes in the southern hemisphere, relative to the latitude of Voyager 2, and was warped, producing four sectors in the magnetic field. The well-defined sector boundary that V2 crossed near day 70, 2015 was thick; the change in the direction of B lasted ∼7 days. A minimum variance analysis showed that B rotated smoothly through a current sheet in a plane that was tilted 47° with respect to the solar equatorial plane. The velocity did not change when Voyager 2 crossed the current sheet, consistent with a tangential discontinuity. The distribution of the elevation angle, δ, peaked at δ ∼ 0°, as expected for a spiral magnetic field. There was a change in δ to near −65° during an extended interval between days 150 and 210. These unusual angles might be attributed to a ripple in the heliospheric current sheet. The small-scale structure of the V2 2015 magnetic field can be quantified by plotting the increments of B(t) on a scale of one hour, dB1h(t) = B(t + 1 hr) − B(t). Figure 23(b) shows that the one-hour increments dB1h(t) were very bursty. The dots in Fig. 23(a) show the distribution of the increments dB1h(t). The Tsallis distribution provides an excellent fit to the distribution of increments (the coefficient of determination R² = 0.999), with a non-extensivity parameter q = 1.66 ± 0.03. Similar results for the distribution of daily averages give a fit to the Tsallis distribution with R² = 0.979 and q = 1.60 ± 0.17. Thus, on these small scales, the non-extensivity parameter during the undisturbed conditions shown in Fig. 23 is similar to that observed during the quiet solar wind conditions in 2000 discussed earlier.
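The Tsallis fit mentioned here is easy to reproduce in outline. The sketch below — our own illustration, not the authors' pipeline — fits the q-Gaussian shape to a synthetic heavy-tailed increment series (a Student-t stand-in for dB1h; real hourly V2 averages would replace it):

```python
import numpy as np
from scipy.optimize import curve_fit

def q_gaussian(x, a, beta, q):
    """Tsallis q-Gaussian: a*[1 + (q-1)*beta*x^2]^(-1/(q-1)); Gaussian as q->1."""
    return a * np.power(1.0 + (q - 1.0) * beta * x**2, -1.0 / (q - 1.0))

# Synthetic heavy-tailed increments standing in for dB1h(t) = B(t+1h) - B(t):
# a Student-t with 3 dof is exactly a q-Gaussian with q = (3+3)/(3+1) = 1.5.
rng = np.random.default_rng(0)
db = 0.02 * rng.standard_t(df=3, size=200_000)

hist, edges = np.histogram(db, bins=101, range=(-0.3, 0.3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
good = hist > 0
popt, _ = curve_fit(q_gaussian, centers[good], hist[good],
                    p0=(hist.max(), 1.0 / db.var(), 1.5),
                    bounds=([0.0, 1e-9, 1.001], [np.inf, np.inf, 2.999]))
print(f"fitted q = {popt[2]:.2f}")   # ~1.5 here; Burlaga finds q ~ 1.6 at V2
```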
Heliosheath Magnetic Fields and Plasma: Solar Minimum
The non-extensive statistical mechanics of Tsallis (1988, 2004a, 2004b, 2009) provide a description of the driven, open, non-equilibrium systems in the supersonic solar wind and heliosheath. This section discusses V2 observations of the 2010 "quiet" heliosheath magnetic field, when the spacecraft moved between 91.02 and 94.5 AU at latitudes 28.8°S to 29.3°S. The magnetic fields were carried by solar wind that left the southern coronal hole on the sun during 2008 and 2009, when solar activity was historically low (Ahluwalia and Ygbuhay 2011) and B at 1 AU was only 4.0 nT. Voyager 2 was south of the heliospheric current sheet (which was relatively close to the solar equatorial plane, since it was solar minimum) and it measured a relatively weak average magnetic field of 0.08 ± 0.04 nT. Thus, Voyager 2 observed a "minimum energy state" of the heliosheath during 2010. The full width at half maximum was 0.060 nT for daily averages and 0.079 nT for the hourly averages. Both B distributions in Fig. 24 are accurately described by a Gaussian distribution (with coefficients of determination R² = 0.94 and 0.97, respectively), which is characteristic of B in the solar wind and the heliosheath when solar activity is very low. Tsallis (2004a,b) introduced the concept of the "q-triplet" (q_stat, q_sen, q_rel): the non-extensivity index q_stat is obtained from the Tsallis distribution of increments of B, the parameter q_rel is related to the correlation function, and the parameter q_sen is derived from the multifractal spectrum. Burlaga and Vinas (2005a) were the first to compute the q-triplet for a physical system, using solar wind data. They obtained the values q_stat = 1.75 ± 0.06, q_sen = −0.6 ± 0.02, and q_rel = 3.8 ± 0.034 for observations made in the solar wind near 30 AU in 1989 and near 85 AU in 2002. They showed that the q-triplet had the properties associated with non-extensive statistical mechanics. For example, the deviation of the 3 q's from unity is a measure of the departure from thermodynamic equilibrium. Nevertheless, the fluctuations in the solar wind and heliosheath are in a metastable, quasistationary state.
The one-day increments of B, dB1d(t) = B(t + 1 day) − B(t), plotted in Fig. 25, show that the increments are "spiky" or intermittent. An excellent fit (R² = 0.98) to the observations (the squares in Fig. 25b) was obtained with the Tsallis (q-Gaussian) distribution, shown by the solid curve, and the value q = 1.6 ± 0.01 was obtained from the fit. This value of q is the same as that observed during the solar maximum conditions of 2015 discussed above, even though in 2010 the sun was inactive, with few sunspots! As a measure of the significance of q = 1.6, we note that q = 1 for a Gaussian distribution and q = 1.7 is associated with the onset of chaos in the z = 2 logistic map (Tsallis 2009). The non-extensivity parameter q is seldom much larger than 1.6 in the heliosheath and solar wind. The parameter q derived from the distribution of increments of B is the same as the parameter q_stat. In Boltzmann-Gibbs statistical mechanics, the PDF describes the thermal equilibrium state characterized by the temperature T, and there is an exponential relaxation of microscopic quantities toward thermal equilibrium (exponential decay, with a relaxation time τ). Tsallis statistics are instead associated with a q-exponential relaxation toward equilibrium (q-exponential decay), with a relaxation parameter q_rel that can be computed from the correlation function
C(τ) = ⟨(B(t_i + τ) − ⟨B⟩)(B(t_i) − ⟨B⟩)⟩ / ⟨(B(t_i) − ⟨B⟩)²⟩,
which is plotted as a function of τ on a log-log scale in Fig. 26. The result is accurately described (R² = 0.99) by a linear fit with a slope s = −0.35 ± 0.02 on scales from 1 to 16 days. Thus, we find that q_rel = 1 − 1/s = 3.9 (Tsallis 2009, p. 156). This result is in good agreement with the values q_rel = 3.98 in 1989 and q_rel = 3.54 in 2002 obtained by Burlaga and Vinas (2005b). In Boltzmann-Gibbs statistical mechanics the correlation function would decay exponentially rather than as a power law; the power-law decay seen in Fig. 26 is the behavior predicted by non-extensive statistical mechanics, and it is what we find in the Voyager 2 data.
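The q_rel determination amounts to fitting a power law to C(τ). A schematic version (ours, on a synthetic long-memory series standing in for the daily B averages):

```python
import numpy as np

def autocorr(b, tau):
    """C(tau) = <(B(t+tau)-<B>)(B(t)-<B>)> / <(B(t)-<B>)^2>."""
    x = b - b.mean()
    return np.mean(x[tau:] * x[:-tau]) / np.mean(x * x)

# Synthetic long-memory series standing in for the daily B averages.
rng = np.random.default_rng(1)
noise = rng.normal(size=4000)
kernel = 1.0 / (1.0 + np.arange(200.0)) ** 0.7
b = np.convolve(noise, kernel, mode="valid")

taus = np.arange(1, 17)                            # 1 to 16 "days"
c = np.array([autocorr(b, t) for t in taus])
s, _ = np.polyfit(np.log(taus), np.log(c), 1)      # power-law slope of C(tau)
print(f"slope s = {s:.2f}  ->  q_rel = 1 - 1/s = {1.0 - 1.0 / s:.2f}")
```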
In Boltzmann-Gibbs statistical mechanics there is exponential sensitivity to the initial conditions (strong chaos, described by exponential growth and a positive Lyapunov exponent). In Tsallis statistics the system instead exhibits q-exponential sensitivity to the initial conditions (weak chaos, with zero Lyapunov exponent), described by q-exponential growth with a parameter q_sen. This parameter is derived from the "multifractal spectrum" (Burlaga et al. 1993, 2006; Halsey et al. 1986; Hentschel and Procaccia 1983; Macek 2007; Macek and Szczepaniak 2008; Nauenberg 2003; Pirraglia 1993; Stanley and Meakin 1988; Tel 1988; Tsallis 2004b, 2009; Tsallis and Brigatti 2004). The multifractal spectrum of daily averages of B observed by Voyager 2 during 2010 was computed using the methods described by Burlaga (1995) and Burlaga et al. (2006), and the results are plotted as points in Fig. 27 for 1 < τ < 8 days, where f is the multifractal distribution function, which is a function of the parameter α. The relevant quantities are the values of α at f = 0, but the observational points do not extend to f = 0. The observations were therefore extrapolated to f = 0 by means of a quadratic and a cubic fit. Using the cubic fit one obtains the minimum α_min = 0.73 ± 0.03 and the maximum α_max = 1.35 ± 0.02, similar to the values obtained by Burlaga and Ness (2012) in the heliosheath. From these numbers one obtains q_sen from the relation 1/(1 − q_sen) = 1/α_min − 1/α_max (Lyra and Tsallis 1998), hence q_sen = −0.5 ± 0.3, compared with the average value for the distant heliosphere q_sen = −0.6 ± 0.2 obtained by Burlaga and Vinas (2005a).
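For the record, the arithmetic behind the quoted q_sen (our own check, using the cubic-fit endpoints given above):

\[
\frac{1}{1-q_{\mathrm{sen}}}=\frac{1}{\alpha_{\min}}-\frac{1}{\alpha_{\max}}
=\frac{1}{0.73}-\frac{1}{1.35}\approx 1.370-0.741=0.629,
\qquad
q_{\mathrm{sen}}=1-\frac{1}{0.629}\approx -0.59,
\]

consistent, within the quoted uncertainties, with q_sen = −0.5 ± 0.3.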
In summary, during 2010 Voyager 2 observed (1) a q-Gaussian distribution of daily increments of B with q = 1.6, (2) a power law correlation (rather than an exponential correlation) on scales from 1 to 16 days with q_rel = 3.9, and (3) a multifractal structure of B on scales from 1 to 8 days with q_sen = −0.5 ± 0.3. Figure 28 shows that the length of the curve (B(t), τ) (which is treated as a fractal) as a function of lag τ in a log-log plot is a straight line (R = −0.93) on scales from τ = 1 to 100 days. Thus, the length of the curve is a power law L ∼ τ^−s, which corresponds to a frequency spectrum ∼ f^−α where α = 3 − 2s = 1.1 ± 0.10, consistent with a "1/f" spectrum on scales from 1 to 100 days. Such a spectrum corresponds to an equal distribution of energy in each logarithmic frequency band. This 2010 observation is the first demonstration of an f^−1 spectrum associated with B in the heliosheath.
The Heliopause Region
The HP is the boundary between the solar wind plasma and the interstellar plasma. The location and nature of the HP were unknown before the Voyager crossings; it was thought to be a sharp tangential discontinuity in analogy with planetary bow shocks. This section describes the HP and its precursors upstream in the heliosheath. Figure 29 shows overviews of the HP regions at V1 and V2. V1 crossed the HP on day 238 of 2012 at 121.7 AU. V2 crossed it on day 318 of 2018 at 119.0 AU. The first clear evidence of the V1 HP crossing was an abrupt increase in magnetic field strength B, a decrease of heliosheath energetic particles, and an increase in the GCR counting rate at 2012.56 (Krimigis et al. 2013). These changes were followed by a recovery, a larger abrupt change, another recovery, then a final crossing after which B remained high and steady, the energetic particles disappeared, and the GCR intensities plateaued (see region T near 2012.6 in Fig. 29). Figure 31 shows that the direction of B did not change across the HP, contrary to expectations that the VLISM field would be tilted with respect to the solar field. The steady V1 B direction caused controversy about whether this boundary was the HP or a transition from closed field lines to field lines connected to the LISM, allowing heliosphere ions to escape into the LISM and GCRs to enter the heliosphere. The strong uniform magnetic fields, absence of energetic particles, the plateau of the GCRs, and the subsequent observation that the density was 0.08 cm⁻³ (Gurnett et al. 2013) were strong evidence that V1 had crossed the HP, but other hypotheses persist (Gloeckler and Fisk 2015). The V2 HP crossing had similar features, an increase in GCR intensity, an increase in B, and a decrease in heliosphere ion intensity. The magnetic field direction again did not change. V2 has a working plasma instrument which observed a sharp change in the plasma flux at the HP. The higher densities in the VLISM were again confirmed by PWS.
V1 and V2 observed HP boundary regions and complex structure before the HP crossings. Figure 30 compares the V1 and V2 HP crossings to illustrate which features are common and which may depend on time and/or position. The first precursor of the HP may be the decrease in CG V R speed at both V1 and V2 shown in Fig. 18 about 8 AU before the HP (Krimigis et al. 2013). As discussed above, neither B nor the PLS V R at V2 change at this boundary; the cause of the CG speed decrease is not known but may be a HP boundary effect.
The Heliopause Boundary Region and Magnetic Barrier
The V1 HP crossing is preceded by the HP boundary region, a ∼1.3 AU wide region that started at day 125 of 2012 with a step increase in the GCR rates and a decrease in heliosheath electrons (Fig. 29) and ended at the HP with a second step increase of GCRs, an increase in B, and a dropout of low-energy ions to near background levels (Krimigis et al. 2013). The magnetic field lines in this region may be reconnected to the VLISM magnetic field, enabling GCRs to enter the heliosheath and heliosheath electrons to exit.
V2 observed a similar boundary region with an increase in the GCR intensity and decrease in heliosheath electron flux on day 229 of 2018, so open field line boundary regions may be a common feature at the HP. At V2, the GCR increase was coincident with the beginning of the magnetic barrier, a region of increased B not observed at V1. The average B in the V2 magnetic barrier was ∼ 0.40 ± 0.06 nT, significantly greater than the average B of ∼0.13 nT from days 1-229 of 2018. The magnetic barrier had the strongest magnetic fields observed in the heliosheath and persisted for 80 days, from day 229 to day 309; these fields were comparable in strength to the magnetic field in the VLISM observed by Voyager 1. Neither the plasma parameters nor the LECP ion intensity changed at the start of the magnetic barrier. The V2 magnetic field increased to 0.7 nT at the HP, significantly larger than the 0.4 nT field observed at V1. The V2 magnetic field was predicted to be stronger than that at V1 to account for the difference in the TS locations.
The Plasma Boundary Region
V2 provided the first plasma data from the HP region. Figure 31 shows that a plasma boundary layer began on day 150 of 2018, 1.5 AU ahead of the HP, where the plasma speed decreases by 30%, the density increases by a factor of 2, and T increases by 30%. The LECP 53-85 keV ion intensity increase correlates well with the plasma density, suggesting the whole plasma flux tube is compressed, similar to the other pressure pulses observed in the heliosheath. The GCR slope increased slightly at the beginning of this region (Fig. 31); the magnetic field did not change at the start of the plasma boundary layer, but did decrease slowly from days 150-215 as N and T increased. The origin of the plasma boundary region is not known; compression of plasma as it approaches the HP could cause an increase in the density, temperature, and LECP ion flux as observed, but would also produce an increase in B, which was not observed.
The Heliopause Transition Layer
The HP transition layer started on day 302, 2018, about 0.06 AU inside of the HP. In this boundary layer the density increases by a factor of two, the magnitudes of V R and V N decrease, the flow angle in the RT plane increases, and B increases by 30%. This region contains solar wind plasma that is modified by the HP boundary. The V2 HP was a sharp boundary where the outward plasma flux dropped to background levels, B increased, the GCR intensity increased, and the heliospheric particle intensity decreased in the 8-hour data gap on day 309. Figure 32 shows the particle intensity profiles across the HP. At V1 several HP precursors were observed which looked like partial HP crossings or encounters with flux tubes from the VLISM moving into the heliosheath. After the V1 HP, ACRs were observed in the VLISM for about 25 days. At V2 the leakage from the heliosheath into the VLISM was much more pronounced, especially at higher energies. A drop off in ACRs occurred 65 days after the HP but these ions did not drop to background levels until about 120 days after the HP. The GCRs at V1 very quickly approached their asymptotic value, but at V2 the GCR increase took about 30 days. The region where magnetic field lines are connected outside the HP was much larger at V2 than V1, perhaps because V2 was further from the nose of the heliosphere.
[Fig. 31: The plasma boundary layer. The panels show daily averages of the plasma speed, density, temperature, B, and GCR intensity. The dashed lines show the beginning of the plasma boundary layer, the start of the magnetic barrier, and the HP.]
The Heliopause
The V1 HP crossing was marked by an increase in the magnetic field strength, a decrease of heliosheath energetic particle intensities, and an increase in the GCR counting rate. After the HP, B remained high and steady, the heliosheath particles disappeared, and the GCR intensities plateaued. However, the direction of the magnetic field did not change. PWS data confirmed that this boundary was the HP, observing the higher densities, ∼0.06 cm⁻³, expected in the VLISM (Gurnett et al. 2013). Several HP precursors were observed at V1, with smaller decreases in B and the energetic particles and increases in the GCRs centered on days 212 and 230 of 2012 (Krimigis et al. 2013). The precursors may be flux tubes moving from the VLISM into the heliosheath.
The V2 crossing in Fig. 30 did not have precursors like those at V1. On day 309 of 2018, B sharply increased, the heliosheath energetic particle intensity decreased, the GCR counting rate increased, and the radially outward plasma currents dropped to background levels. PWS observations from days 35-55 of 2019 confirmed the density had increased to the levels expected in the VLISM. The biggest surprise of the V1 HP crossing was that the direction of B did not change (Burlaga and Ness 2014). At V1 the magnetic field direction near the HP was nearly constant but differed from the Parker field direction of 270° by about 20° in azimuth and 18° in elevation angle. Models disagreed on whether this lack of B rotation was a coincidence of the geometry where the HP was crossed or if a rotation of the VLISM B toward the Parker spiral direction were an intrinsic HP feature. This issue was resolved (Fig. 29) when B did not change direction at the V2 HP crossing either; this lack of a rotation in B thus seems to be a usual, but not understood, HP characteristic.
At V2, the magnetic field azimuthal angle was very close to 270° and the elevation angle was about 20°. The strong uniform magnetic fields, the absence of energetic particles, the plateau of the GCRs and the cessation of radial flow were strong evidence that V2 had crossed the HP. The lack of field rotation is still a puzzle and is discussed in the modeling chapter (Kleimann et al. 2022).

The LISM
The LISM Temperature
The PLS instrument normally observes LISM currents in only one energy channel in one detector. This gives one observation but three unknowns, the plasma density, temperature, and speed into the cup. However, there are two times PLS can estimate the plasma temperature; when PWS measures the density and when the spacecraft rolls, giving data from different look directions. When the density is known from PWS, the measured currents in the 10-30 eV ion detector limit T and V to a 2D box in TV space. Using a reasonable range for V, T is estimated to be 30,000-50,000 K. When the spacecraft rolls PLS (sometimes) observes the variation of the currents with look angle. Fitting the currents vs. angle gives T, V, and N; the fit T is again on the order of 35,000 K. The temperatures are on the upper end of those expected from MHD models (Zank et al. 1996) and may suggest heating by reconnection in the PDL (Fuselier and Cairns 2017).
[Fig. 33: Radio emissions were observed by both Voyager PWS instruments some time after the Saturn encounters. They consist of a low-frequency component (<2.4 kHz) that shows little frequency drift and a series of higher frequency emissions that drift in frequency with time. Gurnett et al. (1993) developed the basic explanation of these emissions as radio waves mode converted from electron plasma oscillations at the electron plasma frequency in the source. Further, they argued that, for the source plasma density to be high enough to explain the frequencies of the emissions, the source had to be beyond the heliopause, where a strong increase of density would be expected to balance the pressure from the much lower density and hot plasma in the heliosheath.]
The Electron Density in the VLISM
The Voyager PWS instruments began making remote measurements of the density of the VLISM as early as 1983 through the detection of radio emissions, as shown in Fig. 33, in the frequency range of 1.8 to 3.5 kHz from the vantage point of heliocentric radial distances of greater than about 13 AU (Kurth et al. 1984, 1987). At the time of these measurements, the nature of the source was not understood, although sources at or beyond the heliopause were hypothesized. The basic explanation for these emissions was formulated by Gurnett et al. (1993) as a second group of radio emissions was detected beginning in 1992. Both the events starting in 1983 and in 1992 showed similar structures, a low-frequency (<2.4 kHz) component with little drift in frequency and a higher frequency component displaying frequency drifts of order 1 kHz/yr. Another, significantly weaker event commenced in 2002. The temporal spacing of the events was similar to the 11-year solar cycle, which implied a solar influence in the occurrence of the emissions. The 1983 (1992) event was first detected within 412 (408) days of a very strong Forbush decrease. A Forbush decrease is a decrease in the flux of galactic cosmic rays at Earth caused by disturbances in the interplanetary magnetic field that impede the access of the cosmic rays to the inner heliosphere. This correlation led to the hypothesis that transients from the Sun propagate through the heliosphere and form GMIRs which hit the HP and drive shocks that propagate into the VLISM. Using observed shock speeds from in situ observations of transients at a range of heliocentric distances by Ulysses, Pioneers 10 and 11, and the Voyagers, and some simple assumptions about the thickness of the heliosheath and propagation speeds therein, a time-of-flight calculation gave an estimate for the distance to the heliopause of 116 to 177 AU (Gurnett et al. 1993). This estimate encompasses the distance of the observed heliopause crossings in the range of 120 AU by both Voyagers. Gurnett et al. (1993) explain the frequency drift of the radio emissions as owing to the propagation of the shocks through a density gradient beyond the heliopause (Fig. 34). The shocks accelerate electron beams which excite electron plasma oscillations at the local electron plasma frequency. The plasma oscillations then mode-convert into radio emissions that propagate freely away from the source. These radio waves were detected by Voyager outside 10 AU. Assuming the primary radio emission is at the electron plasma frequency f_pe, the frequency of the radio emission gives the plasma frequency at the source. Hence, the frequency drift is a result of shocks propagating through the VLISM. With n_e related to the electron plasma frequency (Hz) by n_e = (f_pe/8980)², the VLISM electron densities at the radio sources range from about 0.04 cm⁻³ to 0.15 cm⁻³. Voyager PWS measurements in the VLISM substantiated the earlier interpretations of the kHz radio observations. The Voyagers were at the right time and location to observe electron plasma oscillations driven by shocks propagating into the VLISM from solar transients.
[Fig. 34: The scenario of Gurnett et al. (1993) for the frequency drift of the radio emissions detected by Voyager inside the heliosphere: the propagation of a shock through a density gradient beyond the heliopause causes electron plasma oscillations at the plasma frequency that mode couple into radio emissions and propagate into the heliosphere, where they are detected by the Voyagers.]
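The density inversion quoted here is a one-liner. A small check of the quoted range (the frequencies are representative values from the text, not a specific event list):

```python
def electron_density(f_pe_hz):
    """n_e (cm^-3) from the electron plasma frequency f_pe (Hz): (f_pe/8980)^2."""
    return (f_pe_hz / 8980.0) ** 2

for f_khz in (1.8, 2.4, 3.1, 3.5):   # representative emission frequencies
    print(f"{f_khz} kHz -> n_e = {electron_density(1e3 * f_khz):.3f} cm^-3")
# 0.040, 0.071, 0.119, 0.152 cm^-3: spanning the 0.04-0.15 range quoted above
```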
As of 2022, eight intense electron plasma oscillation events have been observed by Voyager 1, as shown in Fig. 35 (Gurnett and Kurth 2019). The frequencies of these emissions show that the VLISM electron density increases from 0.04 cm⁻³ near the heliopause to 0.12 cm⁻³ 20 AU beyond the heliopause. Electron plasma oscillations are associated with an electron foreshock not unlike those ahead of planetary bow shocks in the solar wind; however, these shocks are associated with solar transients that are transmitted through the heliopause and propagate through the VLISM (Gurnett et al. 2015). These events have been observed at a rate loosely averaging once per year.
The V2 PWS has detected two plasma oscillation events since it crossed the heliopause in 2018. The V2 instrument no longer produces high resolution wideband data, but the PWS spectrum analyzer channels allow determination of the VLISM plasma density (Kurth and Gurnett 2020). The two V2 measurements are plotted with PLS and PWS measurements in Fig. 36. A clear radial density increase is observed by both Voyagers despite the broad separation in their locations. Figure 37 shows that starting in 2015 a very weak line has been observed in the V1 PWS wideband observations at the electron plasma frequency that is likely due to thermal plasma oscillations (Burlaga et al. 2021b; Ocker et al. 2021). The densities obtained from the plasma oscillation events and the thermal plasma line are given in Fig. 36. As noted by Burlaga et al. (2021b), the plasma densities from the thermal plasma line show density increases at two pressure fronts recognized in the Voyager 1 magnetometer data. The ratios of the plasma densities and the magnetic fields before and after the shock in 2014 and at both pressure fronts are remarkably similar.
[Fig. 35: Detections of electron plasma oscillations by Voyager 1 after it crossed the heliopause. The increasing frequency of the events with increasing time and radial distance shows a radial gradient in the VLISM (Gurnett and Kurth 2019). These plasma waves were predicted to be present in the VLISM in the early 1980's and are responsible for the radio emissions observed from upstream of the termination shock out to beyond 146 AU.]
The Propagation of Solar Transients into the VLISM
As discussed above, solar transients are the primary drivers of the plasma oscillations observed in the VLISM. CMEs and CIRs are the two primary types of solar transients. Note that the propagation time of a solar transient from the Sun to the heliopause is several hundred days, during which many solar transients can occur. Therefore, interactions between the solar transients during the transit time and formation of an MIR occur in the outer heliosphere. A radio wave event observed by Voyager in the VLISM is most likely caused by merging of multiple transients.
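To make the "several hundred days" concrete, here is a toy time-of-flight estimate in the spirit of the Gurnett et al. (1993) calculation quoted in the previous section; the mean transit speeds and the single 412-day delay are illustrative assumptions, not a reproduction of that analysis:

```python
AU_KM = 1.496e8      # km per AU
DAY_S = 86400.0      # seconds per day

delay_days = 412.0   # Forbush decrease at Earth -> onset of the 1983 event
for v_kms in (450.0, 600.0, 750.0):          # assumed mean transit speeds
    d_au = v_kms * delay_days * DAY_S / AU_KM
    print(f"v = {v_kms:4.0f} km/s  ->  source distance ~ {d_au:5.0f} AU")
# ~107, ~143 and ~178 AU, bracketing the published 116-177 AU estimate
```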
MIRs in the outer heliosphere can form from the merging of a series of ICMEs (e.g., Burlaga 1995; Wang and Richardson 2002). The Voyager measurements in the VLISM enable timing analysis of the propagation of solar transients into the VLISM. The radio and shock events observed at Voyager 1, as shown in Fig. 34, also allow investigations of the ultimate destiny of solar transients as they approach the outer heliosphere and how they affect the VLISM. The April-May 2013 radio wave event in Fig. 35 helped determine that Voyager 1 had crossed the heliopause, providing the first evidence that the density had increased to the expected VLISM levels. This event is hypothesized to have been produced by the 2012 March CMEs hitting the heliopause (Gurnett et al. 2013). Liu et al. (2014) made the first attempt to establish the timing of the propagation of solar transients into the VLISM, combining wide-angle imaging observations from STEREO, in situ measurements, and MHD propagation of the measured solar wind disturbances. In 2012 March, the active region NOAA AR 11429, one of the biggest active regions in solar cycle 24, exhibited extraordinary activity (Liu et al. 2014). The active region emitted a series of large CMEs as it rotated with the Sun from the east to west. A cluster of shocks and ICMEs were observed near the Earth, and their propagation outward was predicted using an MHD model (see Fig. 38). The transient stream interaction results in the formation of a large MIR preceded by a shock in the outer heliosphere. The predicted arrival time of the shock and MIR at 120 AU is April 22 2013, which agrees with the April-May 2013 radio emission period and the time of a transient disturbance in galactic cosmic rays (Gurnett et al. 2013; Krimigis et al. 2013).
[Fig. 36: A summary of plasma densities measured by the V2 PLS and the V1 and V2 PWS instruments beginning just inside the termination shock. The two plasma oscillation events detected by V2 after it crossed the heliopause show a radial density gradient remarkably similar to that observed by V1. Given that the two Voyagers are separated by 70° as viewed from the sun, the radial gradient is a large-scale feature of the VLISM, at least toward the nose of the heliosphere (Kurth and Gurnett 2020).]
[Fig. 38: MHD propagation of solar wind streams from the Earth to 120 AU. The upper left panel shows the observed solar wind speed at Wind, and the other panels show the predicted speeds at various distances. The shaded regions in the first panel represent the ICME intervals at the Earth, and the shaded region in the last panel indicates the period of the radio emission observed by Voyager 1. The predicted magnetic field is plotted at 80 AU to show the size of the MIR. Reproduced from Liu et al. (2014).]
This MHD model does not include the transition across the termination shock and the heliosheath; Liu et al. (2014) use the passage of a shock through the Earth's bow shock and magnetosheath as an analogy to determine the speed of the shock in the heliosheath. Subsequent MHD simulations are fully three-dimensional (3D) and multi-fluid (e.g., Fermo et al. 2015; Kim et al. 2017; Guo et al. 2021). These recent models have the advantage of including the termination shock, heliosheath and heliopause, but face difficulties such as how to determine the 3D solar wind input and how to reproduce the correct locations of the heliospheric structures and precise timing with the Voyager 1 measurements. Nevertheless, Kim et al. (2017) reproduce some of the shocks in the magnetic field observed at Voyager 1, assuming an ad hoc 3D solar wind input with a prescribed polar coronal hole geometry. Their results indicate that merging of CIRs may also have played a role in the formation of some of the MIR shocks. Observationally, the timing of events at V1 and V2 can also be compared. V1 crossed the heliopause into the VLISM in 2012 when Voyager 2 was still in the heliosheath. From 2012.5-2016.5, solar maximum conditions persisted in the heliosheath with five MIRs observed at Voyager 2. These MIRs occur at a similar frequency to the transients observed in the VLISM by Voyager 1. Figure 39 shows that the timing between Voyager 1 and 2 measurements indicates that the transients observed in the VLISM by Voyager 1 may have been driven by the MIRs observed at Voyager 2.
Summary
The Voyager spacecraft provided the first survey of the heliosphere from the Earth into the LISM, making the first measurements of the outer heliosphere where PUIs dominate the thermal pressure, of the termination shock, the heliosheath, heliopause, and the VLISM. These observations provided many surprises and have left us with many puzzles for future missions to answer. The Voyager spacecraft have sufficient power to operate all instruments until the mid-2020s; after this time, the instruments will be turned off serially, which will extend the useful life of the spacecraft to at least 2030. New Horizons is providing another view of the outer heliosphere and may cross the TS and enter the heliosheath, making the first direct measurements of the PUIs. IBEX (and soon IMAP) provide a global view of the heliosphere through ENA measurements. But more missions are needed!
"year": 2022,
"sha1": "2821bce5be968d1c6b4a5c943bbd474c6ce12219",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11214-022-00899-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a43786306e325c7c85621d5a8d953d9e5972ed5d",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": []
} |
On enveloping C*-algebras of Hecke algebras
We give a sufficient condition for a *-algebra with a specified basis to have an enveloping C*-algebra. Particularizing to the setting of a Hecke algebra H(G, Γ), we show that under a suitable assumption not only can we assure that an enveloping C*-algebra C*(G, Γ) exists, but also that it coincides with C*(L^1(G, Γ)), the enveloping C*-algebra of the L^1-Hecke algebra. Our methods are used to show the existence of C*(G, Γ) and isomorphism with C*(L^1(G, Γ)) for several classes of Hecke algebras. Most of the classes which are known to satisfy these properties are covered by this approach, and we also describe some new ones.
Introduction
A Hecke pair (G, Γ) consists of a group G and a subgroup Γ ⊆ G for which every double coset ΓgΓ is the union of finitely many left cosets. In this case Γ is also said to be a Hecke subgroup of G. Examples of Hecke subgroups include finite subgroups, finite-index subgroups and normal subgroups. Hecke subgroups are also sometimes called almost normal subgroups (although we will not use this terminology here) and it is in fact many times insightful to think of this definition as a generalization of the notion of normality of a subgroup.
Given a Hecke pair (G, Γ) the Hecke algebra H(G, Γ) is a * -algebra of functions over the set of double cosets Γ\G/Γ, with a suitable convolution product and involution. It generalizes the definition of the group algebra C(G/Γ) of the quotient group when Γ is a normal subgroup.
The interest in Hecke algebras in the realm of operator algebras was to a large extent raised through the work of Bost and Connes [4] on phase transitions in number theory, and since then several authors have studied C*-algebras which arise as completions of Hecke algebras. There are several canonical C*-completions of a Hecke algebra ([21], [9]) and the question of existence of a maximal one (i.e. an enveloping C*-algebra) has been of particular interest ([4], [1], [5], [8], [21], [7], [12], [3], [9], [6]). One of the reasons for that, first explored by Hall [8], has to do with how well *-representations of a Hecke algebra H(G, Γ) correspond to unitary representations of the group G that are generated by their Γ-fixed vectors. It was shown by Hall [8] that for such a correspondence to hold it is necessary that the Hecke algebra has an enveloping C*-algebra, which does not always happen. It was later clarified by Kaliszewski, Landstad and Quigg [9] that such a correspondence holds precisely when an enveloping C*-algebra exists and coincides with other canonical C*-completions.
The problem of deciding if a Hecke algebra has an enveloping C * -algebra seems to be of a non-trivial nature, with satisfactory answers, arising from various distinct methods, known only for certain classes of Hecke pairs.
The main motivation for the present article is to give a unified approach to this problem for a large class of Hecke pairs. We recover most of the known cases in the literature but also several new ones. We achieve this by associating a directed graph to a Hecke algebra H(G, Γ), whose vertices are the double cosets and whose directed edges are determined by how products of the form (ΓgΓ)* * ΓgΓ decompose as sums of double cosets. We prove that finiteness of the co-hereditary set generated by a vertex ΓgΓ, i.e. the set of vertices one encounters by moving forward in the graph starting from ΓgΓ, implies that
sup_π ‖π(ΓgΓ)‖ < ∞,
where the supremum runs over the *-representations of the Hecke algebra. Thus, analysing these co-hereditary sets gives valuable information regarding the existence of enveloping C*-algebras. Moreover, we prove that if all double cosets generate finite co-hereditary sets, then the enveloping C*-algebra of H(G, Γ) exists and coincides with C*(L^1(G, Γ)), the enveloping C*-algebra of the L^1-Hecke algebra (one of the canonical C*-completions).
We develop certain tools, based on iterated commutators on the group G, that allow us to show that our assumptions hold in a variety of classes of Hecke pairs, and thus enable us to answer affirmatively the question of existence of enveloping C * -algebras of the corresponding Hecke algebras. Some of the new results we prove state that if a group G satisfies some generalized nilpotency property, then for any Hecke subgroup Γ the Hecke algebra H(G, Γ) has an enveloping C * -algebra which coincides with C * (L 1 (G, Γ)). These results will enable us to show, in a following article (see [16]), that for any G satisfying such properties, Hall's correspondence holds for any Hecke subgroup.
We also notice that the classes of Hecke algebras studied in the present work, and therefore most of those studied in the literature, satisfy a stronger property than just having an enveloping C * -algebra: they are in fact BG * -algebras. The standard reference for this class of * -algebras is Palmer [18], but we also give a short description in Section 1. The reason for considering this stronger property is not only because of how well-behaved these * -algebras are, but also because it is natural to consider BG * -Hecke algebras in the context of crossed products by Hecke pairs (see [15]).
The paper is organized as follows: in Section 1 we set up the conventions, notation, and background results regarding * -algebras, Hecke algebras and also directed graphs, that will be used throughout the article. In the setting of directed graphs the most important notion will be that of a co-hereditary set.
In Section 2 we associate a directed graph to any * -algebra with a given basis and prove our first main result, which states that we can put a bound on the norm of all representations of the elements of a finite co-hereditary set. As a consequence, if the * -algebra with the given basis is generated by its finite co-hereditary sets, it must have an enveloping C * -algebra.
In Section 3 we prove the second main result of this article: that in the case of a Hecke algebra H(G, Γ) if all double cosets generate finite co-hereditary sets, then an enveloping C * -algebra exists and it coincides with C * (L 1 (G, Γ)).
In Section 4 we present some tools for determining if a co-hereditary set generated by a double coset is finite. These methods will be then used in Section 5 to study the existence of an enveloping C * -algebra, and the isomorphism with C * (L 1 (G, Γ)), for several classes of Hecke pairs (G, Γ).
In Section 6 we show that the problem of existence of an enveloping C *algebra for H(G, Γ) can be reduced to the same problem but for a smaller Hecke subalgebra H(H, Γ), where Γ ⊆ H ⊆ G is an ascendant subgroup.
Finally in Section 7 we give some concluding remarks and state some open questions.
The present work is part of the author's Ph.D. thesis [15] written at the University of Oslo. The author would like to thank his advisor Nadia Larsen for the very helpful discussions, suggestions and comments during the elaboration of this work.
Preliminaries on * -Algebras
Let V be an inner product space over C. Recall that a function T : V → V is said to be adjointable if there exists a function T* : V → V such that
⟨Tξ, η⟩ = ⟨ξ, T*η⟩, for all ξ, η ∈ V.
Recall also that every adjointable operator T is necessarily linear and that T* is unique and adjointable with T** = T. We will use the following notation:
• L(V) denotes the *-algebra of all adjointable operators in V
• B(V) denotes the *-algebra of all bounded adjointable operators in V.
Of course, we always have B(V ) ⊆ L(V ), with both * -algebras coinciding when V is a Hilbert space (see, for example, [18, Proposition 9.1.11]).
Following [18, Def. 9.2.1], we define a pre-*-representation of a *-algebra A on an inner product space V to be a *-homomorphism π : A → L(V) and a *-representation of A on a Hilbert space H to be a *-homomorphism π : A → B(H). As in [17, Def. 4], a pre-*-representation π is said to be bounded if π(a) is a bounded operator for all a ∈ A. We now make a seemingly similar definition, but where the focus is on the elements of the *-algebra, instead of its pre-*-representations:

Definition 1.1. Let A be a *-algebra. We will say that an element a ∈ A is automatically bounded if π(a) ∈ B(V) for any pre-*-representation π : A → L(V). The set of all automatically bounded elements of A will be denoted by A_b.

Easy examples of automatically bounded elements in a *-algebra are unitaries, projections, or more generally, partial isometries.
Given a *-algebra A let
‖a‖_u := sup_π ‖π(a)‖, for a ∈ A,
where the supremum is taken over all *-representations of A; the function ‖·‖_u will be called the universal norm of A. An element a ∈ A will be said to have a bounded universal norm if ‖a‖_u < ∞, and the set of all elements a ∈ A which have a bounded universal norm will be denoted by A_u, i.e.
A_u := {a ∈ A : ‖a‖_u < ∞}.
When A u = A the universal norm becomes a true C * -seminorm, being actually the largest possible C * -seminorm in A. The Hausdorff completion of A in the universal norm is then a C * -algebra called the enveloping C * -algebra of A, which enjoys a number of universal properties (see [18,Theorem 10.1.11] and [18,Theorem 10.1.12]). For this reason, when every element a ∈ A has a bounded universal norm, i.e. A u = A, it is said that A has an enveloping C * -algebra.
In general, a * -algebra does not necessarily have an enveloping C * -algebra. Perhaps the most basic example is that of a polynomial * -algebra in a single self-adjoint variable.
We now look at the relation between automatically bounded elements and elements with a bounded universal norm. It is known that every BG*-algebra has an enveloping C*-algebra ([18, Proposition 10.1.19]), and the same proof yields the following slightly more general result, that an automatically bounded element has a bounded universal norm:

Proposition 1.3. Let A be a *-algebra. Then A_b ⊆ A_u. In particular if A is a BG*-algebra, then A has an enveloping C*-algebra.
Proof: Suppose a ∉ A_u. Then there is a sequence of representations {π_i}_{i∈N}, on Hilbert spaces {H_i}_{i∈N}, such that ‖π_i(a)‖ → ∞. Consider now the inner product space V defined as the algebraic direct sum V := ⊕_{i∈N} H_i and the pre-*-representation π := ⊕_{i∈N} π_i of A on V. It is clear by construction that π(a) ∉ B(V). Hence, a ∉ A_b.
Preliminaries on Hecke Algebras
We will mostly follow [10] and [9] in what regards Hecke pairs and Hecke algebras and refer to these references for more details.
Definition 1.4. Let G be a group and Γ a subgroup. The pair (G, Γ) is called a Hecke pair if every double coset ΓgΓ is the union of finitely many right (and left) cosets. In this case, Γ is also called a Hecke subgroup of G.
Given a Hecke pair (G, Γ) we will denote by L and R, respectively, the left and right coset counting functions, i.e.
L(g) := #(ΓgΓ/Γ) and R(g) := #(Γ\ΓgΓ).
We recall that L and R are Γ-biinvariant functions which satisfy L(g) = R(g^{-1}) for all g ∈ G. Moreover, the function Δ : G → Q+ given by
Δ(g) := L(g)/R(g)
is a group homomorphism, usually called the modular function of (G, Γ).

Remark 1.6. Some authors, including Krieg [10], do not include the factor Δ in the involution. Here we adopt the convention of Kaliszewski, Landstad and Quigg [9] in doing so, as it gives rise to a more natural L^1-norm. We note, nevertheless, that there is no loss (or gain) in doing so, because these two different involutions give *-isomorphic Hecke algebras. In particular, the question of existence of an enveloping C*-algebra is not perturbed by this.
The Hecke algebra has a natural basis, as a vector space, given by the characteristic functions of double cosets. We will henceforward identify a characteristic function of a double coset χ_{ΓgΓ} with the double coset ΓgΓ itself. It will be useful to know how to write a product ΓgΓ * ΓhΓ of two double cosets as the unique linear combination of double cosets:

Lemma 1.7. The expression for the product ΓgΓ * ΓhΓ of two double cosets as the unique linear combination of double cosets is given by:
ΓgΓ * ΓhΓ = Σ_{ΓsΓ ∈ Γ\G/Γ} (L(g) C_{g,h}(s) / L(s)) · ΓsΓ,
where C_{g,h}(s) := #{wΓ ⊆ ΓhΓ : ΓgwΓ = ΓsΓ}.
Proof: Let us first check that C_{g,h}(s) is well-defined. It is clear that C_{g,h}(s) does not depend on the representatives h and s of the chosen double cosets, so it remains to verify that it is also independent of g. Given any other representative βgγ of the double coset ΓgΓ, with β, γ ∈ Γ, it is not difficult to see that the map wΓ ↦ γ^{-1}wΓ gives a bijective correspondence between the sets {wΓ ⊆ ΓhΓ : ΓgwΓ = ΓsΓ} and {uΓ ⊆ ΓhΓ : ΓβgγuΓ = ΓsΓ}. Hence we have C_{βgγ,h}(s) = C_{g,h}(s). Now, to check the product formula we recall (for example from [9]) that
(1) ΓgΓ * ΓhΓ = Σ_{wΓ ⊆ ΓhΓ} (L(g)/L(gw)) · ΓgwΓ,
where the sum runs over a set of representatives for left cosets in ΓhΓ. Let us fix a representative g for the double coset ΓgΓ and let S be the set of double cosets S := {ΓgwΓ : wΓ ∈ ΓhΓ/Γ}, i.e. the set of double cosets that appear as summands in (1). The number of times an element ΓsΓ ∈ S appears repeated in the sum (1) is precisely the number C_{g,h}(s), and each such occurrence carries the coefficient L(g)/L(s). Hence we can write
ΓgΓ * ΓhΓ = Σ_{ΓsΓ ∈ S} (L(g) C_{g,h}(s) / L(s)) · ΓsΓ.
Also, if a double coset ΓrΓ does not belong to S we have C_{g,h}(r) = 0, thus we get
ΓgΓ * ΓhΓ = Σ_{ΓsΓ ∈ Γ\G/Γ} (L(g) C_{g,h}(s) / L(s)) · ΓsΓ.
The reader can find alternative ways of describing the coefficients of this unique linear combination in [10, Lemma 4.4]. In particular, the characterization (iii) of the cited lemma is very similar to the one we just described.
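As a concrete illustration of Lemma 1.7 (a toy example of our own, not taken from [9] or [10]): let G = S_3 and Γ = {e, (12)}, so that Γ\G/Γ = {Γ, ΓaΓ} with a = (13) and ΓaΓ = G ∖ Γ. Here L(e) = 1, L(a) = 2, and ΓaΓ decomposes into the left cosets aΓ and (132)Γ. Counting as in the lemma gives C_{a,a}(e) = 1 (only w = a satisfies ΓawΓ = Γ) and C_{a,a}(a) = 1, so

\[
\Gamma a \Gamma * \Gamma a \Gamma
= \frac{L(a)\,C_{a,a}(e)}{L(e)}\,\Gamma + \frac{L(a)\,C_{a,a}(a)}{L(a)}\,\Gamma a \Gamma
= 2\,\Gamma + \Gamma a \Gamma .
\]

Since the group is finite we have Δ ≡ 1 and (ΓaΓ)* = Γa^{-1}Γ = ΓaΓ; anticipating the graph of Section 2, the successors of the vertex ΓaΓ are therefore Γ and ΓaΓ itself, so ΓaΓ generates the finite co-hereditary set {Γ, ΓaΓ}.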
Remark 1.8. A direct computation or Lemma 1.7 imply that the double cosets that appear in the expression for ΓgΓ * ΓhΓ as a unique linear combination of double cosets are all of the form ΓgγhΓ, for some γ ∈ Γ. Conversely, all double cosets of the form ΓgγhΓ, with γ ∈ Γ, appear in this linear combination, because C_{g,h}(gγh) ≠ 0.
Another basic property of Hecke algebras which we will need is the following: given a Hecke pair (G, Γ) and a subgroup K such that Γ ⊆ K ⊆ G, then (K, Γ) is a Hecke pair and H(K, Γ) is naturally seen as a *-subalgebra of H(G, Γ). This is a particular case of [10, Lemma 4.9].

Definition 1.9. The L^1-norm on H(G, Γ), denoted ‖·‖_{L^1}, is given by
‖f‖_{L^1} := Σ_{ΓgΓ ∈ Γ\G/Γ} |f(ΓgΓ)| L(g).
The completion L^1(G, Γ) of H(G, Γ) under this norm is a Banach *-algebra.
• C*(L^1(G, Γ)) — The enveloping C*-algebra of the Banach *-algebra L^1(G, Γ).
• pC*(Ḡ)p — The corner of the full group C*-algebra of the Schlichting completion (Ḡ, Γ̄) of the pair (G, Γ), for the projection p := χ_Γ̄. We will not describe this construction here since it is well documented in the literature ([21], [7], [9]) and because we will not make use of Schlichting completions in this work.
• C*(G, Γ) — The enveloping C*-algebra (if it exists!) of H(G, Γ). When it exists, it is usually called the full Hecke C*-algebra.
These different C*-completions of H(G, Γ) are related in the following way, through canonical surjective maps:
C*(G, Γ) → C*(L^1(G, Γ)) → pC*(Ḡ)p → C*_r(G, Γ).
As was pointed out by Hall [8, Proposition 2.21], a Hecke algebra does not need to have an enveloping C*-algebra in general, with the Hecke algebra of the pair (SL_2(Q_p), SL_2(Z_p)) being one such example, where p is a prime number and Q_p, Z_p denote respectively the field of p-adic numbers and the ring of p-adic integers.
Preliminaries on Directed Graphs
Recall that a simple directed graph G := (B, E) consists of a set B, whose elements are called vertices, and a subset E ⊆ B 2 , whose elements are called edges. An edge is thus a pair of vertices (a, b), which we see as directed from a to b. Since we are only interested in directed graphs that are simple, i.e. such that there is at most one edge directed from one vertex to another, we will henceforward drop the word simple and simply write directed graph.
Let us now set some notation. Let G := (B, E) be a directed graph. If the ordered pair (a, b) belongs to E we say that b is a successor of a.

Definition 1.10. Let G := (B, E) be a directed graph. A set of vertices X ⊆ B is said to be co-hereditary if it contains all the successors of all its elements.

It is easy to see that an arbitrary intersection of co-hereditary sets is still a co-hereditary set. Hence, we can talk about the co-hereditary set generated by a subset X ⊆ B of vertices:

Definition 1.11. Let G := (B, E) be a directed graph and X ⊆ B a set of vertices. The co-hereditary set generated by X is the smallest co-hereditary set that contains X.
Given a directed graph G := (B, E) and a set of vertices X ⊆ B, we will denote by S(X) the set of all the successors of all elements of X, i.e.
S(X) := {b ∈ B : b is a successor of some a ∈ X}.
Similarly, we define the n-th successor set of X inductively as follows:
S^0(X) := X and S^{n+1}(X) := S(S^n(X)), for n ∈ N_0.
In this way, the 0-th successor set is simply the set X, the 1-st successor set is S(X), the 2-nd successor set is S(S(X)), etc. We will often consider X to be a singleton set X = {b}, and in this case we will use the notation S(b) instead of S({b}). The following result follows easily from the definitions:

Lemma 1.12. The co-hereditary set generated by X is the set ⋃_{n∈N_0} S^n(X).
Remark 1.13. The sets of vertices we are going to consider in our applications will be sets with specific additional structure (for instance, the set of vertices will typically be a basis of a vector space), and we are interested in proving results of the type: all elements of the co-hereditary set generated by X have a certain property P . To do so, we use a certain form of "induction". Namely, if we prove that all elements of X have the property P , and if we prove that the property P is preserved upon taking successors, then by Lemma 1.12 and the usual induction on N, all elements of the co-hereditary set generated by X will also satisfy P .
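Lemma 1.12 makes the co-hereditary set generated by X computable by simply iterating the successor map. The following sketch is ours, not part of the original development; the names are placeholders, and `successors` stands for whatever concrete successor function the application provides. It performs the iteration S^n(X) as a breadth-first search, with an optional size cap so that an apparently infinite closure can be abandoned:

```python
from collections import deque

def cohereditary_closure(X, successors, max_size=None):
    """Return the co-hereditary set generated by X, i.e. the union of the
    iterated successor sets S^n(X) of Lemma 1.12, computed by BFS.

    successors(b) must return an iterable of the successors of the vertex b.
    If max_size is given and the closure grows past it, None is returned,
    signalling that the closure is (at least apparently) infinite.
    """
    closure, queue = set(X), deque(X)
    while queue:
        for b in successors(queue.popleft()):
            if b not in closure:
                closure.add(b)
                queue.append(b)
                if max_size is not None and len(closure) > max_size:
                    return None
    return closure
```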
Graph Associated with a * -Algebra
Let A be a *-algebra. Suppose that we are given a finite set of elements {b_1, . . . , b_n} ⊆ A satisfying relations of the form
(2) b_i* b_i = Σ_{j=1}^n λ_{ij} b_j , i = 1, . . . , n,
where λ_ij ∈ C for each i, j ∈ {1, . . . , n}. We claim that the elements b_1, . . . , b_n are automatically bounded, and this fact will pave the way for our study of existence of enveloping C*-algebras:

Theorem 2.1. Let A be a *-algebra and {b_1, . . . , b_n} ⊆ A a finite set of elements satisfying relations as in (2). Then the elements b_1, . . . , b_n are automatically bounded. In particular they have a bounded universal norm.
In order to prove Theorem 2.1 we will need the following lemma:

Lemma 2.2. Let n ∈ N and λ_ij ∈ C for i, j ∈ {1, . . . , n}. The set
B := {(x_1, . . . , x_n) ∈ (R+_0)^n : x_i² ≤ Σ_{j=1}^n |λ_ij| x_j for all i = 1, . . . , n}
is bounded in R^n.

Proof: Let us denote by β the real number β := max_{i,j} |λ_ij| and let B̃ be the set defined by
B̃ := {(x_1, . . . , x_n) ∈ (R+_0)^n : x_i² ≤ β Σ_{j=1}^n x_j for all i = 1, . . . , n}.
We have B ⊆ B̃. Hence, it is enough to prove that the set B̃ is bounded. As it is well known, linear functions in R grow faster than square roots, thus it is clear that the set {t ∈ R+_0 : t ≤ n√(βt)} is bounded in R. Let S : (R+_0)^n → R+_0 be the sum function S(x_1, . . . , x_n) := x_1 + · · · + x_n. For every x ∈ B̃ we have x_i ≤ √(β S(x)), so that S(x) ≤ n√(β S(x)), i.e. S(B̃) is contained in the bounded set above. Since S is only defined for elements in (R+_0)^n, the pre-image by S of a bounded set in R is also a bounded set in (R+_0)^n. We conclude that B̃, and therefore B, is bounded.
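Although not stated explicitly in the text, the argument in Lemma 2.2 yields a concrete bound; writing β := max_{i,j} |λ_{ij}|, one gets (our own remark):

\[
x_i^2 \le \beta \sum_{j=1}^n x_j
\;\Longrightarrow\;
\sum_{j=1}^n x_j \le n\sqrt{\beta \textstyle\sum_{j=1}^n x_j}
\;\Longrightarrow\;
\sum_{j=1}^n x_j \le n^2\beta
\;\Longrightarrow\;
x_i \le n\beta .
\]

Combined with the proof of Theorem 2.1 below, this gives ‖π(b_i)‖ ≤ n · max_{j,k} |λ_{jk}| for every pre-*-representation π.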
Proof of Theorem 2.1: Let {b_1, . . . , b_n} ⊆ A be a finite set in A satisfying relations as in (2) and B ⊆ (R+_0)^n the set defined by
B := {(x_1, . . . , x_n) ∈ (R+_0)^n : x_i² ≤ Σ_{j=1}^n |λ_ij| x_j for all i = 1, . . . , n}.
Let π : A → L(V) be a pre-*-representation and ξ ∈ V a vector such that ‖ξ‖ = 1. We have that
‖π(b_i)ξ‖² = ⟨π(b_i* b_i)ξ, ξ⟩ = Σ_{j=1}^n λ_ij ⟨π(b_j)ξ, ξ⟩ ≤ Σ_{j=1}^n |λ_ij| ‖π(b_j)ξ‖.
Hence it follows that (‖π(b_1)ξ‖, . . . , ‖π(b_n)ξ‖) ∈ B. Since the definition of the set B is independent of π and ξ, and since by Lemma 2.2 we know that B is bounded in R^n, it follows that each π(b_i) is a bounded operator, i.e. the elements b_1, . . . , b_n are automatically bounded.

In practice though, Theorem 2.1 can be difficult to apply, as in general one is not given a set of elements {b_1, . . . , b_n} satisfying the prescribed relations, especially if the structure of the *-algebra A is not well understood. For this reason we will describe a more algorithmic approach to Theorem 2.1 where the set {b_1, . . . , b_n} is not given from the start, but it is instead constructed step-by-step starting from one element b_1. This method will be explained through the language of graphs and will be especially useful when applied to Hecke algebras, where knowledge from the Hecke pair can many times be used to show that sets of elements {b_1, . . . , b_n} satisfying (2) abound.
Let A be a *-algebra and B a basis of A as a vector space. Given a basis element b_0 ∈ B we will denote by Φ_{b_0} the unique linear functional Φ_{b_0} : A → C such that Φ_{b_0}(b_0) = 1 and Φ_{b_0}(b) = 0 for every other b ∈ B.

Definition 2.3. Given a *-algebra A with a specified basis B, we define its associated graph as the directed graph G := (B, E), whose set of vertices is the set B and whose set of edges is the set
E := {(a, b) ∈ B × B : Φ_b(a* a) ≠ 0}.

Thus, given a vertex a ∈ B, its successors are precisely those basis elements that have non-zero coefficients in the unique expression of a* a as a linear combination of elements of B, i.e. if a* a = k_1 b_1 + · · · + k_n b_n, where each k_i ∈ C is non-zero and the basis elements b_i are all different, then the successors of a are precisely b_1, . . . , b_n.

Proposition 2.4. Let A be a *-algebra with basis B and G its associated graph. If X ⊆ B is a finite co-hereditary set in G, then all elements of X are automatically bounded. In particular, all elements of X have a bounded universal norm.
Since X contains the successors of all its elements, writing X = {b_1, . . . , b_n} we must necessarily have relations as in (2), for some elements λ_ij ∈ C (possibly being zero). It then follows from Theorem 2.1 that all elements b_1, . . . , b_n are automatically bounded and in particular have a bounded universal norm.
Corollary 2.5. Let A be a *-algebra, B a basis for A and G its associated graph. If A is generated as a *-algebra by the elements of the finite co-hereditary sets of G, then A is a BG*-algebra. In particular A has an enveloping C*-algebra.
Proof: Let B_0 be the set of elements of the finite co-hereditary sets of G. By Proposition 2.4, every element of B_0 is automatically bounded; since B_0 generates A as a *-algebra, A is a BG*-algebra.

We can interpret the above corollary in the following (equivalent) way: suppose we have a *-algebra A with a basis B. Suppose additionally that we have a particular set B_0 ⊂ B which generates A. If all the elements of B_0 generate finite co-hereditary sets of the associated graph, then A has an enveloping C*-algebra. Let us now give a couple of immediate examples:

Example 2.6. Let A be a finite-dimensional *-algebra. If we take any basis B, the associated graph necessarily has finitely many vertices (and edges). Thus, the co-hereditary set generated by any b ∈ B is finite.
Example 2.7. Let G be a discrete group and C(G) its group algebra with basis {δ_g ∈ C(G) : g ∈ G}. Since in the group algebra we have δ_g* * δ_g = δ_e, the only successor of δ_g in the associated graph is δ_e. Since δ_e is the only successor of itself, the co-hereditary set generated by δ_g has only two elements, δ_g and δ_e. Some non-trivial examples, arising from Hecke algebras, will be computed later in Section 5.
A sufficient condition implying the isomorphism
In Corollary 2.5 of the previous section we obtained a sufficient condition for a *-algebra to have an enveloping C*-algebra, namely when it is generated by elements of the finite co-hereditary sets (with respect to a given basis). In this section we will improve this result in the case of a Hecke algebra H(G, Γ): under a suitable assumption we will not only assure that an enveloping C*-algebra C*(G, Γ) exists, but we will also be able to identify it with C*(L¹(G, Γ)). Throughout this section and henceforward (G, Γ) will denote a Hecke pair. We will always consider the canonical basis in the Hecke algebra H(G, Γ), consisting of double cosets {ΓgΓ : g ∈ G}. This section is devoted to the proof of the following result (Theorem 3.1, whose statement is reconstructed below). In order to give a proof of Theorem 3.1 we will make use of several lemmas.
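The statement of Theorem 3.1 is elided above. From the way it is invoked throughout Section 5, it presumably reads as follows (a reconstruction, not a quotation):

Theorem 3.1. Let (G, Γ) be a Hecke pair such that every double coset ΓsΓ generates a finite co-hereditary set in the associated graph of H(G, Γ). Then the enveloping C*-algebra C*(G, Γ) exists and C*(G, Γ) ≅ C*(L¹(G, Γ)).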
In particular the following equality is also satisfied for any f ∈ H(G, Γ) such that f (ΓgΓ) ≥ 0 for all ΓgΓ ∈ Γ\G/Γ:
Proof:
The second claim in this lemma follows directly from the first statement.

Lemma 3.3. Let n ∈ N and A = [a_ij] be an n × n matrix whose entries satisfy: a_ii ∈ R^+ and a_ij ∈ R_0^- for all i ≠ j. If there are vectors d = (d_1, . . . , d_n) and z = (z_1, . . . , z_n), both in (R^+)^n, satisfying the system (4): Az = d, then A is non-singular.
Proof: Let z ∈ (R^+)^n be a solution to the above system. Suppose that Ker A ≠ {0}. Then, the set of solutions to the system (4) contains a line L. Consider now the set S of all the (finitely many) points which are the intersections of L with the canonical hyperplanes of the form x_i = 0, and take a point y ∈ S (not necessarily unique) which is closest to z. The point y is the intersection of L with one of the hyperplanes x_i = 0, say x_{i_0} = 0 with 1 ≤ i_0 ≤ n. Since y = (y_1, . . . , y_n) is in L, it is also a solution of the system (4) and must therefore satisfy it, implying that there exists at least one number y_k which is negative. But on the other hand, the open segment between z and y lies inside (R^+)^n, because z ∈ (R^+)^n and this segment does not intersect any hyperplane x_i = 0 (by choice of the point y). Thus the entries of y = (y_1, . . . , y_n) are all non-negative, which is a contradiction. Therefore Ker A = {0}.
In preparation for the next lemma we set some notation. Given two vectors a = (a_1, . . . , a_n) and b = (b_1, . . . , b_n) ∈ R^n, we will write a ≤ b whenever a_i ≤ b_i for every 1 ≤ i ≤ n. We will denote the zero vector by 0 = (0, . . . , 0). Also, given a set of vectors S ⊆ R^n, we will denote by C(S) the cone generated by S, i.e. the set of all linear combinations with coefficients in R_0^+ of the elements of S.
Lemma 3.4. Let n ∈ N and A = [a_ij] be an n × n matrix whose entries satisfy: a_ii ∈ R^+ and a_ij ∈ R_0^- for all i ≠ j. Assume that there are vectors d = (d_1, . . . , d_n) > 0 and z = (z_1, . . . , z_n) > 0 satisfying the system Az = d.
Then, if Ay ≥ 0 for some y ∈ R^n, we must have y ≥ 0.
Proof: As we are in the conditions of Lemma 3.3, the matrix A is non-singular. First we claim that {y : Ay ≥ 0} = C(A⁻¹e_1, . . . , A⁻¹e_n), where e_1, . . . , e_n ∈ R^n are the canonical unit vectors. The inclusion ⊇ is obvious, while the inclusion ⊆ follows from the fact that if Ay ≥ 0 then we can write Ay as a positive linear combination of e_1, . . . , e_n. Thus, to prove this lemma it suffices to prove that A⁻¹e_k ≥ 0 for every 1 ≤ k ≤ n, and we will show this by induction on n. The case n = 1 is obvious since a_11 ∈ R^+. Let us now assume that the result holds for n − 1, and prove it for n. Let B_k be the matrix obtained from A by deleting the k-th row and column. Since Az = d, it follows readily that (5) holds. Since the right-hand side of (5) is a vector in (R^+)^{n−1}, and moreover the entries of the matrix B_k satisfy the conditions in the statement of the lemma, we can use the induction hypothesis on the matrix B_k. Let v := (v_1, . . . , v_{k−1}, v_{k+1}, . . . , v_n) ∈ R^{n−1} be a solution to the corresponding equation, which exists by Lemma 3.3 (the reason for the chosen indexing of the entries of v will become clear in the remaining part of the proof). The induction hypothesis tells us that v ≥ 0. By the induction hypothesis again, we have z_i − v_i ≥ 0 for i ≠ k. Consider now the vector ṽ ∈ R^n obtained from v. We now notice that d_k − Σ_{i≠k} a_{ki} v_i > 0, because all the a_{ki} ∈ R_0^- for k ≠ i, v_i ≥ 0 as we saw before, and d_k > 0. We have already proven that z_i − v_i ≥ 0 for i ≠ k, from which it readily follows that z − ṽ ≥ 0. We can now conclude that A⁻¹e_k ≥ 0, as required.
Proof of Theorem 3.1:
We already know that if all double cosets generate finite co-hereditary sets, then H(G, Γ) has an enveloping C*-algebra. Thus, it remains to see that this enveloping C*-algebra is the enveloping C*-algebra of L¹(G, Γ), and for this we only need to show that

‖a‖_u ≤ ‖a‖_{L¹}   (6)

for any a ∈ H(G, Γ). Actually we only need to prove (6) when a is a double coset a = ΓsΓ, since the result for a general a ∈ H(G, Γ) follows from the following argument: if we write a as the unique linear combination of double cosets, a = Σ_{i=1}^n λ_i Γs_iΓ, then we have ‖a‖_u ≤ Σ_{i=1}^n |λ_i| ‖Γs_iΓ‖_u ≤ Σ_{i=1}^n |λ_i| ‖Γs_iΓ‖_{L¹} = ‖a‖_{L¹}. Let therefore ΓsΓ be a double coset and {Γs_1Γ, . . . , Γs_nΓ} the finite co-hereditary set it generates. By Lemma 1.7 we have the expansion (7), with coefficients λ_ij.
Let B be the set, and C the subset of B, defined as reconstructed below. It follows immediately from the triangle inequality applied to (7) that the universal norm (in fact, any C*-norm) satisfies (‖Γs_1Γ‖_u, . . . , ‖Γs_nΓ‖_u) ∈ B. Moreover, from Lemma 3.2, the L¹-norm satisfies the corresponding equalities; thus (‖Γs_1Γ‖_{L¹}, . . . , ‖Γs_nΓ‖_{L¹}) ∈ C. For ease of reading we will denote by z := (z_1, . . . , z_n) the point (‖Γs_1Γ‖_{L¹}, . . . , ‖Γs_nΓ‖_{L¹}). The idea for the remaining part of the proof is to argue that z ∈ C is the point with the largest coordinates in the whole set B.
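The defining conditions of B and C are elided in the original. From the triangle-inequality and Lemma 3.2 arguments just given, they are presumably (a reconstruction)

$$B = \Big\{\, y \in (\mathbb{R}_0^+)^n : y_i^2 \le \sum_{j=1}^n \lambda_{ij}\, y_j \ \text{ for all } i \,\Big\}, \qquad C = \Big\{\, y \in B : y_i^2 = \sum_{j=1}^n \lambda_{ij}\, y_j \ \text{ for all } i \,\Big\}.$$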
For each 1 ≤ i ≤ n let g_i : (R_0^+)^n → R be the function written out below (see the reconstruction after this paragraph). The tangent hyperplane to the graph of g_i at the point (z_1, . . . , z_n) is given by the usual equation, which, using the fact that (z_1, . . . , z_n) is a zero of g_i, we can reduce to the form (8). We claim that 2z_i − λ_ii > 0; to see this one uses again the fact that z is a zero of g_i. Let us now take A = [a_ij] to be the n × n matrix whose entries are given by a_ij := −λ_ij for i ≠ j, and a_ii := 2z_i − λ_ii; thus a_ij ∈ R_0^- for i ≠ j and a_ii ∈ R^+. We can easily see from (8) that Az = z², where z² = (z_1², . . . , z_n²). Consider now the set W defined by the condition Ay ≤ z². We claim that W contains the set B. To see this, let (y_1, . . . , y_n) ∈ B. We then have (Ay)_i ≤ z_i² for each i, and thus (y_1, . . . , y_n) ∈ W. In other words, if y ∈ B, then Ay ≤ z². We can rewrite this inequality as A(z − y) ≥ 0. Noting that we are under the conditions of Lemma 3.4, because the entries of A satisfy the required conditions and Az = z², we conclude that 0 ≤ z − y, i.e. y ≤ z. Thus, we conclude that z has bigger coordinates than any other point in B.
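The formulas elided in the preceding paragraph admit a natural reconstruction, inferred from the surrounding computations rather than taken from the original. Presumably

$$g_i(y_1, \dots, y_n) = y_i^2 - \sum_{j=1}^n \lambda_{ij}\, y_j,$$

so that g_i(z) = 0 by membership of z in C, and the tangent hyperplane at z reduces to

$$(2z_i - \lambda_{ii})\, y_i - \sum_{j \neq i} \lambda_{ij}\, y_j = z_i^2. \qquad (8)$$

Evaluating (8) at y = z gives Az = z², and for y ∈ B one has (Ay)_i = 2z_i y_i − Σ_j λ_ij y_j ≤ 2z_i y_i − y_i² ≤ z_i², since (z_i − y_i)² ≥ 0; this is exactly the claimed inclusion B ⊆ W.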
As we know, we have (‖Γs_1Γ‖_u, . . . , ‖Γs_nΓ‖_u) ∈ B, so by the above we must have ‖Γs_iΓ‖_u ≤ z_i = ‖Γs_iΓ‖_{L¹} for any 1 ≤ i ≤ n. Thus, in particular, ‖ΓsΓ‖_u ≤ ‖ΓsΓ‖_{L¹} for the initial double coset ΓsΓ. Since all double cosets generate finite co-hereditary sets we conclude that this inequality holds for any double coset ΓsΓ, and as we explained in the beginning of the proof, this implies that the enveloping C*-algebra of H(G, Γ) is C*(L¹(G, Γ)).
Methods for Hecke Algebras
The basis of our study of enveloping C * -algebras of Hecke algebras will be Corollary 2.5 and Theorem 3.1. Our goal is to apply these results to several classes of Hecke pairs, but so far we have not given any hint on how to actually ensure that a given double coset generates a finite co-hereditary set. The objective of this section is to provide some tools, based on iterated commutators, to help us accomplish this task.
Given a group G we will denote by [s, t] the commutator of s, t ∈ G (the convention, together with the iterated commutators used below, is reconstructed after the following proof). Let us now return to Hecke pairs (G, Γ). We will be mostly interested in commutators of the form [g, γ_1, . . . , γ_n], where g ∈ G and γ_1, . . . , γ_n ∈ Γ, and the reason for that is given by the following result (Proposition 4.1).

Proof: We will choose such a sequence {γ_n}_{n∈N} inductively on n ∈ N. Suppose n = 1. Since Γx_1Γ is a successor of ΓgΓ, it must be of the form Γx_1Γ = Γg⁻¹γgΓ for some γ ∈ Γ (see Remark 1.8). Now we notice that
Hence, since we can extend any finite sequence γ 1 , . . . , γ n satisfying the stated conditions to a sequence γ 1 , . . . , γ n , γ n+1 still satisfying the stated conditions, it follows that there must be an infinite sequence {γ n } n∈N with the desired requirements.
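The commutator convention and the statement of Proposition 4.1 were both elided above; the following reconstruction is consistent with the computations in this section and in Section 5, but the precise convention is an assumption:

$$[s, t] := s^{-1} t^{-1} s t, \qquad [g, \gamma_1, \dots, \gamma_n] := \big[\,[g, \gamma_1, \dots, \gamma_{n-1}],\, \gamma_n\,\big].$$

Proposition 4.1 (presumed statement). If {Γx_nΓ}_{n∈N} is a chain of successors starting at ΓgΓ (i.e. Γx_1Γ is a successor of ΓgΓ and each Γx_{n+1}Γ is a successor of Γx_nΓ), then there exists a sequence {γ_n}_{n∈N} ⊆ Γ such that Γx_nΓ = Γ[g, γ_1, . . . , γ_n]Γ for all n ≥ 1.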
We will now establish a sufficient condition to ensure the finiteness of the co-hereditary set generated by an element ΓgΓ based on the iterated commutators we considered above: ii) Γx n+1 Γ is a successor of Γx n Γ, for all n ≥ 0.
, for all n ≥ 0. In particular, we have that Γx_iΓ ≠ Γx_jΓ for i ≠ j, implying that the set {Γx_nΓ : n ∈ N} is infinite.
By Proposition 4.1 there exists a sequence {γ_n}_{n∈N} ⊆ Γ such that Γx_nΓ = Γ[g, γ_1, . . . , γ_n]Γ for all n ≥ 1. But, by assumption, the number of double cosets in {Γ[g, γ_1, . . . , γ_n]Γ : n ∈ N} is finite. Thus we arrive at a contradiction and therefore the co-hereditary set generated by ΓgΓ must be finite.

There are different classes of Hecke pairs that satisfy conditions a) and b) of the above corollary. As we shall see in more detail in the next section, condition a) is satisfied by groups satisfying certain generalized nilpotency properties, whereas b) is satisfied when Γ is a subnormal subgroup of G, for example.
Classes of Hecke Pairs
We will now use the methods developed in the previous sections to study the existence of enveloping C * -algebras for several classes of Hecke algebras. Many of the well known results about the existence of a full Hecke C * -algebra for some classes of Hecke pairs will be recovered in a unified approach and some new classes will also be described. The isomorphism C * (G, Γ) ∼ = C * (L 1 (G, Γ)) will also be established in many of the considered classes.
It should also be noted that all the classes of Hecke algebras considered here are in fact BG * -algebras, since our methods can be traced back to Corollary 2.5, but since the focus is mostly on the existence of C * (G, Γ) we will not mention this in every case.
This section is organized as follows: the classes of Hecke pairs from 5.1 to 5.4 have been studied in the operator algebraic literature and results about the corresponding full Hecke C * -algebras are known. The results about the remaining classes, 5.5 to 5.12, are essentially new, with the results for the classes 5.5, 5.6 and 5.7 generalizing known results in the literature.
The classes we consider are presumably all different (in the sense of containment), with the notable exceptions of 5.5 which is a particular case of 5.6, and 5.1 which is a particular case of 5.7.
We would like to remark that the results discussed in this section illustrate how our methods apply for natural classes of Hecke pairs and that we have not, by any means, exhausted all the possible classes of Hecke pairs one can study through these methods.
Γ has Finite Index in G
When Γ has finite index in G, the pair (G, Γ) is automatically a Hecke pair, and the Hecke * -algebra is finite dimensional (actually, H(G, Γ) is finite dimensional if and only if Γ has finite index in G). As we have seen in Example 2.6, the co-hereditary set generated by a double coset is finite because the graph of H(G, Γ) is itself finite. Hence, Theorem 3.1 tells us that C * (G, Γ) exists and C * (G, Γ) ∼ = C * (L 1 (G, Γ)).
Of course this example, investigated by Hall [8, Section 4.2], is well-known and completely understood, because a finite dimensional *-algebra is automatically complete for any *-algebra norm. Hence we necessarily have that H(G, Γ) coincides with all of its completions, and all these C*-algebras are isomorphic to H(G, Γ), without having to invoke our Theorem 3.1.
With our methods we can show that C*(G, Γ) exists, since the Hecke algebra is in fact generated by finite co-hereditary sets. To see this, we first notice that, for t ∈ T, we have ΓtΓ = tΓ. Hence, we also have equality (9) for every s, t ∈ T, which means that the Hecke *-algebra is generated by the set of double cosets {ΓtΓ : t ∈ T}. Taking s = t in equality (9) we see that the only successor of the double coset ΓtΓ is Γ. Since Γ is the only successor of itself, it follows that the co-hereditary set generated by ΓtΓ has only two elements, ΓtΓ and Γ, and is therefore finite. We conclude that H(G, Γ) is generated by finite co-hereditary sets and therefore C*(G, Γ) exists by Corollary 2.5.
Iwahori Hecke Algebras
Let (G, Γ) be a Hecke pair such that H(G, Γ) is an Iwahori Hecke algebra (see [8,Definition 5.12] for a precise definition of this concept). Sets of generators and relations have been given for this class of Hecke algebras, but for our purposes we will only need to know that: 1. There is a set S ⊆ G of elements of order two such that H(G, Γ) is generated (as a * -algebra) by Γ and the double cosets ΓsΓ, with s ∈ S.
2. For every s ∈ S the following relation holds.

For the remaining relations in H(G, Γ), of which we will not make any use in this work, we refer the reader to Hall's thesis [8, Section 5.3.1].
It was proven by Hall [8, Proposition 2.24], through an estimate on the spectral radius of certain elements, that an Iwahori Hecke algebra has an enveloping C * -algebra (actually Hall proved this for the case (SL n (Q p ), B), with B ⊆ SL n (Q p ) an Iwahori subgroup, but her proof is completely general).
We can also conclude this from our methods, by proving that H(G, Γ) is generated by finite co-hereditary sets. By point 1) we only need to see that each double coset ΓsΓ with s ∈ S generates a finite co-hereditary set. So let ΓsΓ ∈ H(G, Γ) with s ∈ S. Since s has order two we see that ΓsΓ is self-adjoint and therefore relation 2) can be rewritten accordingly. Hence, the successors of ΓsΓ are only Γ and ΓsΓ itself. Thus, the co-hereditary set generated by ΓsΓ has only two elements, Γ and ΓsΓ, and is therefore finite. We conclude that H(G, Γ) is generated by finite co-hereditary sets, and is therefore a BG*-algebra and has an enveloping C*-algebra. By [9, Corollary 6.11] it is known that, for G = SL_2(Q_p) and Γ an Iwahori subgroup, we necessarily have C*(G, Γ) ≅ C*(L¹(G, Γ)). The analogous result for SL_n(Q_p) with n ≥ 3 is still open, as far as we know.
Γ is a Protonormal Subgroup of G
We recall that Γ is a protonormal subgroup of G (in the sense of Exel [6]) if for every s ∈ G we have Γ(s⁻¹Γs) = (s⁻¹Γs)Γ. Subgroups with this property are also called conjugate permutable subgroups in the literature.
It was proven by Exel ([6, Proposition 12.1]) that when Γ is a protonormal subgroup of G the enveloping C*-algebra C*(G, Γ) exists. Moreover, it is completely clear from his proof that C*(G, Γ) ≅ C*(L¹(G, Γ)), since the bound he uses for the universal norm is actually the L¹-norm. Our methods can also recover this result, because in fact any double coset ΓgΓ generates a finite co-hereditary set. We will actually prove that the co-hereditary set generated by ΓgΓ consists only of ΓgΓ and S(ΓgΓ), and is therefore finite. In other words, we will prove that Sⁿ(ΓgΓ) ⊆ S(ΓgΓ) for every n ∈ N. It suffices to prove that S²(ΓgΓ) ⊆ S(ΓgΓ). The elements of S²(ΓgΓ) are of the form Γ[g, γ_1, γ_2]Γ, where γ_1, γ_2 ∈ Γ, by Proposition 4.1. Since Γ is a protonormal subgroup there exist θ, ω ∈ Γ such that γ_1 g γ_2⁻¹ g⁻¹ = g θ g⁻¹ ω. Thus we get that Γ[g, γ_1, γ_2]Γ = Γg⁻¹ωγ_1⁻¹gΓ, and by Remark 1.8, Γg⁻¹ωγ_1⁻¹gΓ ∈ S(ΓgΓ). This finishes the proof.
Γ is Subnormal in G
Hecke pairs (G, Γ) in which Γ is normal in a normal subgroup of G have been widely studied in the literature, in particular when G is a semi-direct product ([5], [13], [12], [9]), and it is known that in this case H(G, Γ) has an enveloping C*-algebra and moreover C*(G, Γ) ≅ C*(L¹(G, Γ)) (see, for example, [9, Theorem 6.13]).
We are now going to prove that when Γ is a subnormal subgroup of G, C*(G, Γ) exists and C*(G, Γ) ≅ C*(L¹(G, Γ)). Recall that Γ is subnormal in G if there are subgroups H_0, H_1, . . . , H_n forming a chain as reconstructed below. We claim that when Γ is subnormal in G, all double cosets ΓsΓ generate finite co-hereditary sets. To see this we will use Corollary 4.3. Let s ∈ G and {γ_k}_{k∈N} ⊆ Γ. We will prove by induction that [s, γ_1, . . . , γ_k] ∈ H_k for 1 ≤ k ≤ n. For k = 1 this follows from the fact that H_1 is normal in H_0 = G and contains Γ. Now, let us prove that k ⇒ k + 1. For simplicity, let us write x_k := [s, γ_1, . . . , γ_k], which by the induction hypothesis is an element of H_k; since H_{k+1} is normal in H_k and contains Γ, we have x_{k+1} = [x_k, γ_{k+1}] ∈ H_{k+1}. Thus, for any sequence {γ_k}_{k∈N} we have [s, γ_1, . . . , γ_n] ∈ Γ, which by Corollary 4.3 b) implies that ΓsΓ generates a finite co-hereditary set. Since this is true for all double cosets, Theorem 3.1 tells us that C*(G, Γ) exists and C*(G, Γ) ≅ C*(L¹(G, Γ)).
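The elided subnormal series display, inferred from the induction just given (which starts at H_0 = G and ends at H_n = Γ), presumably reads

$$\Gamma = H_n \trianglelefteq H_{n-1} \trianglelefteq \cdots \trianglelefteq H_1 \trianglelefteq H_0 = G.$$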
Remark 5.2. It is known that any subgroup Γ of a nilpotent group G is necessarily a subnormal subgroup (see, for example, [11, §62]). Hence already from this we can conclude that the Hecke algebra of any Hecke pair (G, Γ), with G a nilpotent group, has an enveloping C*-algebra (which coincides with C*(L¹(G, Γ))). In fact, this holds for any group G whose subgroups are all subnormal. Groups with this property form a class that strictly contains the class of nilpotent groups ([20, Theorem 6.11]). We will prove similar results for other classes of groups which strictly generalize the class of nilpotent groups.

Example 5.3. Let G be the group of n × n upper triangular matrices with 1's on the diagonal and with entries in Q, and let Γ be the subgroup of those matrices with entries in Z. It can be checked, although we will not do so here, that (G, Γ) forms a Hecke pair. The subgroup Γ is subnormal, via the series in which H_k is the subgroup of matrices in G whose first k − 1 upper diagonals have entries in Z. The group G is nilpotent and its 3 × 3 version is the rational Heisenberg group discussed in [9, Example 11.7].
Γ is Ascendant in G
Recall that Γ is said to be ascendant in G if there is a normal series {H_i}_{i∈N_0}, starting at Γ, that ends in the group G in the sense that ⋃_{i∈N_0} H_i = G. Of course, the series is finite precisely when Γ is subnormal in G.
We will now prove that if Γ is ascendant in G, then every double coset generates a finite co-hereditary set, therefore implying that C * (G, Γ) exists and is isomorphic to C * (L 1 (G, Γ)).
Let ΓsΓ be any double coset in H(G, Γ), with representative s ∈ G. Since Γ is ascendant, s must belong to one of the subgroups H n , with n ∈ N 0 . Of course, Γ is a subnormal subgroup of H n , and as we saw in the subnormal case, this implies that the co-hereditary set generated by ΓsΓ is necessarily finite.
Γ has Finitely Many Conjugates in G
Suppose Γ has finitely many conjugates in G, or equivalently, the normalizer of Γ has finite index in G. Then C*(G, Γ) exists and C*(G, Γ) ≅ C*(L¹(G, Γ)), because any double coset generates a finite co-hereditary set. To see this, let ΓgΓ be a double coset and let g_1⁻¹Γg_1, . . . , g_n⁻¹Γg_n be the conjugates of Γ. With the possible exception of ΓgΓ itself, any element in the co-hereditary set generated by ΓgΓ is a successor of another element. Hence, by Remark 1.8, any such element is of the form Γx⁻¹γxΓ, where x ∈ G and γ ∈ Γ. We can then write x⁻¹γx = g_i⁻¹θg_i for some i ∈ {1, . . . , n} and θ ∈ Γ, and therefore Γx⁻¹γxΓ = Γg_i⁻¹θg_iΓ. Thus, apart possibly from ΓgΓ, all elements in the co-hereditary set generated by ΓgΓ are successors of some Γg_iΓ, with 1 ≤ i ≤ n, by Remark 1.8 again. Thus, this co-hereditary set must be finite.
G is Finite-by-Nilpotent
Recall that a group G is called nilpotent if its lower central series stabilizes at {e} after finitely many steps, i.e. if the normal series {G_k} defined inductively below is such that G_k = {e} for some k ∈ N.
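The elided inductive definition is presumably the standard lower central series:

$$G_1 := G, \qquad G_{k+1} := [G_k, G] \quad (k \in \mathbb{N}).$$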
Recall also that a group G is said to be finite-by-nilpotent if G has a finite normal subgroup K such that G/K is nilpotent, i.e. if G is an extension of a finite group by a nilpotent group. In particular, all nilpotent groups are finite-by-nilpotent (taking K = {e}). Moreover, the class of finite-by-nilpotent groups is strictly larger than the class of nilpotent groups, as every finite group belongs to the former class but not necessarily to the latter.
Finite-by-nilpotent groups also admit a nice description in terms of their lower central series: it is known that finite-by-nilpotent groups are precisely those whose lower central series stabilizes at a finite group.
We are now going to show that for any Hecke pair (G, Γ) where G is finite-bynilpotent, every double coset ΓsΓ generates a finite co-hereditary set, implying that C * (G, Γ) exists and coincides with C * (L 1 (G, Γ)).
Let s ∈ G and {γ k } k∈N ⊆ Γ. It is clear that [s, γ 1 , . . . , γ k ] ∈ G k . Since the series {G k } eventually stabilizes at a finite subgroup, it follows directly from Corollary 4.3 a) that ΓsΓ generates a finite co-hereditary set. This concludes the proof.
G is Hypercentral
Recall that a group G is said to be a hypercentral group (also called a ZA-group) if its upper central series, possibly continued transfinitely, stabilizes at the whole group G. For a rigorous definition of this concept, we refer the reader to [19, section 12.2] for example. Another characterization of hypercentral groups, which is the one we will use, is given by the following result: Theorem 5.4 (Lemma, page 219, §63, [11]). A group G is hypercentral if and only if it satisfies the following property: for any s ∈ G and any sequence {x n } n∈N ⊂ G there is a k ∈ N such that [s, x 1 , . . . , x k ] = e .
We will now prove that if (G, Γ) is a Hecke pair with G a hypercentral group, then every double coset ΓsΓ generates a finite co-hereditary set, so that C*(G, Γ) exists and C*(G, Γ) ≅ C*(L¹(G, Γ)). This is a direct application of Corollary 4.3 a), taking F = {e}, given the characterization of hypercentral groups in Theorem 5.4.
Remark 5.5. The class of hypercentral groups also strictly contains the class of nilpotent groups (see Example 5.6), and moreover it is known that every hypercentral group is locally nilpotent (but not vice-versa). Thus, we have found another class of groups G, satisfying a nilpotent-type property, for which the Hecke algebra H(G, Γ) of any Hecke pair (G, Γ) has an enveloping C * -algebra (which coincides with C * (L 1 (G, Γ))).
Example 5.6. Let Z_{2^∞} be the 2-quasicyclic group, i.e. the group of all the 2ⁿ-th roots of unity for all n ∈ N. This group is the Pontryagin dual of the group of 2-adic integers. The group Z/2Z acts on Z_{2^∞} by mapping an element to its inverse. The generalized dihedral group Z_{2^∞} ⋊ Z/2Z is a group which is hypercentral, but not nilpotent.
G is an F C-group and Γ is Finite
Recall that a group G is said to be FC if every element s has finitely many conjugates, i.e. the set C_s := {t⁻¹st : t ∈ G} is finite. It can be seen that every subgroup Γ ⊆ G of an FC-group is a Hecke subgroup; as the computation reconstructed below shows, every double coset is contained in a finite union of cosets.
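The elided computation is presumably the standard one (a reconstruction): for s ∈ G,

$$\Gamma s \Gamma \;=\; \bigcup_{\gamma \in \Gamma} \Gamma s \gamma \;=\; \bigcup_{\gamma \in \Gamma} \Gamma (\gamma^{-1} s \gamma) \;\subseteq\; \bigcup_{c \in C_s} \Gamma c,$$

using sγ = γ(γ⁻¹sγ), and the last union is finite because C_s is finite.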
FC-groups are a generalization of both finite and abelian groups, and share many common properties with these classes. They were extensively studied by B. H. Neumann and others, starting with the article [14]. The analogous class of groups in the locally compact setting (groups in which the conjugacy class of any element has compact closure) is usually denoted by FC⁻ and has also been widely studied, since it is a direct generalization of both compact and abelian locally compact groups (see [18, Chapter 12] for an account).
When G is an FC-group and Γ ⊆ G is a finite subgroup, we can prove that every double coset ΓsΓ generates a finite co-hereditary set, so that C*(G, Γ) exists and C*(G, Γ) ≅ C*(L¹(G, Γ)). To see this, let s ∈ G and {γ_k}_{k∈N} ⊆ Γ. Also, let Γ = {θ_1, . . . , θ_n} and for each 1 ≤ i ≤ n let us denote by S_i ⊆ N the set S_i := {k ∈ N : γ_k = θ_i}. Of course, the sets S_i are mutually disjoint and their union is N. Since there are only finitely many conjugates of θ_i⁻¹, it follows that the set {Γ[s, γ_1, . . . , γ_{k−1}, θ_i]Γ : k ∈ S_i} is finite, and therefore {Γ[s, γ_1, . . . , γ_k]Γ : k ∈ N} is finite. Thus, by Theorem 4.2, the co-hereditary set generated by ΓsΓ is finite.
G is Locally-Nilpotent and Γ is Finite
Recall that a group G is said to be locally-nilpotent if every finitely generated subgroup of G is nilpotent.
Let G be a locally-nilpotent group and Γ a finite subgroup. The pair (G, Γ) is automatically a Hecke pair since Γ is finite. We are now going to prove that each double coset ΓsΓ generates a finite co-hereditary set, implying that C*(G, Γ) exists and coincides with C*(L¹(G, Γ)). To see this, let ⟨s, Γ⟩ ⊆ G be the subgroup generated by s and Γ. This subgroup is finitely generated, hence nilpotent. Thus, as we have proven above, ΓsΓ ∈ H(⟨s, Γ⟩, Γ) ⊆ H(G, Γ) generates a finite co-hereditary set.
G is Locally-Finite and Γ is Finite
Recall that a group G is said to be locally-finite if every finitely generated subgroup of G is finite.
Let G be a locally-finite group and Γ a finite subgroup. The pair (G, Γ) is automatically a Hecke pair since Γ is finite. We are now going to prove that each double coset ΓsΓ generates a finite co-hereditary set, implying that C*(G, Γ) exists and coincides with C*(L¹(G, Γ)). To see this, let ⟨s, Γ⟩ ⊆ G be the subgroup generated by s and Γ. This subgroup is finitely generated, hence finite. Thus, as we have proven above, ΓsΓ ∈ H(⟨s, Γ⟩, Γ) ⊆ H(G, Γ) generates a finite co-hereditary set.
An interesting feature of Hecke pairs arising from locally finite groups is that they give rise to AF Hecke algebras. In that regard we have the following result: Proposition 5.7. Let (G, Γ) be a Hecke pair where G is countable and Γ is a finite subgroup. Then H(G, Γ) is an AF * -algebra if and only if G is locally finite.
Proof: (⇐=) Assume G is locally finite. Since G is assumed countable, let us fix an enumeration of its elements G = {g_1, g_2, . . . } and for each n ∈ N let us define H_n as the subgroup H_n := ⟨Γ, g_1, . . . , g_n⟩. It is clear that {H_n}_{n∈N} forms an increasing sequence of finitely generated subgroups such that ⋃_{n∈N} H_n = G. Moreover, since G is locally finite, each H_n is a finite group which contains Γ. Hence, we have a sequence of finite dimensional Hecke algebras {H(H_n, Γ)}_{n∈N} ⊆ H(G, Γ) satisfying ⋃_{n∈N} H(H_n, Γ) = H(G, Γ). Thus, H(G, Γ) is an AF *-algebra.
(=⇒) Assume that H(G, Γ) is an AF * -algebra. Then any element f ∈ H(G, Γ) lies in a finite dimensional * -subalgebra, and is therefore algebraic over C. It then follows from [10, Proposition 2.6] that G is locally finite.
Example 5.8. Similarly to Example 5.6, let p be a prime number and Z p ∞ be the p-quasicyclic group (which is the Pontryagin dual of the group of p-adic integers). The generalized dihedral group G := Z p ∞ ⋊ Z/2Z , is locally finite (but not locally nilpotent unless p = 2).
Reduction Techniques
Suppose we have Γ ⊆ N for some normal subgroup N of G, i.e. Γ ⊆ N ⊴ G.
We notice that, since N is normal in G and Γ ⊆ N, we have s⁻¹γ_i⁻¹s ∈ N, i.e. all elements Γs⁻¹γ_i⁻¹sΓ ∈ H(N, Γ). Also, by restriction, π gives rise to a *-representation of H(N, Γ). Hence, we must have the bound (10), where ‖·‖_{u,N} is the universal norm in H(N, Γ). Since the inequality in (10) holds for every *-representation π, we see that ΓsΓ has a bounded universal norm. Thus, H(G, Γ) has an enveloping C*-algebra. A similar computation as the above, but with pre-*-representations instead, shows that if H(N, Γ) is a BG*-algebra, then so is H(G, Γ).
"year": 2012,
"sha1": "2e343a5866578b0c671563fb31e39d9b324c53c8",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.jfa.2013.03.011",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "2e343a5866578b0c671563fb31e39d9b324c53c8",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Green synthesis of nanohydroxyapatite through Elaeagnus angustifolia L. extract and evaluating its anti-tumor properties in MCF-7 breast cancer cell line
Background One of the most common types of cancer in women is breast cancer. There are numerous natural plant-based products which exert anti-tumoral effects, including Elaeagnus angustifolia (EA). It modulates the cell-cycle process and heat-shock protein expression, has anti-proliferative properties, induces apoptosis, blocks angiogenesis, and inhibits cell invasion. The current study aimed to synthesize and evaluate the anticancer effects of hydroalcoholic EA extract (HEAE), nanohydroxyapatite (nHAp) and nHAp synthesized through EA (nHA-EA) in the MCF-7 breast cancer cell line. Methods In the present study, HEAE preparation and green synthesis of nHA-EA were carried out, and the phase composition, functional groups, and crystalline phase of nHA-EA and nHAp were determined using Fourier-transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD). The characteristics of the synthesized nanoparticles, including structural and morphological parameters, were investigated using scanning electron microscopy (SEM) and transmission electron microscopy (TEM) techniques. Then, by using the MTT assay (dimethylthiazoldiphenyltetrazolium), the in vitro cytotoxicity and half-maximal inhibitory concentration (IC50) of the EA extract, nHAp, and nHA-EA in the MCF-7 breast cancer cell line were evaluated. Next, we assessed the expression of the apoptosis-related genes Bax, Bcl2 and p53 using quantitative reverse-transcriptase polymerase chain reaction (qRT-PCR) and the migration of MCF-7 cells by scratch assay. Results The FTIR results demonstrated the formation of nHAp and its interaction with HEAE during the synthesis process. The XRD results showed similar patterns for nHA-EA and nHAp and confirmed the purity of the synthesized nanomaterials. The average IC50 of HEAE, nHAp, and nHA-EA after treatment of cancer cells for 24 h was 400 µg/mL, 200 µg/mL, and 100 µg/mL, respectively. Our results revealed that nHA-EA significantly reduced the migration and invasion of the MCF-7 cells in comparison to nHAp and the EA extract. Moreover, the levels of Bax/Bcl2 and p53 were significantly higher in the nHA-EA group in comparison to the EA extract and nHAp groups. Conclusion Taken together, our results demonstrated that the bioactive constituents of the EA medicinal plant, in the form of nHA-EA particles, can effectively exert a potential anticancer and chemopreventive effect against breast cancer growth, and nHA-EA can be proposed as a promising beneficial candidate for BC therapy. However, further investigations are required to discover which bioactive compounds are responsible for the chemopreventive effect of this extract.
Introduction
One of the most common types of cancer in women is breast cancer (BC) [1], and it is considered a leading cause of death in patients with cancer all around the world [2]. Despite the recent advances in BC diagnostic and therapeutic methods, due to the high rate of mortality and chemoresistance, the treatment of BC patients is still a matter of debate [3].
There are numerous natural plant-based products and extracts which exert anti-tumoral effects and could be useful for the design and development of new anti-cancer drugs [4]. Elaeagnus angustifolia (EA) (also called oleaster, Russian olive, wild olive, or silver berry) is a tree that belongs to the Elaeagnaceae family, and its fruits are characterized by small size and reddish-brown color. Different types of EA commonly grow in Asia, Europe, and some regions of North America [5,6]. The fruit of EA has been utilized as a medicinal plant, acting as a potential therapeutic agent in numerous disorders through modulation of the immune system and oxidative stress balance [5]. It also exerts anti-cancer, antibacterial, and antifungal effects and has gastro- and hepatoprotective efficacy [7][8][9][10]. It has been shown that EA fruit is a rich source of different vitamins, carbohydrates, proteins, and minerals [6,11,12]. Moreover, EA fruits are a good source of beneficial compounds such as coumarins, tannins, phenolic acids, and flavonoids [5,13,14]. Different studies show that EA exerts anticancer effects via modulating the cell-cycle process and heat-shock protein expression, anti-proliferative properties, apoptosis induction, blocking of angiogenesis, and cell invasion inhibition [15][16][17].
Nanomedicine has drawn a lot of interest because of its various and effective utilization in medicine, particularly in drug delivery [18]. Due to the high dispersion stability of nanoparticles (NPs), they have attracted considerable research interest in biomedical applications and drug delivery systems [19,20]. The biosynthesis of NPs, using natural sources such as plants, without the utilization of any hazardous substances, reduces potential health and environmental threats [21,22]. Moreover, plant extract allows the preparation of NPs with controlled and defined shape and size [23].
Hydroxyapatite (HAp, Ca10(PO4)6(OH)2) has various biomedical applications in many areas of medicine, due to its high bioactivity and biocompatibility. Depending on the method applied for HAp preparation, various mechanical properties, bioactivity levels, and dissolution behaviors in the biological environment are expected [24]. It has been shown that HAp exerts anti-cancer effects via increasing drug release and consequently greater growth inhibition properties. Taken together, HAp enhances the chemotherapeutic efficacy of various agents including cisplatin, methotrexate, and adriamycin. This could be because HAp mediates drug penetration into the tumor and improves drug delivery [25]. It has been shown that the green synthesis of NPs has opened up new possibilities in material development, and there is research conducted on the green synthesis of HAp [26].
Different flavonoids, including catechin, epicatechin, gallocatechin, epigallocatechin, kaempferol, quercetin, luteolin, isorhamnetin, and isorhamnetin-3-O-β-D-galactopyranoside, have been isolated from EA [27]. Furthermore, various phenolic components such as 4-hydroxybenzoic acid and caffeic acid have also been found in EA [28]. It has been shown that, since flavonoids contain phenolic hydroxyl groups, they may play a role in metal-chelating efficiency, lowering the lipid peroxidation process, and increasing antioxidant and free-radical scavenging capacity [29,30]. According to these findings, in the current study, nanohydroxyapatite (nHAp) was synthesized with the EA extract, which shows reducing/stabilizing effects as well as capping properties. This synthesis method is inexpensive and simple, has long-lasting stability, and is appropriate for macro-scale procedures [7,[31][32][33].
Therefore, the current study aimed to synthesize and evaluate the anticancer effects of hydroalcoholic EA extract (HEAE), nHAp, and nHAp synthesized through EA (nHA-EA) in the MCF-7 breast cancer cell line. The synthesized nanoparticles were characterized, and their structural, morphological, and optical properties were determined via different analytical tools, including X-ray diffraction (XRD), Fourier-transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), and transmission electron microscopy (TEM).
Materials & methods
In the present study, all chemicals and reagents used to synthesize nHAp and nHA-EA were of analytical grade. Calcium nitrate tetrahydrate [Ca(NO3)2·4H2O], diammonium hydrogen phosphate [(NH4)2HPO4], and sodium hydroxide (NaOH) were purchased from Sigma Aldrich. All chemical solutions were prepared using deionized water.
Plant material and extract preparation
The Russian olive was obtained from South Khorasan, Birjand, Iran. It was identified as Elaeagnus angustifolia L. by Dr. F. Askari (Assistant Professor of Traditional Pharmacy, School of Pharmacy). The voucher specimen was deposited in the Herbarium Center of the School of Pharmacy, Birjand University of Medical Sciences (221).
Hydroalcoholic EA extract preparation & green synthesis of nHA-EA
We have discussed the synthesis of nHAp and nHA-EA in previous studies [29,34]. Briefly, EA pulps were obtained from fresh fruits and HEAE was prepared using the maceration procedure as follows: 40 g of dried EA pulp powder was mixed with 320 mL methanol and 80 mL distilled water. Finally, the extracts were filtered and concentrated using a rotary vacuum evaporator, and HEAE was stored at 4 °C.
nHA-EAs were synthesized using the sol-gel technique, using calcium nitrate tetrahydrate and diammonium hydrogen phosphate (molar ratio: 1.67). HEAE (10 mL, 10% v/v), diammonium hydrogen phosphate (5 mL, 0.3 M), and calcium nitrate tetrahydrate (15 mL, 0.3 M) were dissolved in deionized water. The calcium nitrate tetrahydrate solution was introduced to the HEAE solution, then stirred upon slow warming to 50 °C for 0.5 h. Then, the diammonium hydrogen phosphate solution was added to the above mixed solution at a flow rate of 1 mL/min. NaOH solution was used for pH adjustment (pH = 11). The suspension was agitated at 50 °C for 90 min (Fig. 1). The resulting solution was centrifuged and then gently rinsed with deionized water and ethanol (nHA-EA). Finally, an nHAp solution without HEAE was synthesized under the same conditions, for comparison.
Characterization
The phase composition, functional groups, and crystalline phase of nHA-EA and nHAp were determined using FTIR and XRD. The characteristics of the synthesized nanoparticles, including structural and morphological parameters, were investigated using SEM and TEM techniques.
Cell culture
The human MCF-7 cells were purchased from the Pasteur Institute (Tehran, Iran). The cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% heat-inactivated fetal bovine serum (FBS) and 1% streptomycin/penicillin and maintained at 37 °C in a 5% CO2 atmosphere.
MTT-assay
The MTT assay (dimethylthiazoldiphenyltetrazolium) was used to assess the in vitro cytotoxicity and half-maximal inhibitory concentration (IC50) of the EA extract, nHAp, and nHA-EA in the MCF-7 breast cancer cell line. Cells were treated with different concentrations of EA extract, nHAp, and nHA-EA for 24 h. Then, the medium was removed from the wells, the MTT substrate (Sigma, Germany) was added to each well, and the plate was incubated for 4 h. Then, the resulting crystals were dissolved in dimethyl sulfoxide (DMSO) and incubated for 1 h. Finally, the absorbance was measured at 540 nm using a microplate reader. The experiments were performed in triplicate.
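The viability and IC50 computations described above are standard; the following Python sketch illustrates one common way to carry them out. It is illustrative only: the absorbance values, the concentrations, and the choice of a four-parameter logistic fit are assumptions, not data or methods taken from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical background-corrected absorbances at 540 nm (triplicate means)
concentrations = np.array([25, 50, 100, 200, 400, 800])  # µg/mL (assumed)
abs_treated = np.array([0.92, 0.85, 0.71, 0.48, 0.30, 0.15])
abs_control = 1.00  # untreated wells

viability = 100 * abs_treated / abs_control  # percent viability vs. control

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (c / ic50) ** hill)

params, _ = curve_fit(four_pl, concentrations, viability, p0=[0, 100, 100, 1])
print(f"Estimated IC50: {params[2]:.1f} µg/mL")
```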
Scratch assay
Briefly, MCF-7 cells were seeded in a 12-well plate at a concentration of 4 × 10^5 cells/well. A narrow scratch was then created with a 100 μl sterile pipette tip through the monolayer of adherent cells growing on the bottom of the cell culture plate, 24 h after cell seeding. Then, cell debris was removed. Plates were treated and, after incubation, were imaged at different time points and analyzed by using Digimizer 5.4.9 software.
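Wound-closure quantification from scratch-assay images typically reduces to comparing the cell-free area at 0 h and at a later time point. The sketch below shows the arithmetic only; the area values are hypothetical, and the study itself used Digimizer for the measurements.

```python
# Hypothetical wound areas (arbitrary units) measured from images
area_0h = 1.00           # scratch area immediately after wounding
area_24h_treated = 0.78  # area remaining after 24 h of treatment
area_24h_control = 0.35  # area remaining in untreated control

def percent_closure(a0, a_t):
    """Percentage of the initial wound area that has closed."""
    return 100 * (a0 - a_t) / a0

print(f"Treated: {percent_closure(area_0h, area_24h_treated):.1f}% closure")
print(f"Control: {percent_closure(area_0h, area_24h_control):.1f}% closure")
```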
Quantitative reverse-transcriptase polymerase-chain-reaction (qRT-PCR)
In this study, a quantitative RT-PCR assay was used to measure transcript levels of different genes. First, total RNA was extracted from the cells using the Pars Tous kit (Tehran, Iran) according to the manufacturer's instructions. Then, RNAs were converted to complementary DNA (cDNA) using a commercial cDNA synthesis kit (Parstous kit). Quantitative RT-PCR was performed with specific forward and reverse primers for the target genes (Table 1). The cDNA amplification was performed by using the StepOne instrument (Applied Biosystems, Foster City, CA). The gene expression levels were normalized to a housekeeping control gene, glyceraldehyde-3-phosphate dehydrogenase (GAPDH).
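The paper normalizes target-gene expression to GAPDH; assuming the widely used 2^(−ΔΔCt) method (the exact quantification model is not stated above), the calculation looks like the following sketch. All Ct values are hypothetical.

```python
# Hypothetical mean Ct values from qRT-PCR
ct = {
    "treated": {"Bax": 24.1, "GAPDH": 18.0},
    "control": {"Bax": 26.0, "GAPDH": 18.2},
}

def fold_change(target, sample="treated", reference="control"):
    """Relative expression by the 2^(-ddCt) method, normalized to GAPDH."""
    d_ct_sample = ct[sample][target] - ct[sample]["GAPDH"]
    d_ct_ref = ct[reference][target] - ct[reference]["GAPDH"]
    return 2 ** -(d_ct_sample - d_ct_ref)

print(f"Bax fold change (treated vs control): {fold_change('Bax'):.2f}")
```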
Statistical analysis
All values are expressed as mean ± standard error of the mean. GraphPad Prism (version 9) was used to conduct the analyses. Statistical comparisons were determined using Student's t-test or one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. The differences were considered to be statistically significant at P < 0.05.
FTIR
The FTIR technique in the middle infrared spectral region (400-4000 cm−1) was used to evaluate the functional groups of HEAE, nHAp and nHA-EA (Fig. 2). In the HEAE spectrum, the broad peak at 3367 cm−1 is assigned to O-H vibration, and the peaks at 1060-1283 cm−1 are the characteristic C-O-C peaks. In nHAp, the intense peaks at around 564-601 and 1030 cm−1 indicated the PO4 3− bands. Moreover, the O-H vibrations of the water molecule were found at around 3000 to 3600 cm−1. In the nHA-EA spectrum, the broad peak at around 3443 cm−1 shows water absorption in the synthesized nanoparticles. The nHA-EA spectrum showed distinct bands of PO4 3− at 564, 604, and 1029 cm−1. The extra peak at 870 cm−1 showed the presence of C=O stretching of carboxylic acid, and clearly suggests the interaction between the carboxyl group and nHAp. These FTIR findings clearly demonstrate the formation of nHAp and its interaction with HEAE during the synthesis process.
Cell proliferation
In the current study, an MTT assay was performed to evaluate the in vitro cytotoxicity and IC50 of the EA extract, nHAp, and nHA-EA in the MCF-7 breast cancer cell line. Our results indicated that the average IC50 of the EA extract, nHAp, and nHA-EA after treatment of cancer cells for 24 h was 400 µg/mL, 200 µg/mL, and 100 µg/mL, respectively (Fig. 6).
Scratch assay
Regarding the potential effects of EA nanoparticles on breast cancer cell migration, in the present study we examined the inhibitory effects of nHA-EA on breast cancer cell invasiveness using the scratch assay. Our results revealed that nHA-EA significantly reduced the migration and invasion of the MCF-7 cells, in comparison to nHAp and the EA extract (Fig. 7).
qRT-PCR
To determine the anti-cancer effects of the EA extract, nHAp, and a combination of EA extract and nHAp in the MCF-7 cells, we evaluated the expression levels of Bax/Bcl2 and p53 as key markers in the carcinogenesis process. Our results revealed that the levels of Bax/Bcl2 and p53 were significantly higher in the nHA-EA group in comparison to the EA extract and nHAp alone (Fig. 8).
Discussion
In the past few years, there have been reports of nanoscale HAp that exhibits superior biocompatibility compared to regular HAp. Furthermore, these nanoscale HAp particles have shown anti-cancer properties and play a crucial role in controlling the behavior of breast cancer cells [35]. There are multiple methods for synthesizing nHAp, such as co-precipitation, hydrothermal, sol-gel, and others. Additionally, the utilization of metal nanoparticles extracted from plants is considered environmentally friendly. As a result, the use of natural materials in the production of nHAp has captured the interest of numerous researchers, contributing to the development of a significant field of study within the realm of nanotechnology science [29].
A wide range of natural plant-derived products and extracts possess anti-tumoral properties, presenting potential value in the creation and advancement of novel anti-cancer nanoparticles. There is a growing body of evidence showing that natural products have an important role in the treatment of a wide variety of human diseases, including various types of cancer [30,36]. EA, as a herbal medicinal plant with different properties, has been used extensively for a long time to treat different disorders [13,33]. There are various bioactive constituents in EA, including phenolic acids and flavonoids, which play a critical role in the inhibition of cancer development and progression [13,27]. It has been shown that bioactive compounds of EA could modulate different biological processes of cells including cell cycle progression, apoptosis and DNA repair [28,37]. In recent years, EA has received considerable attention for cancer therapy because of its promising effectiveness and potential therapeutic effects as a chemopreventive and antitumor natural product. It has been shown that flavonoids interact with different signaling pathways that are correlated with carcinogenesis, including cellular proliferation, apoptosis, angiogenesis, and metastasis. Moreover, apigenin, a phytoestrogen aglycone, has been shown to modulate apoptosis, the cell cycle, and invasion in malignant cells, alone or in combination with other chemotherapeutic agents [38].

Fig. 2 FTIR spectra of HEAE, nHAp and nHA-EA
In this study, nHAp was synthesized with EA extract, which could act as a reducing, stabilizing, and capping agent. This synthesis is easy, cheap, durable, and suitable for large-scale processing. To determine the anticancer effects of nHA-EA, MCF-7 cells were treated with nHA-EA. According to the results of this study, nHA-EA had greater growth-inhibitory properties on the MCF-7 breast cancer cell line than the other groups. Our results revealed that nHA-EA decreased the viability and proliferation of MCF-7 cells and significantly increased the expression levels of the p53 and Bax/Bcl2 genes.
Flavonoids have been demonstrated to increase the expression of p53 and induce cell cycle arrest, specifically in the G2/M phase, in cancer cells. Additionally, they are recognized for their ability to inhibit the expression of Ras proteins and modulate heat-shock proteins in different types of cancers, particularly in leukemia and colorectal cancer.
The flavonoids in the EA extract surround the nHAp like a cap. As a result, they may increase the anti-cancer properties of nHAp. Quercetin, a prominent flavonoid found in EA, is a significant anti-proliferative compound. In addition, it plays a role in enhancing tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) activity by increasing the expression of Bax and suppressing the activity of the Bcl2 protein [39].
These results are in accordance with previous studies showing the suppressive effects of EA on different types of cancer, including glioblastoma and breast cancer [38,40]. However, the results of the current study are not consistent with another investigation, which showed that EA extract did not exert any significant antiproliferative activity against cancerous cells; this may be due to the different concentration of the extract and its characteristics [41].
Uncontrolled cell division and growth and escape from apoptosis are well-known hallmarks of malignant cells and important mechanisms involved in cancer treatment resistance; therefore, targeting specific deregulated cancer pathways could be an important way of providing novel therapeutic strategies [42,43].
Moreover, we revealed that the hydroalcoholic extract of EA could decrease the migration of MCF-7 cells and induce apoptosis in this cell line. Our results are in agreement with other studies that confirmed EA exerts its anti-tumor impacts via upregulation of p53 and Bax/Bcl2. Previous studies revealed that these constituents could regulate various biological pathways including apoptosis, DNA repair, inflammatory processes and cell cycle progression [13,33,44]. p53 regulates the expression of Bcl2, an anti-apoptotic factor, thereby triggering the apoptosis process [45]. Moreover, there are studies showing that EA extract induces apoptosis via different mechanisms, including inhibition of the human epidermal growth factor receptor 2 gene (HER2) and Jun N-terminal kinases (JNK) [39]. Thus, these results support a role for EA extract in the Bcl2-associated intrinsic pathway of apoptosis via p53 upregulation.
It is well known that invasion and migration, which are key underlying mechanisms of metastasis, play an important role in breast cancer disease. Metastasis is a process in which malignant cells leave the primary tissue, disseminate, circulate, and induce secondary tumors at distant sites [46].
In the present study, analysis of the scratch assay revealed that nHA-EA significantly inhibited migration and invasion in the MCF-7 cell line. The anti-migratory and anti-metastatic potential of the EA herbal plant has been confirmed in a study conducted by Jabeen et al., showing that EA extract inhibited the invasiveness of HER2-positive breast cancer cell lines. They showed that this effect is associated with JNK signaling suppression and mesenchymal-epithelial transition (MET) inhibition. They also reported that suppression of the JNK signaling pathway is associated with induction of the apoptosis process [39]. In another study, conducted by Saleh et al., it was found that EA extract suppresses the invasiveness and migration of oral carcinoma cell lines. They reported that this extract inhibits cell invasion via increasing the expression of E-cadherin, an important modulator of the MET process. They also indicated that this reduction in the migratory properties of oral carcinoma cell lines is mediated via inhibition of the extracellular signal-regulated kinases 1 and 2 (ERK1/ERK2) pathway [47].
Conclusion
Taken together, our results demonstrated that the bioactive constituents of the EA medicinal plant, in the form of nHA-EA particles, can effectively exert a potential anticancer and chemopreventive effect against breast cancer growth, and nHA-EA can be proposed as a promising beneficial candidate for BC therapy. Moreover, we showed that nHA-EA triggers cell apoptosis in MCF-7 cells via increasing pro-apoptotic gene expression. Additionally, our findings indicate the role of nHA-EA as a natural product in targeting p53 as a key marker in BC treatment. However, further investigations are required to discover which bioactive compounds are responsible for the chemopreventive effect of this extract.
Fig. 1 A schematic representation of the nHA-EA preparation process

Fig. 6 MTT assay images show cytotoxic effects and IC50 values of EA, nHAp and nHA-EA

Fig. 7 The effects of EA, nHAp and nHA-EA on the migration of the MCF-7 cells after 24 h

Table 1 Primers used for real-time polymerase chain reaction
"year": 2023,
"sha1": "e98eb5e7a277b9945a4f31b83c4422fc96fa0557",
"oa_license": "CCBY",
"oa_url": "https://bmccomplementmedtherapies.biomedcentral.com/counter/pdf/10.1186/s12906-023-04116-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e10a8c01e9725dc7a04b911895d38c9662992631",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Closing the Cycle: How South Australia and Asia Can Benefit from Re-Inventing Used Nuclear Fuel Management
A large and growing market exists for the management of used nuclear fuel. Urgent need for service lies in Asia, also the region of the fastest growth in fossil fuel consumption. A logical potential provider of this service is acknowledged to be Australia. We describe and assess a service combining approved multinational storage with an advanced fuel reconditioning facility and commercialisation of advanced nuclear reactor technologies. We estimate that this project has the potential to deliver a net present value of (2015) AU$30.9 billion. This economic finding compares favourably with recent assessment based on deep geological repository. Providing service for used nuclear fuel and commercialisation of next generation nuclear technology would catalyse the expansion of nuclear technology for energy requirements across Asia and beyond, aiding efforts to combat climate change. Pathways based on leveraging advanced nuclear technologies are therefore worthy of consideration in the development of policy in this area.
Introduction: Addressing a Need
Humanity faces a daunting challenge this century: to rapidly phase out the use of fossil fuels to mitigate climate change whilst simultaneously delivering a secure, long-term energy supply for modern society. Nuclear fission has an enormous and proven potential to supply reliable baseload electricity and displace fossil fuel power plants, at a deployment rate in some nations commensurate with the demands for clean energy this century (Qvist & Brook 2015). The fundamental advantages of nuclear power (a compact and near-zero-carbon energy source with energy dense fuel) remain critically important in many Asian markets, which are experiencing continued growth in population and electricity demand (Nuclear Energy Agency 2012; International Atomic Energy Agency 2014). One of the most enduring obstacles to accelerated expansion of nuclear electricity generation has been the uncertainty surrounding the management of used nuclear fuel. There is approximately 270,000 tonnes of heavy metal (tHM) of used nuclear fuel in storage worldwide (World Nuclear Association 2015). In addition, approximately 12,000 tHM of used nuclear fuel is produced each year (World Nuclear Association 2015). Recent estimates suggest this will exceed 1 million tHM by 2090 (Cronshaw 2014; Cook et al. 2016).
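As a rough consistency check of the figures just quoted (assuming, purely for illustration, that the discharge rate of roughly 12,000 tHM per year stays constant from 2015 to 2090):

$$270{,}000 \;+\; 75 \times 12{,}000 \;\approx\; 1.17 \times 10^{6} \ \text{tHM},$$

which is in line with the cited estimate of more than 1 million tHM by 2090.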
There is no multinational spent fuel repository available today (Feiveson et al. 2011). The International Atomic Energy Agency states that a disposal service for used fuel would be an attractive proposition for smaller nuclear nations and new market entrants (International Atomic Energy Agency 2013). For instance, the mature energy market of Singapore has near-total reliance on imported natural gas for electricity (Energy Market Authority 2015a, b) to serve a developed population of 5.4 million residents. A moderate-sized nuclear sector (approximately 10 GW installed) (Energy Market Authority 2015a, b) offers high-certainty decarbonisation with enhanced fuel security. Fast-growing demand in the developing-nation market of Indonesia means electricity use is expected to almost triple from 2011 to 2030, predominantly based on coal (International Energy Agency 2013) with 25 GW of new coal generation planned from 2016 to 2025 (PWC Indonesia 2016). An approved, regional solution to used fuel management might catalyse acceleration of energy investment away from fossil fuels in this region and toward nuclear fission, with commensurate benefits in reduced greenhouse gas emissions and reduced air pollution.
Countries with already established nuclear power programmes also require services. Japan has accumulated US$35 billion for the construction and operation of a nuclear repository (World Nuclear Association 2014). South Korea faces impending shortages of licensed storage space for used nuclear fuel (Dalnoki-Veress et al. 2013; Cho 2014) and has expressed an urgent need for more storage (Kook 2013). In 2015, Taiwan Power Co. sought public bids worth US$356 million for offshore used fuel reprocessing services, at a price of nearly US$1,500 per kgHM (Rosner & Goldberg 2013), to be funded from its Nuclear Back-End fund, which currently totals US$7.6 billion (Platts 2015).
Australia, in contrast to its near neighbours in Asia, has long been considered a logical jurisdiction for the management of used nuclear fuel thanks to a convergence of factors.¹ Highly stable geology, finance, institutions and politics promote confidence in the international community. Australia has the advantage of respected nuclear regulatory bodies in the Australian Radiation Protection and Nuclear Safety Agency and the Australian Safeguards and Non-Proliferation Office and a 50-year history of successful operation of a research reactor and associated facilities (run by ANSTO). Australia has been ranked first in the world for the last three years for nuclear security (Minister for Foreign Affairs 2016). Australia's institutions retain the justified confidence of the international community.
The establishment of the South Australian Nuclear Fuel Cycle Royal Commission in 2015 resulted in a detailed examination of the potential for Australia's expanded involvement in the nuclear fuel cycle. Its terms of reference included exploring opportunities that may lie in the back end of the fuel cycle, as well as the potential for generation of electricity from nuclear reactors.
The Royal Commission delivered findings in May 2016 (Nuclear Fuel Cycle Royal Commission 2016). It ruled out any involvement in the development of advanced nuclear technologies in South Australia in the short term, including reactor technologies capable of recycling used nuclear fuel. Related investigations of the used fuel management and disposal market were thus limited in scope to geological disposal concepts. However, the same analysis identified the potential future pathway of used fuel for 'new generations of nuclear reactors' that could 'both provide an income stream and avoid some significant costs', choosing to leave this as un-modelled upside (Cook et al. 2016). These decisions left potentially viable pathways unexamined. Given that (i) the cost of a geological disposal facility has been estimated at AU$33.4 billion (Cook et al. 2016); (ii) the lead time to emplacement in geological disposal is estimated at 28 years (Cook et al. 2016); and (iii) there is a demonstrable need for global-scale generation of clean electricity and heat, we argue it is important for any jurisdiction to explore, from the outset, pathways that consider the recycling of used fuel and the development of advanced nuclear reactors. If sufficiently large economic benefits can be demonstrated, an argument can be formed for inclusion of advanced nuclear technology deployment in policy options for managing the back end of the nuclear fuel cycle.

1. A major research programme in the 1990s by Pangea Resources identified Australia as the optimal siting for a multinational geological waste repository for spent nuclear fuel. The proposal failed to find support among the Australian Government and public and was abandoned. For more information, see the World Nuclear Association webpage International Nuclear Waste Disposal Concepts.
Given that the component parts of a comprehensive recycling solution to used fuel management are either well established or ready for commercialisation, we sought to investigate a pathway not considered by the Royal Commission, namely, whether the implementation of such an integrated solution might be economically beneficial, by defining a project and assessing the business case. In this paper, we discuss the proposed project and the outcomes of our assessment of the business case.
Forming a Viable Solution
Although technically well supported, securing a radiotoxic waste product in the form of used nuclear fuel in geological disposal, for potentially hundreds of centuries, presents a worrying philosophical problem for any society. We therefore chose to assess the economic viability of an alternative technical pathway based on
• an above-ground independent spent fuel storage installation (ISFSI) (discussed later), to be developed synergistically with
• modern, full-fuel recycling fast neutron nuclear reactors and low-cost, high-certainty disposal techniques for eventual waste streams.
An ISFSI refers to a stand-alone facility for the containment of used nuclear fuel in dry casks for a period of decades (Casey Durst 2012). Cumulative international experience in interim management of used nuclear fuel provides a vast technical and operational record of practices (International Atomic Energy Agency 2007; Werner 2012). A recent ruling from the US Nuclear Regulatory Commission stated that used nuclear fuel may legally be stored safely in an ISFSI for around a century (Werner 2012). The advantages of this approach have been documented, along with operational and maintenance requirements (Bunn et al. 2001; Hamal et al. 2011; Rosner & Goldberg 2013), the physical resilience of the containment (Lee et al. 2014) and end-of-life considerations (Howard & van den Akker 2014). One identified advantage is retaining flexibility to deploy alternative solutions such as fuel recycling.
All constituent heavy-metal elements of used nuclear fuel, other than the 3-5 per cent comprising fission products (the isotopes created from uranium after it has been fissioned in a reactor), can be recycled as fuel for a fast neutron reactor. This first requires electrolytic reduction to convert oxide fuel to metal and remove most of the fission product gases, followed by electrorefining to further cleanse the fuel of fission products and, finally, segregation of the main metals (uranium, plutonium, minor actinides) for the fabrication of new fuel rods (Argonne National Laboratories/Merrick and Company 2015). The viability of this process, known as pyroprocessing, was established many years ago at the level of high-capacity testing (Argonne National Laboratories/US Department of Energy Undated). Research and investigation into pyroprocessing has continued to the present day at Idaho National Laboratories (Simpson 2012), permitting refinement of the process towards commercialisation. Detailed design and costing are available for a commercial-scale oxide-to-metal fuel conversion and refabrication facility, demonstrating the feasibility of a closed fuel recycling facility operating at a rate of 100 t year⁻¹ (Argonne National Laboratories/Merrick and Company 2015). Such a facility is included as a component in our project.
The impact of such developments on the goals of nuclear non-proliferation must be examined carefully. Nuclear safeguards are rendered more effective by technologies with intrinsic technical barriers to nefarious use. Materials directly usable for weapons cannot be produced by pyroprocessing. The plutonium product is inherently co-mingled with minor actinides, uranium and 'hot' trace fission products (Hannum et al. 1996), because the separation is electrolytic rather than chemical. Pyroprocessing is thus far more proliferation resistant than the existing aqueous-chemical plutonium-uranium extraction process (known as PUREX, which has been used since the 1940s). Recycling processes take place via remote handling in hot cells. This presents physical-radiological barriers that increase the ease of monitoring and provide the fuel with a 'self-protecting' barrier that renders the fissile material difficult to access and divert (Till & Chang 2011). Furthermore, the responsible centralisation of the used fuel material in a single approved location with international oversight would assuredly deliver a net security benefit at the global scale (Evans & Kawaguchi 2009).
Pairing the recycling technology with an advanced fast neutron reactor unlocks the full benefits of the used fuel material. One example of this technology is the Power Reactor Innovative Small Module (PRISM) from GE Hitachi (2014). Each pair of PRISM modules offers 622 MWe of dispatchable, near-zero-carbon² generation from two nuclear reactors of 311 MWe each. This size presents no barrier to connection in the Australian National Electricity Market, including in smaller regions like South Australia (Electranet 2012). With flexibility in core configuration, the PRISM can offer a conversion ratio (transmutation of fertile to fissile isotopes of actinide elements) of <1 or >1, providing an effective, direct route to net consumption and rapid elimination of long-lived material, or alternatively rendering existing used fuel a potentially vast source of further energy (Hannum et al. 1996; Triplett et al. 2010). Following a fuel cycle, the recycling facility cleans the metal fuel and re-casts new metal fuel pins with the addition of make-up material from the used fuel stockpile (Argonne National Laboratories/US Department of Energy Undated). The removed impurities, mostly fission products, are small in mass and short-lived, rendering their management and disposal well within institutional capabilities.
With the inherent safety properties that accompany the use of metal fuel and metal coolant (Wade et al. 1997; Triplett et al. 2010; Till & Chang 2011; International Atomic Energy Agency 2012; Brook et al. 2014), PRISM has the necessary design attributes of a successful nuclear energy system that could feasibly be deployed in the near term, and provides sufficient data for consideration and assessment in our project.
It is important to consider why other nations may not be actively pursuing this technology commercialisation pathway. Densely populated, fast-growing economies across Asia need the reliable clean energy output that a functioning nuclear sector offers, in order to support broader economic development. The pursuit of solutions to the back end of the fuel cycle is not, of itself, a priority, particularly while current generation nuclear fuel remains low cost and reliable in supply. For other nations, the level of interest in implementing a technology-based solution may be higher. However, idiosyncrasies of geology, climate and geopolitics render them less suitable to housing such a group of facilities, with high barriers to implementation. Finally, a compelling commercial case may be weak on a nation-by-nation basis, whereas aggregating the proceeds of multiple national used fuel budgets at one multinational facility changes that commercial equation.

2. In this context, zero-carbon refers to the point of generation. While all generation sources have embedded carbon dioxide emissions from across the life cycle, nuclear reactors are among the least carbon-intensive energy sources across the full life cycle. The reactors under discussion here, which recycle fuel rather than mining it, will be even lower in life cycle emissions. Life cycle emission results from the National Renewable Energy Laboratory are found at http://www.nrel.gov/analysis/sustain_lca_results.html
Determining the Business Case
Our project thus merges (i) an ISFSI; (ii) a fuel recycling facility; and (iii) metal-fuelled, metal-cooled fast breeder reactors based on the PRISM design. For eventual disposal of fission products, our project assumes the use of deep borehole disposal (Brady et al. 2012). The full details of the business case assumptions are provided in Data S1.
In order to capture a range of potential outcomes, we estimated the business case for nine scenarios and selected three illustrative scenarios (low, mid and high) based on a range of assumptions for key variables. These scenarios are defined in Table 1. The capital and operating costs for all scenarios are shown in Tables 2 and 3, respectively, and described in further detail in Data S1. These assumptions were applied to determine net present value (NPV) of the integrated process, including disposal of fission products in deep boreholes, over a 30-year project life at a 4 per cent discount rate. The impact of different discount rates ranging from 1 to 10 per cent is shown in Data S2. The NPV outcomes at 4 per cent discount rate are shown in Figure 1.
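To make the discounting mechanics concrete, the sketch below shows how an NPV of this kind is computed from annual net cash flows at a 4 per cent discount rate over a 30-year project life. This is a minimal sketch: the cash-flow figures are invented placeholders for illustration, not values from our scenarios or from Data S1.

```python
# Minimal NPV sketch for a 30-year project at a 4% discount rate.
# The cash flows below are illustrative placeholders, not the
# scenario values used in the paper (see Data S1 for those).

def npv(cash_flows, rate):
    """Discount a list of annual net cash flows (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical profile (AU$ billion): heavy early capital spend,
# then net operating revenue once the facilities are running.
capex_years = [-3.0, -3.0, -2.0]   # construction period
operating_years = [2.5] * 27       # remaining project life
project = capex_years + operating_years

print(f"NPV at 4%: AU${npv(project, 0.04):.1f} billion")
```

The same function re-run at rates from 1 to 10 per cent reproduces the kind of discount-rate sensitivity reported in Data S2.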
The business case reveals a multibillion dollar NPV in all scenarios except the illustrative low scenario. The illustrative mid-range scenario delivers NPV of AU$30.9 billion at 4 per cent discount rate.
Comparing Findings with the Royal Commission
In the analysis supporting the final report of the Royal Commission (Cook et al. 2016), a similar project was assessed, predicated on first establishing above-ground storage for used nuclear fuel. Key differences in the favoured scenario modelled by the Royal Commission include
• greater assumed volumes of material to be stored, that is, a bigger project;
• a higher assumed base case 'price to charge' for acceptance of used fuel;
• a longer assumed period for accepting used fuel material;
• no integrated commercialisation of recycling and advanced reactor technology;
• no revenues related to the sale of electricity from nuclear power plants;
• establishment of a permanent geological disposal facility;
• revenues from the acceptance of intermediate level waste.
A compare-and-contrast between the base case of our analysis and the base case of the Royal Commission is given below.
As shown in Table 4, as well as recommending a much larger role in accepting used fuel, the Royal Commission directs revenue (at a capital expenditure of AU$33.4 billion) towards geological disposal, while our concept directs revenue toward recycling and clean electricity generation (at a capital expenditure of <AU$10 billion). Both projects delivered NPV in the tens of billions. The larger NPV of the Royal Commission project is substantially explained by (i) the much larger assumed revenues from accepting 2.3 times more used fuel material; (ii) accepting intermediate level waste for disposal; and (iii) the higher assumed price paid (AU$1.75 million ton⁻¹) for the used fuel material (our assumed base case price was AU$1.37 million ton⁻¹). In Table 5, the results of our analysis are updated to reflect the higher assumed price for used fuel acceptance identified by the Royal Commission. The NPV changes from AU$30.9 billion to AU$44.1 billion.
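As a back-of-envelope check on how the 'price to charge' assumption propagates to NPV, the sketch below treats acceptance revenue as linear in the assumed price. The discounted tonnage figure is a hypothetical placeholder, chosen only to reproduce the direction and rough magnitude of the reported change; it is not a value from the analysis.

```python
# Hypothetical price-sensitivity sketch: acceptance revenue scales
# linearly with the assumed price per tonne, so NPV shifts by
# (price difference) x (present-value-weighted tonnage accepted).
# PV_TONNES is a made-up placeholder, not a paper value.

BASE_NPV = 30.9      # AU$ billion at AU$1.37 million/t (mid scenario)
BASE_PRICE = 1.37e6  # AU$ per tonne
PV_TONNES = 35_000   # hypothetical discounted tonnes accepted

def npv_at_price(price_aud_per_t):
    delta = (price_aud_per_t - BASE_PRICE) * PV_TONNES / 1e9  # AU$ billion
    return BASE_NPV + delta

print(f"NPV at AU$1.75M/t: AU${npv_at_price(1.75e6):.1f} billion")
# With ~35,000 discounted tonnes this lands near the AU$44.1 billion
# obtained when the Royal Commission's higher price is substituted.
```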
On the basis of this analysis, we argue that commercial development of advanced nuclear reactors, treated as principally a recycling facility paired with an ISFSI, is economically viable immediately. Deploying advanced nuclear reactors for their recycling capabilities represents an innovative approach to both the development and deployment of low-carbon energy technologies and the resolution of longstanding challenges related to used nuclear fuel.
Limitations and Uncertainties
The novel nature of this business case involves inevitable uncertainties. Our transportation costs were based on inclusive estimates for a national facility serving the United States using ground transport only. In addition to such ground transport costs, ocean-going transport will be required to South Australia. Recent work suggests ocean transport costs to South Australia of AU$7,500 to AU$37,500 tHM⁻¹ (Cook et al. 2016), with this range covering a range of potential customer nations. Given that the assessed 'price to charge' is approximately AU$1.3 million tHM⁻¹, the present value outcomes of this study will not be materially altered by including these transport costs.
The lack of services, globally, for the management of used nuclear fuel means that the assumed 'price to charge' was based on desktop sources. This is an obvious limitation; such a market is not yet established and tested. However, more recent willingness-to-pay analysis supported a higher base case price than that used in our analysis (Cook et al. 2016), suggesting that any uncertainty is likely to be positive for the present value outcomes of our proposed pathway ( Table 5). The sensitivity of our project to the assumed capital expenditure of the nuclear reactors was tested in a cost overrun scenario (Data S4), which found positive NPV in all but the low scenario.
Conclusion
The South Australian Nuclear Fuel Cycle Royal Commission provided an important opportunity for an evidence-based reappraisal of the opportunities available in serving the back end of the nuclear fuel cycle. However, the analysis undertaken under that process chose a deliberately constrained pathway that neglected to examine opportunities based on advanced nuclear technologies and recycling of used nuclear fuel. Our proposal identifies the opportunity for an integrated financial project to commercialise new technologies that allow the complete recycling of used nuclear fuel, with the production of abundant, near-zero-carbon clean electricity (and industrial heat) as a result. If implemented, this would make an important contribution to the fight against climate change and nuclear proliferation and to the containment of pollution, while potentially offering AU$30-44 billion (2015 dollars) in present value. Implementation of an integrated solution could also play a vital role in shifting the balance of energy decision-making, particularly in the fast-growing Asian region, away from polluting fossil fuels and towards clean, near-zero-carbon nuclear generation by providing assurance of responsible and secure centralised management of used nuclear fuel.
"year": 2017,
"sha1": "1f389bc239e7f9e165c1835e42e390cfbfc3e097",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/app5.164",
"oa_status": "GOLD",
"pdf_src": "ElsevierPush",
"pdf_hash": "9ee665fc5156d9d4d1d990d5409c909e74e652d4",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Business"
]
} |
Corylus avellana L. Aroma Blueprint: Potent Odorants Signatures in the Volatilome of High Quality Hazelnuts
The volatilome of hazelnuts (Corylus avellana L.) encrypts information about phenotype expression as a function of cultivar/origin, post-harvest practices and their impact on the primary metabolome, storage conditions and shelf-life, spoilage, and quality deterioration. Moreover, within the bulk of detectable volatiles, just a few play a key role in defining the distinctive aroma (i.e., aroma blueprint) and conferring the characteristic hedonic profile. In particular, in raw hazelnuts, key-odorants as defined by sensomics are: 2,3-diethyl-5-methylpyrazine (musty and nutty); 2-acetyl-1,4,5,6-tetrahydropyridine (caramel); 2-acetyl-1-pyrroline (popcorn-like); 2-acetyl-3,4,5,6-tetrahydropyridine (roasted, caramel); 3-(methylthio)-propanal (cooked potato); 3-(methylthio)propionaldehyde (musty, earthy); 3,7-dimethylocta-1,6-dien-3-ol/linalool (citrus, floral); 3-methyl-4-heptanone (fruity, nutty); and 5-methyl-(E)-2-hepten-4-one (nutty, fruity). Dry-roasting of hazelnut kernels triggers the formation of additional potent odorants, likely contributing to the pleasant aroma of roasted nuts. Among the newly formed aromas, 2,3-pentanedione (buttery); 2-propionyl-1-pyrroline (popcorn-like); 3-methylbutanal (malty); 4-hydroxy-2,5-dimethyl-3(2H)-furanone (caramel); and dimethyl trisulfide (sulfurous, cabbage) are worth mentioning. The review focuses on high-quality hazelnuts adopted as premium primary material by the confectionery industry. Information on primary and secondary/specialized metabolite distribution introduces more specialized sections focused on volatilome chemical dimensions and their correlation to cultivar/origin, post-harvest practices and storage, and spoilage phenomena. Sensory-driven studies, based on sensomic principles, provide insights on the aroma blueprint of raw and roasted hazelnuts, while robust correlations between non-volatile precursors and key-aroma compounds lay solid foundations for the conceptualization of aroma potential.
INTRODUCTION
European hazelnut (Corylus avellana L.) belongs to the Corylus genus of the Betulaceae birch family and is one of the 25 existing hazelnut species (Erdogan and Mehlenbacher, 2000). Originally from the Black Sea region, C. avellana L. has been cultivated since Roman times (Boccacci and Botta, 2009), but intensive production started to expand in the 1930s in the Langhe region in Piemonte (North-West of Italy), driven by demand from the confectionery industry, and since 1964 in Turkey (Bozoglu, 2005). C. avellana L. is the main species of interest for industrial applications due to high-quality characteristics such as larger kernels and thinner shells (Erdogan and Mehlenbacher, 2000).
Nowadays, hazelnuts represent a relatively small, yet consistently growing, market segment in both developed countries and emerging economies, projected to grow by over 10% in the next 5 years. According to the FAO (2019), during 2019 more than 1 million tons of in-shell hazelnuts were harvested. The global production during 2017-2019 is visually summarized in Figure 1. Turkey is the main producer, covering more than 67% of the global production, followed by Italy (≈12%), Azerbaijan (≈5%), and the USA (≈4%).
The Turkish production is mainly located in two areas along the Black Sea: the eastern area, which accounts for 60% and includes the provinces of Samsun, Ordu, Giresun, and Trabzon, and the western area, named Akçakoca, which accounts for the other 40% and includes the provinces of Sakarya, Zonguldak, Bolu, and Düzce. Turkish hazelnuts are usually supplied as a regional blend (e.g., Akçakoca blend and Giresun blend), and depending on the area of interest, different cultivars are more abundant: in the eastern area (Giresun/Ordu) the most prominent cultivars are the Tombul, Çakildak, Mincane, and Palaz, while in the western area (Akçakoca) the leading cultivars are the Karafindik, Mincane, Çakildak, and Foşa (Islam, 2018). Among these cultivars, Tombul is the most abundant and has been described as the best Turkish cultivar in terms of overall kernel quality (Balik et al., 2018).
Italy is the second-largest producer with four main production areas: Campania (≈32%), where the cultivars are the Mortarella, San Giovanni, and Tonda di Giffoni; Piemonte (≈30%), with the cultivar Tonda Gentile Trilobata; Lazio (≈25%), with the cultivars Tonda Gentile Romana and Nocchione; and Sicilia (≈11%) where the cultivars are primarily used as fresh products and do not have an industrial interest.
In Azerbaijan, the main cultivar is the Ata-Baba which accounts for about 80% of the total hazelnut production and is concentrated in the Qabala and Qakh districts in the northwest of the country.
In the USA the production is concentrated in Oregon, on the Pacific coast, which accounts for ≈99% of the total production. Barcelona is the main cultivar (≈60%), but many cultivars from artificial breeding are continuously developed to improve the resistance to Phytocoptella avellanae, a mite affecting local trees. Table 1 reports a summary of the main cultivars and harvest regions of the four largest producers. Table 2 summarizes the main characteristics of the most relevant hazelnut cultivars. It is important to highlight that industrially appealing hazelnut cultivar characteristics include a thin shell, easy cuticle removal (high blanching rate), high shelling yield (kernel/nut ratio >45%), globular kernels, and a kernel caliber ideally around 13 mm (Caramiello et al., 2000).
This review focuses on the chemistry of the volatilome of high-quality hazelnuts; in particular, it systematically presents information about the distribution of potent odorants and key-aroma compounds in raw and roasted hazelnuts and critically discusses them given the increasing market demand for high-quality products for the confectionery industry.
The impact of post-harvest practices, storage conditions, and roasting (i.e., the key technological process) is examined for its effect on volatile signatures and the development of positive and negative sensory attributes. The correlation between primary metabolites and key-odorant signatures, through the aroma potential concept (Cialiè Rosso et al., 2018, 2021), is also discussed as a new perspective for the application of modern omics strategies to hazelnut research (Miguel et al., 2011; Stilo et al., 2021a).
HAZELNUTS COMPOSITION AND ITS CORRELATION TO SENSORY PROPERTIES
A characteristic composition with a peculiar balance between primary and specialized (formerly referred to as secondary) metabolites is at the basis of the pleasant sensory profile and potential health benefits of hazelnuts. Taste-active components, interacting with chemoreceptors located in the oral cavity, trigger basic taste sensations (i.e., sweet, acid, bitter, salty, and umami); they are generally characterized by low volatility accompanied by high polarity and water solubility. In hazelnut, major taste-active compounds are free amino acids, sugars, organic acids (i.e., primary metabolites), phenolic acids, and condensed tannins (i.e., specialized plant metabolites) (Alasalvar et al., 2010, 2012b). Some of these non-volatile constituents belonging to the class of specialized metabolites (i.e., phenolic derivatives as aglycones or glycosides) can also trigger trigeminal sensations (i.e., chemesthesis) while eliciting velvety and astringency sensations (Schieberle and Hofmann, 2011).
On the other hand, aroma-active components are characterized by low water solubility, medium-to-low polarity, and molecular weight below 300 Da. The interaction of odor-active volatiles with the array of Olfactory Receptors (ORs) in the olfactory epithelium activates a complex signal pattern, i.e., the Receptor Code (Firestein, 2001; Breer et al., 2006; Audouze et al., 2014; Dunkel et al., 2014). These olfactory stimuli activate the nervous system for cognitive mechanisms of learning and experience and, as the ultimate event, the olfactory perception.
Aroma perception alone is responsible for up to 80% of the whole hedonic profile (Dunkel et al., 2014) of food. The synergy between taste and aroma perception, also referred to as flavor (Schieberle and Hofmann, 2011), is at the basis of positive consumer experience and of the hedonic quality of hazelnuts.
This review has hazelnut volatilome as a primary focus and, within the bulk of its detectable volatiles, those potent odorants capable of eliciting positive or negative sensations perceivable during hazelnut consumption.
The next section briefly summarizes the bulk composition of raw hazelnuts by presenting the major primary and specialized metabolite classes, with a focus on those components that are also potent odorant precursors. Lipids account for 50-73% of the total composition (Köksal et al., 2006), followed by carbohydrates (10-22%) and proteins (10-24%); moisture (≈5%) and ashes (≈3%) complete the profile.
Lipids
The lipid fraction is characterized by a saponifiable portion further classified into a polar fraction, which accounts for 1.2% of the total amount and is represented by phosphatidylcholine, phosphatidylethanolamine, and phosphatidylinositol (Alasalvar et al., 2003b), and an apolar fraction of about 98.8%, consisting of triacylglycerols whose principal contributor is oleic acid (18:1ω9) at 77.5-82.95%, followed by linoleic acid (18:2ω6) at 7.55-13.69% and palmitic acid (16:0) at 4.85-5.79% of the total. A summary of the most abundant fatty acids present in 16 varieties/cultivars is presented in Table 3 (Köksal et al., 2006).
The unsaponifiable fraction is mostly characterized by sterols, which differ in composition between cultivars and geographical origins and decrease along the shelf-life (Ilyasoglu, 2015; Ghisoni et al., 2020). The three most abundant sterols are β-sitosterol, accounting for 85% of the total content, campesterol at 5-7%, and Δ5-avenasterol, which achieves on average 2-4% of the total composition (Ilyasoglu, 2015). The sterol fraction plays an important role in authentication and fraud counteraction: it is used to identify olive oils adulterated with lower-quality hazelnut oils (Parcerisa et al., 1999; Zabaras and Gordon, 2004) and might further be used in the discrimination of different hazelnut cultivars (Ghisoni et al., 2020).
The lipid fraction is at the basis of major sensory defects developed during hazelnut storage (Kinderlerer and Johnson, 1992; Alasalvar et al., 2003b; Azarbad and Jeleń, 2015; Ghirardello et al., 2016; Cialiè Rosso et al., 2018); unsaturated fatty acids are prone to autoxidation, forming hydroperoxide derivatives that are further degraded into secondary products with lower polarity, molecular weight, and odor threshold (OT) (Belitz et al., 2009). Moreover, TAGs are hydrolyzed by enzymatic activity, mainly promoted by endogenous esterases and lipases, and the resulting free fatty acids (FFAs) are more easily oxidized (Cialiè Rosso et al., 2021).
Proteins and Free Amino Acids
The hazelnut protein content is on average 18% by weight (Kamal-Eldin and Moreau, 2009). Proteins can be classified into two main groups, albumins and globulins, with an amino acid composition where glutamic acid (2.84-3.71 g/100 g), arginine (1.87-2.21 g/100 g), and aspartic acid (1.33-1.68 g/100 g) are the most abundant, together reaching 30% of the total fraction (Durak et al., 1999; Köksal et al., 2006; Ramalhosa et al., 2011). Table 4 shows in more detail the free amino acid profiles of hazelnuts from different geographical areas. Caligiani et al. (2014) mapped the primary metabolome of selected raw and roasted hazelnuts from Tonda Gentile Trilobata (Piemonte TGT, Italy), Tonda Giffoni (Lazio, Italy), and Turkish varieties by nuclear magnetic resonance (NMR) spectroscopy. Tryptophan was the discriminant marker for the Turkish hazelnuts, while higher concentrations of choline and acetic acid characterized TGT samples.
Besides their role as markers for authentication, amino acids are also non-volatile precursors of several potent odorants responsible for pleasant notes in roasted hazelnuts. In a recent study, Cialiè Rosso et al. (2021) investigated the amino acid patterns in raw and roasted hazelnuts, observing linear correlations between aroma precursors in raw hazelnuts and potent odorants in lab-scale roasted hazelnuts.
Sugars reach 17% of the total composition; they include sucrose, the oligosaccharides raffinose and stachyose, the monosaccharides glucose and fructose, and polyalcohols (myo-inositol). The total sugar content of hazelnut is, on average, around 3.58 g/100 g, with sucrose accounting for about 74% of the total (Alasalvar and Shahidi, 2008; Sciubba et al., 2014). Sugars contribute directly and indirectly to the hazelnut sensory profile: they are responsible for the sweet taste of raw nuts and represent fundamental precursors of aroma-active compounds, since they react within the Maillard reaction framework and degrade during roasting (Cristofori et al., 2008; Kiefl, 2013; Taş and Gökmen, 2018). Bonvehí and Coll (1993) studied the carbohydrate fraction and its variations as a function of variety/cultivar and harvest region; their findings indicated that starch and fiber were almost stable, while soluble sugars were highly variable within varieties, with the mountain-harvested varieties containing a higher sucrose amount.
Phenol signatures were effectively exploited for cultivar discrimination by Ciarmiello et al. (2014) who analyzed 29 European cultivars and, based on the total polyphenolic content and the qualitative composition, defined some potential markers for quality control.
Phenols and polyphenols, also present in the hazelnut perisperm cuticle, are responsible for taste and chemesthetic attributes (bitterness, astringency, velvety sensations), and during roasting may form phenolic volatiles eliciting smoky and phenolic odors, the latter being distinctive of high-quality cultivars (Burdack-Freitag and Schieberle, 2012). Another class of specialized metabolites is that of monoterpenoids: they are biosynthesized from C5 precursors (i.e., dimethyl allyl pyrophosphate and isopentenyl pyrophosphate) in plants and are present in raw and roasted hazelnut kernels as distinctive native signatures.
HAZELNUT VOLATILOME
The volatilome, also referred to as volatome (Phillips et al., 2013; Broza et al., 2015), "contains all of the volatile metabolites as well as other volatile organic and inorganic compounds that originate from an organism" (Amann et al., 2014), super-organism, or ecosystem. In line with this definition, all volatile metabolites present in the volatilome belong to the sample's metabolome, although this complex fraction may also contain degradation components or exogenously formed compounds not generated by plant metabolic processes [e.g., environmental contaminants, compounds formed by bacterial and mold metabolism (the "microbial cloud"; Meadow et al., 2015), etc.].
The volatilome is therefore a distinct entity from the metabolome; it gives access to a higher level of information about many biological phenomena related to plant and food quality through its multiple chemical dimensions (Giddings, 1995). The ultimate analytical solutions to investigate the food volatilome are those adopting high-resolution separations (e.g., mono-dimensional gas chromatography, 1DGC; heart-cut two-dimensional gas chromatography, H/C-2DGC; or comprehensive two-dimensional gas chromatography, GC×GC) combined with low-resolution or high-resolution mass spectrometry (MS). Insights on analytical strategies for comprehensive investigations of food volatiles are outside the scope of this review; however, interested readers may refer to the following papers (Sides et al., 2000; Tranchida et al., 2013; Cordero et al., 2015, 2019; Franchina et al., 2016; Pedrotti et al., 2021; Stilo et al., 2021a). The food volatilome is a complex mixture of volatiles belonging to several different chemical classes, as a function of the main metabolisms and reactions contributing to its expression. Regarding hazelnuts, native volatiles are those formed along the terpenoid biosynthesis, from isopentenyl pyrophosphate and dimethyl allyl pyrophosphate as precursors (Dewick, 1986). They are a direct expression of the plant phenotype, although recent findings correlated their presence to bacteria and mold development during storage (Stilo et al., 2021b).
Other important volatiles present in the hazelnut volatilome are secondary products of lipid oxidation. They are degradation products (β-scission and hydroperoxide epi-dioxide decomposition) of fatty acid hydroperoxides formed by lipid autoxidation. Within this group, several low molecular weight carbonyl derivatives (linear saturated aldehydes, unsaturated aldehydes, methyl-ketones), hydrocarbons, alcohols, and short-chain fatty acids can be found (Kinderlerer and Johnson, 1992; Belitz et al., 2009). When post-harvest practices do not properly stabilize kernels before storage, fruit germination might occur, and bacteria and molds may find optimal conditions to grow (water activity aw, temperature, and substrate availability), increasing volatilome chemical complexity. As a consequence, primary and secondary alcohols, carboxylic acids from fermentation reactions (Cialiè Rosso et al., 2018), lactones from cyclization of hydroxyl-substituted fatty acids, furans from glucose and reducing sugar degradation, and some aromatic derivatives (benzaldehyde, phenyl ethyl alcohol, phenyl ethyl acetaldehyde, alkylated phenol derivatives) from non-volatile precursors like amino acids and phenolic compounds can be found (Cialiè Rosso et al., 2018).
The next paragraphs report and discuss major findings on the hazelnut volatilome by focusing on specific functional variables with a strong correlation to sensory quality. Alasalvar et al. (2004, 2012a) dedicated several research projects to delineating distinctive signatures of potent odorants in high-quality hazelnuts from Turkey. In a study on Giresun and Tombul hazelnuts (harvest year 2001), the authors applied dynamic headspace analysis (DHA) combined with gas chromatography coupled with mass spectrometry (GC-MS) for highly informative profiling of volatiles. A total of 79 volatile compounds were identified by matching linear retention indices (I T) and EI-MS spectral signatures with authentic standards and/or data from commercial databases. A total of 39 compounds were detected in raw hazelnuts; of them, 32 were identified as ketones (10), aldehydes (8), alcohols (5), aromatic hydrocarbons (5), and furans (4). Some of these components, as potent odorants, were positively correlated with specific and peculiar odor qualities revealed by descriptive sensory analysis (DSA) run by a trained panel.
Raw hazelnut volatilome mapping greatly improved, in terms of the number of volatiles detected and method sensitivity, with the application of GC×GC-TOF MS combined with high concentration capacity (HCC) sampling (Bicchi et al., 2004). Cialiè Rosso et al. reported the reliable identification of 133 volatiles in raw hazelnuts from Ordu (Turkey) and Lazio (Tonda Gentile Romana, Italy); most of them were already cross-mapped in other cultivars/origins (e.g., Tonda Giffoni, Tonda Gentile Trilobata, Mortarella, Akçakoca, Giresun, Trabzon, and Chile) by GC×GC-qMS (Cordero et al., 2010; Nicolotti et al., 2013a). Table 5 reports a consensus list of characteristic volatiles together with their experimental I T on polar columns, odor qualities, and OTs in oil or air.
Effect of Post-harvest and Storage on Volatilome Signatures
The evolution of the raw hazelnut volatilome along the shelf-life was explored by Cialiè Rosso et al. (2018) in a study on commercial samples of Tonda Gentile Romana and Ordu hazelnuts harvested in 2014. Samples were subjected to traditional sun-drying (D1, ≈30/35 °C) or artificial drying (D2) at low temperatures (≈18/20 °C) in industrial plants. To study the effect of storage conditions, temperatures of 5 and 18 °C (±0.1 °C) were tested in combination with atmosphere composition, either regular (NA: 78% N2-21% O2) or modified (MA: 99% N2-1% O2).
Volatiles and potent odorants were sampled by headspace solid-phase microextraction (HS-SPME) and analyzed by GC×GC-TOF MS equipped with a thermal modulator. Analytical conditions enabled a suitable sensitivity, including in the fingerprinting/profiling process potent odorants and several key-aroma compounds (Burdack-Freitag and Schieberle, 2012). The pattern of 133 known analytes, identified by matching EI-MS fragmentation patterns with those collected in commercial and in-house databases and I T calculated on the 1D (±15 units of tolerance), was explored by multivariate statistics to highlight relevant features (i.e., components) with meaningful variations along storage time and as a function of storage conditions. Explorative Principal Component Analysis (PCA) on analyte response data indicated a natural conformation of sample groups according to cultivar/geographical origin, followed by the impact of a secondary variable, post-harvest drying conditions. Supervised univariate analysis by Fisher ratio (F) highlighted as relevant variables for post-harvest a series of linear and branched alcohols (2-heptanol, 2-methyl-1-propanol, 3-methyl-1-butanol, 2-ethyl-1-hexanol, benzyl alcohol), several esters (ethyl acetate, butyl butanoate, 2-methyl-butyl propanoate), and acetic acid. Most of them were already associated with nut ripening or fermentation (Zhou et al., 2013). Of interest, 3-methyl-1-butanol (i.e., isoamyl alcohol), a well-known fermentation product in must and wines, is formed from L-leucine. 2-Methyl-1-propanol has instead L-valine as a precursor, while 2-heptanol is formed in tomatoes during ripening by β-ketoacid hydrolysis and subsequent decarboxylation (Fridman, 2005), and 2-ethyl-1-hexanol has been found in fermented soybean (Han et al., 2001).
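As an illustration of the Fisher-ratio screening step described above, the sketch below ranks features by between-class versus within-class variance (a simple unweighted formulation). The feature matrix and class labels are synthetic stand-ins for demonstration, not data from the study.

```python
import numpy as np

def fisher_ratio(x, labels):
    """Between-class variance over within-class variance for one feature."""
    classes = np.unique(labels)
    overall_mean = x.mean()
    between = sum((x[labels == c].mean() - overall_mean) ** 2
                  for c in classes) / (len(classes) - 1)
    within = sum(x[labels == c].var(ddof=1) for c in classes) / len(classes)
    return between / within

# Synthetic example: 12 samples x 4 features, two drying classes.
rng = np.random.default_rng(0)
labels = np.array(["sun"] * 6 + ["industrial"] * 6)
data = rng.normal(size=(12, 4))
data[labels == "sun", 0] += 3.0  # feature 0 differs between classes

ratios = [fisher_ratio(data[:, j], labels) for j in range(data.shape[1])]
print(sorted(enumerate(ratios), key=lambda t: -t[1]))  # feature 0 tops the list
```

Features with the largest ratios are those whose variation between drying classes dominates their within-class scatter, which is the selection logic behind the analyte lists reported above.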
By observing the evolution of 37 potent odorants within the detectable volatilome, Cialiè Rosso et al. (2018) confirmed the dominant role of drying conditions above cultivar/origin. Moreover, most potent odorants, with OTs up to 2,500 µg/L, were closely correlated (r > 0.800) to storage time. Of them, 1-heptanol (green, chemical), 2-octanol (metal, burnt), 1-octen-3-ol (mushroom), (E)-2-heptenal (fatty, almond), hexanal (leaf-like, green), heptanal (fatty), octanal (fatty), and nonanal (tallowy, fruity) are of great interest, since they might impart unpleasant notes in hazelnuts stored within 12 months. Some of the selected potent odorants showed increasing trends over time, achieving their maximum abundance at 12 months of storage. Those correlated to lipid oxidation, i.e., degradation products of fatty acid hydroperoxides (hexanal, octanal, and (E)-2-heptenal) eliciting fatty and green-leafy notes (Pastorelli et al., 2007; Ghirardello et al., 2013), had a marked increase over time, with higher relative ratios in samples subjected to sun drying. For those samples dried at lower temperatures in industrial plants, i.e., Tonda Gentile Romana, limited oxidation was detected, with amounts of hexanal and octanal 2.6 and 2.8 times lower compared to standard drying conditions. Moreover, 2-octanol and 1-octen-3-ol, formed by linoleic acid hydroperoxide cleavage promoted by fungal lipoxygenase/hydroperoxide lyase enzymes (Hung et al., 2014), are likely responsible for the metallic and mushroom-like notes. Their relative abundance was higher in Ordu samples, accompanied by a marked increase during storage conducted in less protective conditions (18 °C and NA: 78% N2-21% O2). On the other hand, the same analytes were below the method LOD in Tonda Gentile Romana hazelnuts dried at low temperatures.
Results on oxidative stability/instability are likely correlated to the hazelnut fatty acid profiles reported in several studies. However, recent findings on the evolution of FFAs along the shelf-life (Cialiè Rosso et al., 2021) suggest that post-harvest has a decisive impact on esterase/lipase activity; FFAs are more prone to oxidation than those esterified in TAGs, registering 10 times faster oxidation kinetics (Frega et al., 1999).
Potent odorant profiling indicated the decisive effect of post-harvest drying in preserving hazelnut quality and oxidation status; moreover, the marked development of potent odorants with unpleasant notes, formed by autoxidation of essential fatty acids, interestingly evokes the hypothesis that key flavor-related volatiles in vegetable foods are generated from essential nutrients and health-promoting components (e.g., amino acids, fatty acids, and carotenoids), thereby informing on the actual nutritional value of the product (Goff and Klee, 2006).
Hazelnut Spoilage Volatiles Signatures
Hazelnut quality might be degraded during growing, harvesting, and storage (Giraudo et al., 2018), resulting in physical and sensorial defects. The industry implements several quality control procedures for incoming batches to check for physical damage: insect-damaged, rotten, twin, and yellowed hazelnut kernels (Mehlenbacher et al., 1993; Caligiani et al., 2014; Belviso et al., 2017; Göncüoglu Taş and Gökmen, 2017; Yuan et al., 2018). Visual inspection by trained staff on representative samples is, nowadays, the quality control procedure of choice in the hazelnut value chain. This approach is highly time-consuming and might be strongly influenced by the level of experience and sensitivity of the operator (Giraudo et al., 2018).
Complex carbohydrates undergo enzymatic hydrolysis when attacked by worms, bacteria, or molds, providing them an available source of metabolic energy. The autocatalytic oxidation of unsaturated fatty acids and the hydrolysis of lipids into free fatty acids are also triggered. These reactions result in a negatively altered aroma and taste, i.e., the rotten defect of the fruits (Giraudo et al., 2018). Physical damage, such as dark kernels with white/brown spots, arises from insect bites that transfer saliva enzymes to the nut (e.g., proteinases, esterases, lipases, and amylases). Furthermore, damaged fruit can be more easily attacked by Aspergillus and Penicillium species, the most common fungi found after rainy seasons (Pscheidt et al., 2019), whose metabolic activity might negatively impact the sensory properties of the nut. Amrein et al. (2010, 2014) identified prenyl ethyl ether (PRE) as the cause of a solvent off-note in ground-harvested hazelnuts. The authors applied a sensory-oriented strategy based on olfactometry coupled to GC (i.e., GC-O), followed by odor value calculation and spiking experiments.
Solid-phase microextraction (SPME) and simultaneous distillation-extraction (SDE) were adopted to extract and isolate volatiles from hazelnut batches showing the solvent off-note. Based on GC-O, performed by a trained panel, an odor active region showing a metallic solvent-like aroma impression was identified. The presence of prenyl ethyl ether was confirmed by GC-MS and retention data.
Quantification results for a series of major volatiles showed that linear saturated aldehydes (hexanal, octanal, and nonanal), reported in many studies as responsible for rancid off-notes, did not show meaningful differences between defective and control samples. Interestingly, in defective samples that reported higher amounts of prenyl ethyl ether, several terpenes (myrcene, limonene, and valencene) were also present in high concentrations.
The authors went on to seek the possible formation pathway of prenyl ethyl ether. Prenyl alcohols can be formed by Aspergillus, Rhizopus, Penicillium, Eurotium, Mucor, and Fusarium; therefore, model experiments contaminating hazelnuts with the abovementioned molds were conducted, unfortunately without success. To date, it is assumed that this solvent-metallic component might be formed as a consequence of mold contamination in the presence of unknown co-factors that play a key role in triggering metabolism activation (Amrein et al., 2010, 2014). Although the causal route of many spoilage defects is still unknown, instrumental methods capable of detecting odorant patterns responsible for perceivable off-odors are of great interest. Moreover, by the implementation of quantitative analytical workflows, quality screening becomes more objective, and highly informative chemical analysis can comprehensively cover many needs, e.g., spoilage detection, rancidity, and authentication markers (Cordero et al., 2010; Kiefl et al., 2012; Cialiè Rosso et al., 2018). Stilo et al. (2021b) developed an effective strategy to detect odorant patterns in selected spoiled hazelnuts showing perceivable defects. A sensory panel screened, by flash profiling (FP) (Dairou and Sieffermann, 2002; Delarue and Sieffermann, 2004), Ordu and Akçakoca samples harvested in 2015 and 2016 at different shelf-life stages. Samples (n = 29) were therefore classified into seven sensory classes: "good quality" (OK) samples were those eliciting positive attributes and none of the negative attributes arising from the consensus list; "Defected" samples were sub-classified into six groups based on the predominant off-flavor perceived: Mold, Mold-rancid-solvent, Rancid, Rancid-stale, Rancid-solvent, and Uncoded KO.
Volatiles were extracted and analyzed by high-resolution fingerprinting using HS-SPME followed by GC×GC-TOF MS; 350 untargeted and targeted features were tracked and aligned over all the samples. By unsupervised statistics, i.e., hierarchical clustering based on Pearson correlation, a series of informative volatiles showed a strong correlation with the Mold and Mold-rancid-solvent classes. Volatile fatty acids (hexanoic, heptanoic, octanoic, and nonanoic acid), lactones (γ-hexalactone, δ-heptalactone, γ-heptalactone, γ-octalactone, γ-nonalactone), 1-nonanol, and 3-nonen-2-one were distinctive and enabled independent clustering of spoiled hazelnuts from good (OK) samples. On the other hand, OK samples were connoted by higher amounts of short-chain linear alcohols (2-pentanol and 2-heptanol), butyl ether, butyl benzoate, and 4-heptanone, odorants mainly correlated to positive attributes of balsamic, fruity, and herbal notes.
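A minimal sketch of the unsupervised step mentioned above, hierarchical clustering on a Pearson-correlation distance between sample profiles, is given below; the volatile/sample matrix is randomly generated for illustration and does not reproduce the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Synthetic matrix: rows = samples, columns = aligned volatile features.
rng = np.random.default_rng(1)
profiles = rng.normal(size=(10, 50))
profiles[:5] += 2.0  # make the first five samples resemble each other

# Pearson-correlation distance between sample profiles: d = 1 - r.
corr = np.corrcoef(profiles)
dist = squareform(1.0 - corr, checks=False)

tree = linkage(dist, method="average")
print(fcluster(tree, t=2, criterion="maxclust"))  # two recovered groups
```

Correlation distance groups samples whose volatile patterns co-vary, regardless of absolute intensity, which is why it suits fingerprints where total signal can differ between runs.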
Results on up- or down-regulation of specific volatiles between spoiled and OK samples are illustrated in the histograms in Figure 2. Mold and Mold-rancid-solvent samples were characterized by higher amounts of saturated and unsaturated aldehydes, linear alcohols, and carboxylic acids. Butanal, decanal, and lactones were higher in Mold, while pentanoic acid was more abundant in Mold-rancid-solvent. In addition, the Mold-rancid-solvent class exhibited higher amounts of 3-penten-2-one and 3-octen-2-one, responsible for earthy and musty notes.
Fatty acids and lactones dominate the differential distribution in Mold and Mold-rancid-solvent samples: these chemical classes are correlated to the fatty acid degradation promoted by different mold genera (e.g., Aspergillus, Penicillium, Rhizopus, Fusarium) developed during post-harvest (Memoli et al., 2017). Lactones are formed by fatty acid hydroxylation in odd or even positions, followed by β-oxidation and chain reduction. As a function of the hydroxyl group position, lactonization results in γ-, δ-, or ε-lactones (Romero-Guido et al., 2011). Their presence in moldy samples, likely contaminated by fungi, is in keeping with literature data; molds produce lactones by enzymatic catalysis through the degradation of hydroxy fatty acids and, when the native substrate is not available, they implement the hydroxylation step themselves (Romero-Guido et al., 2011) to enable lactone formation. Moreover, molds produce lipolytic enzymes (i.e., lipases and esterases) to promote TAG hydrolysis, resulting in higher amounts of high molecular weight lactones produced from the higher homologs released from TAGs (Memoli et al., 2017).

FIGURE 2 | Histograms illustrating the % response ratios for analytes with larger variation between spoiled hazelnut class-image pairs. OK samples are adopted as reference classes for comparisons. Error bars correspond to ±SD over analytical replicates.
POTENT ODORANTS AND KEY-AROMA COMPOUNDS RESPONSIBLE FOR HAZELNUTS AROMA QUALITY
In the conceptualization of molecular sensory science, i.e., sensomics (Schieberle and Hofmann, 2011), key-odorants are those odor-active compounds whose amount in the sample exceeds their OT (OAV > 1), and whose omission in aroma recombination experiments confirms their key role in eliciting the distinctive aroma, i.e., the aroma blueprint, of the original product. It has to be noted that recent findings do not exclude that so-called interferents, i.e., volatiles with lower or marginal odor activity, might play a role in modulating key-odorant perception while contributing to the overall aroma (Charve et al., 2011; Cordero et al., 2019; Bressanello et al., 2021).
In keeping with these observations, the next paragraphs report major research findings dealing with the identification of potent odorant patterns in raw and roasted hazelnuts. By literature selection, sensory-driven investigations were prioritized because of the intrinsic biological validation they offer concerning the complex phenomenon of olfactory perception.
The application of sensomic concepts was successful in this direction: Burdack-Freitag and Schieberle (2012) unraveled the aroma code of hazelnuts (raw and roasted) by focusing on different cultivars/origins. They covered the Italian production by studying Tonda Gentile Romana (Lazio, Italy), Tonda Gentile Trilobata (Piemonte, Italy), and Tonda Giffoni (Campania, Italy), together with a Turkish high-quality blend from the Akçakoca region.
Sensomic studies on the hazelnut sensometabolome (Kiefl, 2013) inspired the definition of a performance parameter known as the Limit of Odor Activity Value (LOAV), defined as the ratio of the respective OT to the analytical limit of quantification (LOQ). This concept, applied to aroma compounds of Tonda Gentile Trilobata hazelnuts, enabled the evaluation of the method's ability to detect all key odorants screened by GC-O and AEDA. In particular, in the reference study, the authors were able to quantify 30 potent odorants by SIDA with GC×GC-TOF MS, although just 15 of them achieved a LOAV ≥ 1, meaning that the method was unable to assign the appropriate relevance (i.e., key-odorant ranking) to some analytes present at a sub-ppb level.

TABLE 6 | Key-aroma compounds defined through the sensomics approach in raw and roasted hazelnut samples from cultivars Tonda Gentile Trilobata (G), Tonda Gentile Romana (R), and Akçakoca blend (A).
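The screening logic behind OAV and LOAV can be expressed in a few lines, as sketched below; the compound names, concentrations, thresholds, and LOQs are invented placeholders used only to show the bookkeeping, and the LOAV direction follows the definition stated above (OT over LOQ).

```python
# Illustrative OAV/LOAV bookkeeping; all numbers are placeholders.
# OAV  = concentration / odor threshold (key-odorant candidate when OAV > 1).
# LOAV = odor threshold / LOQ (method quantifies at the OT when LOAV >= 1).

odorants = {
    # name: (concentration ug/kg, odor threshold ug/kg, LOQ ug/kg)
    "filbertone":        (120.0, 5.0, 0.5),
    "2-acetylpyrroline": (0.8, 0.1, 0.4),  # sub-ppb OT, LOQ too high
}

for name, (conc, ot, loq) in odorants.items():
    oav = conc / ot
    loav = ot / loq
    verdict = "quantifiable at OT" if loav >= 1 else "LOQ above OT"
    print(f"{name}: OAV={oav:.1f}, LOAV={loav:.2f} ({verdict})")
```

The second placeholder compound illustrates the sub-ppb case discussed above: its odor threshold sits below the LOQ, so the method cannot rank its odor relevance.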
Roasted Hazelnuts Aroma
While raw hazelnuts are characterized by a generally weak aroma, the roasting process enhances many existing odor notes (e.g., nutty-fruity, sweet-caramel, and malty) by triggering several reactions on non-volatile precursors and primary metabolites, while generating new, yet intense, sensations like roasty, popcorn-like, and coffee-sulfuric. The chemical reactions triggered by the thermal treatment produce common chemical patterns in many foods; Dunkel et al. (2014), for example, reviewing the combinatorial odor codes of many foods, identified a strong network structure based on processing technologies (e.g., dry-roasting, fermentation, etc.). Volatile patterns and odor codes of dry thermally processed foods (roasted, deep-fried, baked) have many traits in common due to the activation of the Maillard reaction (Hofmann and Schieberle, 1998; Van Boekel, 2006; Göncüoglu Taş and Gökmen, 2017), which occurs between reducing sugars and amino acids, and of sugar degradation (i.e., caramelization).
Sensory tests were conducted with a trained panel of 24 judges (details in the original study). A QDA and projective mapping experiments were performed on raw and 23-min roasted samples to evaluate similarities and differences among samples. The QDA evaluated the aroma intensity of eight attributes (coffee-like, sulfury; nutty, fruity; smoky, phenolic; malty; sweet, caramel-like; roasty, popcorn-like; fatty; earthy, green) on a seven-point scale ranging from 0 to 3. For the projective mapping, panelists were instructed as follows: "Two samples should be placed very near if they seem identical, and two samples should be placed distant to one another if they seem different to you; this should be done according to your own criteria; do not hesitate to express strongly the differences you perceive by using the most part of the screen (total space)."
Aroma profiles of raw hazelnuts, as shown by the spider graph resulting from the QDA (Figure 3A), were very similar, as also stated by other authors (Seyhan et al., 2007). On the other hand, roasted samples showed some meaningful intensity differences. The roasted Tonda Gentile Trilobata from Piedmont (G) reported weaker coffee-like and roasty odor notes (Figure 3B) accompanied by an intense nutty perception. However, by comparing concentration profiles of key-odorants among all analyzed samples (Table 4), sensory differences and odorant patterns could not be easily correlated.
Projective mapping experiments showed a strong correlation, as a primary variable, to the roasting degree, resulting in three main clusters defined by raw hazelnuts, mild-to-optimal roasting, and over-roasting conditions. Moreover, some samples were classified into separate groups. Assessors intuitively ordered samples according to the roasting degree along the first dimension (i.e., the horizontal axis) of the projective map while using the second dimension (i.e., the vertical axis) to discriminate hazelnuts on a hedonic scale. Further experiments, recombining odorant patterns for optimally roasted hazelnuts in a sunflower oil medium, confirmed the pre-eminent role of the identified key odorants and suggested some synergy and suppression phenomena between them.
Accurate quantification of aroma compounds by SIDA, or suitable alternative approaches (Sgorbini et al., 2019), combined with GC×GC-TOF MS proved to be highly effective in decoding the aroma blueprint at a molecular level due to the very low LOAVs achievable (Nicolotti et al., 2013b). However, besides the distinctive patterns of key odorants that clearly evoke hazelnut aroma qualities, to date researchers have not found any other volatile capable of consistently explaining hedonic differences across different cultivars. Sensory-oriented strategies, accompanied by high-resolution separations (i.e., multidimensional analytical approaches) and data mining, represent the ultimate solution to unravel the combinatorial code of olfaction behind food sensory perception.
The role of so-called interferent volatiles should be better elucidated, since they can modulate odorant pattern perception, enabling differential activation of the Receptor Code. Moreover, the presence of chiral odorants in enantiomeric excess or in distinctive ratios might influence the hedonic profile; for example, the enantiomers of 5-methyl-(E)-2-hepten-4-one (i.e., filbertone) are characterized by different odor qualities and OTs. The next paragraph reports some insights on filbertone chiral recognition and functional properties.
Filbertone as Hazelnuts Individualist Key-Odorant
The configuration of a molecule is crucial in determining its aroma perception: enantiomers may differ in aroma intensity, as is the case for menthol and camphor, or even in the flavor itself, as for 3-methylthiobutanal (i.e., methional), where the (R)-configured molecule elicits the typical odor of cooked potatoes, while the (S)-configured stereoisomer is odorless (Weber and Mosandl, 1997; Zawirska-Wojtasiak, 2006). Such characteristics are fundamental for hazelnuts too, since filbertone (i.e., 5-methyl-(E)-2-hepten-4-one), the key-odorant contributing to the nutty aroma (Jauch et al., 1989; Alasalvar et al., 2004; Burdack-Freitag and Schieberle, 2012), might be present as the R or S enantiomer owing to the chiral center at C5 (Puchl'ová and Szolcsányi, 2018).
Filbertone amounts vary greatly among cultivars; higher levels were reported for the Tonda Gentile Trilobata cultivar independently of harvest area (Piedmont, Italy vs. Georgia) (Cialiè Rosso, 2020), justifying the fact that this cultivar is particularly appreciated by consumers and regarded as the gold standard in the confectionery industry. Jauch et al. (1989) were the first to effectively discriminate R- and S-filbertone by enantioselective GC (ES-GC), while also describing marked olfactory differences in odor intensity and quality, with the S-enantiomer characterized by metallic, fatty, and pyridine perceptions and the R-enantiomer by buttery and chocolate-like notes; moreover, the R-enantiomer has a 10-fold lower odor threshold. The R- and S-enantiomers are not equally abundant in the kernel: Ruiz del Castillo et al. (2003) investigated the enantiomeric distribution in both raw and roasted hazelnuts, obtaining, for raw hazelnuts, between 70 and 90% enantiomeric excess (ee) of S-filbertone depending on the variety, while earlier figures were lower (e.g., Turkish hazelnuts showed 54-56% ee, whereas Italian ones revealed 62-63% ee) (Jauch et al., 1989). Roasted samples, instead, exhibited an ee of only 17% for the S-enantiomer; nevertheless, the filbertone concentration increased 35-fold during roasting. The differential increase of the R-enantiomer during roasting is likely due to a thermal pathway (Güntert et al., 1991; Blanch and Jauch, 1998) whose precursors are still unknown.
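The enantiomeric-excess figures above follow directly from ES-GC peak areas via ee = |A_R - A_S| / (A_R + A_S) x 100. The sketch below uses hypothetical peak areas scaled to reproduce the reported 70% (raw) and 17% (roasted) values.

def enantiomeric_excess(area_r, area_s):
    """ee (%) of the major enantiomer from the two ES-GC peak areas."""
    return abs(area_r - area_s) / (area_r + area_s) * 100.0

raw = enantiomeric_excess(area_r=15.0, area_s=85.0)      # S-enantiomer in excess
roasted = enantiomeric_excess(area_r=41.5, area_s=58.5)  # excess shrinks on roasting
print(f"raw kernels: ee = {raw:.0f}% (S); roasted: ee = {roasted:.0f}% (S)")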
CORRELATIONS BETWEEN PRIMARY METABOLITES AND AROMA COMPOUNDS: THE CONCEPT OF AROMA POTENTIAL
The concept of aroma potential was first introduced by Cialiè Rosso et al. (2018) in a study focused on the evolution of the hazelnut volatilome along shelf-life. The concept arose from the observation that post-harvest drying conditions appeared fundamental to inactivate exogenous and endogenous enzymes, providing more stable kernels throughout their shelf-life, independently of storage conditions (e.g., atmosphere composition, temperature, and time). The evolution of volatile patterns indicated that lipid oxidation and spoilage occurred more markedly in samples exposed to less efficient drying (i.e., sun-drying vs. industrial drying at low temperatures) and stored under a normal atmosphere at ambient temperature.
Researchers analyzed volatile patterns from hazelnuts roasted at lab scale (160 °C, 15 min) after storage under specified conditions (see Section Effect of Post-Harvest and Storage on Volatilome Signatures) (Cialiè Rosso et al., 2018). Secondary products of hydroperoxide cleavage (i.e., linear saturated aldehydes from C5 to C10; the unsaturated aldehydes (E)-2-heptenal, (E)-2-octenal and (E)-2-decenal; the short-chain fatty acids pentanoic, octanoic, and nonanoic acid; and linear alcohols from C5 to C8) were close to the method LOD in freshly roasted hazelnuts (T0) or in those stored in a modified atmosphere. An increasing trend over storage time was observed for kernels stored in a normal atmosphere, with temperature (5 or 18 °C) acting as an additional stress factor.
These results suggested a correlation of non-volatile precursors with the potent odorants characterizing roasted hazelnut aroma; as for the lipid fraction, degradation reactions and kernel viability would have had an impact on the primary metabolites known to form key aroma substances under roasting conditions.
The Pearson correlation coefficient (r) was adopted to evaluate positive or negative correlations between primary metabolites and key informative volatiles. The study applied advanced fingerprinting strategies combining untargeted and targeted feature information from GC×GC-TOF MS analyses of hazelnut polar extracts subjected to oximation-silylation, covering amino acids, reducing sugars, polyols, and organic acids (Cialiè Rosso et al., 2021). The samples analyzed were from Tonda Gentile Trilobata, Tonda Gentile Romana, and Ordu. Results showed good correlations (p-values < 0.05) within primary metabolites, also indicating that Tonda Gentile Trilobata and Tonda Gentile Romana samples showed higher amounts of some non-volatile precursors and primary metabolites compared to the Ordu blend. On the other hand, interesting correlations were established between primary metabolites and volatiles eliciting aroma qualities. These correlations were further explored and tested for linearity. The coefficients of determination (R²) of the linear regression were estimated for the relationship between the precursor(s) as the independent variable (x) and key volatiles as the dependent variable (y). Figure 4 reports regression functions between 3-methylbutanal and leucine (Leu) (R² = 0.9577), 2-methylbutanal and isoleucine (Ile) (R² = 0.9284), 2,3-butanedione and 2,3-pentanedione and fructose/glucose derivatives (R² = 0.8543 and 0.8860), 2,5-dimethylpyrazine and alanine (Ala) (R² = 0.8822), and pyrroles and the sum of ornithine (Orn) and alanine (Ala) (R² = 0.8604).
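A minimal sketch of this correlation step is given below, computing Pearson's r and the R² of a linear fit between one precursor and one volatile; the six data points are placeholders, not the published measurements.

import numpy as np
from scipy import stats

leucine = np.array([1.2, 1.8, 2.1, 2.9, 3.4, 4.0])        # precursor (a.u.)
methylbutanal = np.array([0.5, 0.9, 1.0, 1.5, 1.7, 2.1])  # volatile (a.u.)

r, p_value = stats.pearsonr(leucine, methylbutanal)
fit = stats.linregress(leucine, methylbutanal)            # slope, intercept, r, p, stderr
print(f"Pearson r = {r:.3f} (p = {p_value:.4f}); linear fit R^2 = {fit.rvalue**2:.4f}")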
These results, although preliminary due to the limited sample set available, laid a solid foundation for the aroma potential concept: a comprehensive yet quantitative fingerprinting of the hazelnut primary metabolome could be used as an effective tool to predict the aroma potential of hazelnuts (Cialiè Rosso et al., 2018).
CONCLUSIONS AND FUTURE PERSPECTIVES
Modern investigation approaches inspired by "omics" strategies have the potential to address most of the challenges posed by the study of complex food metabolomes/volatilomes in relation to biological phenomena (Miguel et al., 2011; Capozzi and Bordoni, 2013). In the case of hazelnuts, the effects of functional variables at the basis of phenotype expression, reaction to extreme climate events and local pedo-climatic conditions, changes during storage, odor quality, and hedonic profile have been observed, interpreted, and in several cases rationally modeled after effective and reliable exploration of the compositional complexity of the entire volatilome.
Sensory-guided strategies have identified key aroma patterns evoking the unique and distinctive hazelnut flavor (Hofmann and Schieberle, 1995). However, the role of ancillary odorants, i.e., those that do not exceed their odor perception threshold in the sample but that, by interacting with ORs in the olfactory epithelium, might modulate the overall aroma perception with effects on hedonic properties, is still underexplored.
Moreover, efforts are needed to better understand volatilome expression under the effect of key functional variables known to have a negative impact on sensory quality. Besides the autoxidation of lipids, responsible for the generation of potent odorants with unpleasant notes [e.g., (E,E)-2,4-nonadienal and (E,E)-2,4-decadienal, deep-fried; (Z)-2-octenal and (Z)-2-nonenal, fatty; hexanal, green; octanal, fatty, soapy], insights are expected on the enzymatic degradation of lipids.
The evolution of free and esterified fatty acids along shelf-life might have an impact on lipid oxidation stability. A better understanding of lipase/esterase activity could guide targeted actions to inhibit enzyme activity during post-harvest and storage (Cialiè Rosso et al., 2021).
The interconnection between primary and specialized non-volatile metabolite patterns and the volatiles generated during storage and industrial processing (i.e., dry-roasting) should be better explored. In this context, interesting outcomes could support the development of predictive models for odorant formation and aroma potential expression (Cialiè Rosso et al., 2018) to be used for decision-making at the industry level. The industrial need for effective solutions to practical problems related to hazelnut quality management urges the adoption of multidisciplinary yet systemic approaches [i.e., omics strategies (Wishart, 2008; Ulaszewska et al., 2019)], giving access to a higher level of information and a better understanding of complex phenomena (Nanda and Das, 2011).
AUTHOR CONTRIBUTIONS
SS: data curation, validation, and writing-original draft, review and editing. FS, MC, and EL: writing review and editing. CB: funding acquisition, conceptualization, supervision, and writing review and editing. NS: project administration, conceptualization, and writing review and editing. GC and GG: conceptualization and writing review and editing. CC: funding acquisition, project administration, conceptualization, supervision, and writing-original draft, review and editing. All authors contributed to the article and approved the submitted version. | 2022-03-04T14:36:02.683Z | 2022-03-03T00:00:00.000 | {
"year": 2022,
"sha1": "cacbeb641da4dff655aa7a3dabf7f2da38412ddd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "cacbeb641da4dff655aa7a3dabf7f2da38412ddd",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231598541 | pes2o/s2orc | v3-fos-license | Uncovering dynamic evolution in the plastid genome of seven Ligusticum species provides insights into species discrimination and phylogenetic implications
Ligusticum L., one of the largest genera in Apiaceae, encompasses medicinally important plants whose taxonomic statuses have proved difficult to resolve. In the current study, the complete chloroplast genomes of seven of the best-known medicinal plants in Ligusticum are presented. The seven genomes ranged from 148,275 to 148,564 bp in length, with highly conserved gene content, gene order and genomic arrangement. A shared dramatic decrease in genome size resulting from a lineage-specific inverted repeat (IR) contraction, which could potentially be a promising diagnostic character for taxonomic investigation of Ligusticum, was discovered, without affecting the synonymous substitution rate. Although higher variability was uncovered in hotspot divergence regions that were unevenly distributed across the chloroplast genome, a concatenated strategy for rapid species identification is proposed because separate fragments provide inadequate variation for fine resolution. Phylogenetic inference using plastid genome-scale data produced a concordant topology with robust support values, which revealed for the first time that L. chuanxiong has a closer relationship with L. jeholense than with L. sinense, and that L. sinense cv. Fuxiong is closer to L. sinense than to L. chuanxiong. Our results not only furnish concrete evidence for clarifying Ligusticum taxonomy but also provide a solid foundation for further pharmaphylogenetic investigation.
Deletions were specifically observed in the LSC of L. sinense, prominently responsible for its genome contraction (they were verified, together with other indels larger than 20 bp, by PCR amplification and Sanger sequencing; Supplementary Table S3). The overall GC content of these species was almost equivalent and showed an uneven distribution across the whole CP genome, with an average GC content of approximately 37.60% that was most enriched in the IRs (47.78 to 47.80%), followed by the LSC (about 35.99%), and lowest in the SSC (31.09 to 31.14%); this pattern is attributable to the location of transfer RNA (tRNA) and ribosomal RNA (rRNA) genes, the GC content of which reached 55% (Supplementary Table S4). In addition, a similar GC percentage (~46.79%) in homologous protein-coding regions (CDS) among these species was discovered, consistent with the identity of the overall GC content. Within the CDS, the AT content of each position in the triplet codon displayed the canonical bias of the CP genome that distinguishes it from nuclear and mitochondrial DNA 45: a higher AT percentage was observed at the third position in the involved species, up to 70.30%, with a sharp decrease at the second and first positions (Supplementary Table S4). Compared to noncoding regions, coding sequences accounted for ~56.90% and showed more conserved features, encoding an identical set of 126 functional genes (Fig. 1), of which 113 were unique, comprising 79 protein-coding genes, 30 tRNAs and 4 rRNAs, with a coincident genomic organization in terms of gene order and orientation (Table 1). Of these, 13 genes were duplicated, including three protein-coding genes, four rRNA genes and six tRNA genes, all of which resulted from IR duplication. Similar to many angiosperms, introns were discovered in 19 genes, comprising 12 protein-coding genes and 6 tRNA genes. Among them, clpP, ycf3 and rps12 contained two introns; in particular, rps12 is a trans-spliced gene, of which the 5′ end is located in the LSC region, whereas the two replicated 3′ ends are contained within the IRa and IRb regions, respectively. In addition, trnK-UUU possessed the longest intron, which contained matK.
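As a worked illustration of the GC-content and codon-position AT-bias computations reported above, the Python sketch below processes a toy in-frame CDS; the sequence is a placeholder, not a real plastid gene.

def gc_content(seq):
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def at_by_codon_position(cds):
    """AT percentage at codon positions 1-3 of an in-frame coding sequence."""
    values = []
    for offset in range(3):
        bases = cds.upper()[offset::3]
        values.append(100.0 * (bases.count("A") + bases.count("T")) / len(bases))
    return values

cds = "ATGGCTTTAGCAAAT" * 10  # hypothetical in-frame CDS (length divisible by 3)
pos_at = at_by_codon_position(cds)
print(f"GC = {gc_content(cds):.1f}%; AT at codon positions 1-3 = "
      + ", ".join(f"{v:.1f}%" for v in pos_at))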
Codon usage and RNA editing sites. Since usage bias of synonymous codons is widespread in organisms, it plays a vital role in evolution. Knowledge of codon preference can greatly help in understanding the selection pressure on gene expression 46,47 and improve translation efficiency using major codons 48. Here, beyond the major initiator codon, alternative start codons were discovered in two distinct genes in these seven species: ACG was used as the start codon for ndhD and GTG for rps19. The use of alternative initiation codons is a ubiquitous phenomenon in eudicot plants, and previous reports have also pointed out that RNA editing can restore ACG to the conventional start codon 49,50. Overall, the 79 distinct protein-coding genes in each of the seven species were composed of 23,446-23,482 triplet codons. Of the encoded amino acids, in all presented species, the most abundant was leucine (4.14-4.17%) and the least abundant was cysteine (0.24%), which is similar to most reported CP genomes of angiosperm plants. The relative synonymous codon usage (RSCU) value analysis demonstrated that almost every amino acid with synonymous codons showed a usage bias (Supplementary Table S5, Supplementary Fig. S1). Interestingly, A- or T-ended codons accounted for nearly half of the synonymous codons, with commonly higher RSCU values in contrast to the other half ending with C or G. Possibly, those reported preferences are driven by mutational pressure reflected in the A/T composition bias of the CP genome [51-53]. RNA editing events have been demonstrated universally in CP genomes since they were first reported 54. Regarding the current CP genomes, a total of 56 potential RNA editing sites from 32 genes were predicted in each species (Supplementary Table S6). In all seven CP genomes, the conversion of S to L occurred with predominant frequency; by contrast, the conversion of R to C occurred with the lowest frequency, in accordance with a previous investigation showing that the change of S to L becomes more frequent as the number of amino acids increases 55.

Repeat structure and simple sequence repeat analyses. Simple sequence repeats (SSRs), known as microsatellite sequences, consist of tandem short repeat units ubiquitously distributed across the CP genome, mostly with the nature of uniparental inheritance and non-recombination 56. Owing to their high degree of polymorphism, co-dominance and efficiency of amplification, SSRs are valuable molecular markers for mining population genetics and phylogenetic studies 57. On average, 46 SSRs (from 42 to 49) of two motif types were identified in each species (Supplementary Table S7, Supplementary Fig. S2). SSR motifs, present at heterogeneous frequencies, were predominantly rich in A/T bases. Of these SSR repeat units, 13% were detected in protein-coding regions. To capture the dynamic evolution of CP genomes within Ligusticum and Apioideae, the SSR characteristics of available CP genomes of representative plants in Apioideae were also investigated. Interestingly, C/G units had higher variability within Ligusticum, and from the early-diverging lineage of Apioideae (Daucus carota) to Peucedaneae (Angelica gigas), SSRs exhibited an increasing number and length, primarily in the mononucleotide A/T motif rather than other motifs. Furthermore, the differences in SSR motif numbers among those species further demonstrated the potential of using cpSSR markers in genetic analysis among genera of Apioideae.
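To make the RSCU computation described in the codon-usage section above concrete, the sketch below applies RSCU = k x n_codon / sum(n) over the k synonymous codons of one amino-acid family; the codon counts are illustrative only.

from collections import Counter

LEU_CODONS = ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"]  # leucine family (k = 6)

def rscu(codon_counts, family):
    """RSCU for each codon of one synonymous family; > 1 means preferred."""
    total = sum(codon_counts.get(c, 0) for c in family)
    k = len(family)
    return {c: k * codon_counts.get(c, 0) / total for c in family}

counts = Counter({"TTA": 60, "TTG": 30, "CTT": 25, "CTC": 5, "CTA": 20, "CTG": 10})
for codon, value in rscu(counts, LEU_CODONS).items():
    print(f"{codon}: RSCU = {value:.2f}")  # A/T-ended codons exceed 1 here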
On average, 39 long repeats, accounting for ~0.8% of the CP genomes, were detected in the presented species (Supplementary Table S8, Supplementary Fig. S2), with an apparent species-specific distribution, and none were located in protein-coding regions. In contrast to SSRs, the number of the four kinds of long repeats (forward, reverse, complementary and palindromic) in Ligusticum and Apioideae displayed significant changes in certain species, ranging from 31 to 89 without a constant pattern. For instance, forward repeats were significantly enriched in L. chuanxiong cv. Gansu compared to its relatives, the proportion of large-sized repeats and palindromic repeats sharply increased in L. tenuissimum, and complementary repeats occasionally disappeared in Ligusticum. Moreover, the repeats of L. tenuissimum were wholly distributed in regions adjacent to the IR boundaries, which likely led to its IR expansion; in line with previous reports, much evidence shows repeat sequences contributing to plastome structural variation.
IR contraction and expansion. The absence of one copy of three genes commonly duplicated and situated in the vicinity of the junction sites of the IR and LSC was detected, deserving a more thorough examination. Subsequently, the evolutionary trajectories of the contraction and expansion of the IR within Ligusticum (Supplementary Fig. S3) and Apioideae (Fig. 2) were investigated. The border of SSC/IRa, crossed by ycf1, maintained a relatively conserved state, in which a nearly constant fluctuation of ca. 100 bp was observed throughout the evolution of Apioideae. Symmetrically, the SSC/IRb boundary, located in the pseudogene fragment ψycf1 and neighboring ndhF, showed an erratic shift, resulting in inconsistent deviation of the border relative to ndhF and fluctuation in ψycf1 length. Notably, multiple dynamic expansions and contractions indicated that the LSC/IRb junction moved from spanning rps19 in D. carota (the putative ancestral IRb/LSC boundary) to rpl2 in Anethum graveolens, followed by lineage-specific IR contraction to ycf2, causing Seselinae and Peucedaneae to lack one copy of ycf2, whereas both copies were present in L. tenuissimum. Considering the phylogenetic topology, in which L. tenuissimum was a sister to L. chuanxiong within the clade comprising species of the tribes Seselinae and Peucedaneae, and the enhanced expansion footprint in L. tenuissimum compared to the ancestor extending rps19 into the IR, we remain parsimonious in proposing an independent IR expansion recurring in L. tenuissimum. In addition, within the seven presented Ligusticum spp., an ongoing shift of the LSC-IR junction and an alleviated dislocation of the SSC were demonstrated, whereas severe reduction hardly occurred.
Furthermore, we investigated the synonymous substitution rate of the genes ycf2 and rpl2, which were duplicated and de-duplicated because of IR contraction and expansion. Exceeding our expectation, the Ks value was highly variable among lineages (Supplementary Fig. S4) but showed no significant correlation with the copy number variation resulting from IR contraction and expansion, which usually accelerates synonymous substitution 58,59.

(Caption to Fig. 2: genes transcribed clockwise and counterclockwise are presented above and below the components, respectively; the distance between the end or start coordinate of a given gene and the border sites is indicated; the black box in the phylogenetic tree denotes branch-specific IR contraction and the gray box branch-specific IR expansion; features are not to scale.)

Comparative genomic divergence and structure arrangement. With a view to subsequently taking full advantage of the hidden mutation information in CP genomes for assisting phylogenetic inference and species identification, an in-depth investigation of the genomic structure and sequence divergence among Ligusticum and Apioideae was performed. Exceptionally, the CP genome sequences of L. chuanxiong cv. Yunnan and L. sinense cv. Fuxiong were identical, even though the two are definitely two different species: one diploid with flowers and seeds and one triploid reproduced solely by means of vegetative propagation 29. Nucleotide diversity (Supplementary Fig. S5) and sequence identity were then examined at the genome-scale level. The nucleotide diversity value (Pi) across the genomes ranged from 0 to 2.5%, and nearly 45.6% of the compared regions showed 100% identity (Fig. 3). A higher divergence in the LSC region and lower divergence in the IR were demonstrated, implying the general conservatism of the IR in contrast to other regions, in congruence with the characteristics of the majority of angiosperms. Furthermore, the most divergent loci were located in petA-psbJ-psbL, with mean Pi = 0.023, and six additional hypervariable regions (Pi > 0.004) were determined, including pebH-petB, trnL-trnH-psbA, accD-psaI-ycf4, ycf4-cemA, psbH-petB and ycf1, providing potential candidate regions for genus-specific barcode marker mining. Recently, increasing research has revealed wide prospects for using indel markers derived from CP genomes for species identification and authentication of herbs 22,25,60. Here, insertions and deletions were detected mainly in noncoding regions, especially in psbB-petB and trnM-psbD. Intriguingly, indels were also found to occur frequently in the ndhB and ycf2 coding regions next to the IRb/LSC boundary (Supplementary Fig. S6), which indicates intensive changes around the corresponding junctions; this is compatible with common phenomena found in closely related species 61. Despite conserved synteny in gene order and orientation of CP genomes among Apioideae (Supplementary Fig. S7), a higher sequence divergence and length variation were disclosed, specifically in trnL-psbA and the regions between rpoB and psbD, where protein-coding genes are rarely situated. In contrast, the IRb/LSC boundary of Apioideae was more conserved (Supplementary Fig. S8) compared to the Ligusticum genus, considering that the comparison among Apioideae used genetically distantly related species; if the same underlying mechanisms were involved within Apioideae, a high divergence would accordingly be detected.
Phylogenetic analyses.
To address the relationships among the seven medicinal species, phylogenetic analysis was carried out using entire CP genome sequences because of the extreme synteny among those CP genomes and the limited parsimony-informative characters in the CDS of the 79 shared protein-coding genes, where scarcely 129 such sites were found. Three inference methods, Bayesian inference (BI), maximum likelihood (ML) and maximum parsimony (MP), were employed, with Seseli montanum as the outgroup. The topologies generated from whole genomes were highly concordant, and almost every node was highly supported, regardless of the method used. In all constructed trees (Fig. 4), Ligusticum comprised two separate subclades, in which L. tenuissimum presented as a sister clade to the remaining seven taxa. L. chuanxiong cv. Gansu and L. chuanxiong demonstrated a closer relationship, together forming a clade sister to L. officinale with a robust support value (~100%) for all methods, and this entire group was clustered as the sister clade to L. jeholense, followed by the clade of L. sinense cv. Fuxiong and L. chuanxiong cv. Yunnan. To further verify the phylogenetic relationships obtained, the phylogenetic signal across the genome was measured. As expected, this highly consistent tree was supported by phylogenetic signal analysis based on delta site-wise log-likelihood scores (ΔSLS), with all four strong sites (absolute ΔSLS > 0.5) and 81.3% of weak sites (absolute ΔSLS ≤ 0.5) in favour; however, a low maximum value of at most 2.3 was observed (Supplementary Fig. S9). Additionally, extremely short branch lengths within those seven species were observed. The increasing availability of CP genomes for Apiaceae provides unprecedented resources to precisely clarify phylogenetic relationships and investigate the taxonomic status of Ligusticum within Apioideae via phylogenomic analyses. Here, 70 shared protein-coding sequences from 37 species, containing 4196 parsimony-informative characters, were used, with L. chuanxiong representing the seven species owing to the limited parsimony information in their CDS. Phylogenetic trees were constructed using maximum parsimony, maximum likelihood and Bayesian inference, with Panax ginseng as the outgroup. The phylogeny produced from each analysis was topologically identical, and most nodes agreed well with previous relevant studies. Notably, Ligusticum formed a monophyletic clade that was placed within Seselinae and allied with a group comprising S. montanum and Coriandrum sativum, with weak support in ML and MP but moderate support in BI. An ambiguous circumscription between Seselinae and Peucedaneae was depicted in the present cladogram, similar to earlier studies. In accordance with previous phylogenetic results based on CP genomes, the placement of Glehnia littoralis within the genus Angelica was moderately supported 22. Moreover, two Prangos plants were weakly clustered as a sister clade to the group consisting of Seselinae and Peucedaneae, which requires further reexamination.
Discussion
Since the first CP genome of Apiaceae was reported 63, with the rapid development of sequencing technology over the past decades, approximately 50 CP genomes have been determined within Apiaceae. However, plastid sequences for Ligusticum, a genus whose taxonomic scheme and placement are among the most difficult to clarify in Apiaceae, are scarcely available. The seven newly established CP genomic sequences significantly enrich the molecular resources for Ligusticum. The conserved features of gene content and organization, gene orientation, and intron number among those CP genomes were revealed to be similar to the variability within previously reported species of Camellia 64, Panax 65 and Epimedium 66. Despite the long time that has passed since their divergence from D. carota, those genomes share an identical gene set with a similar organization, further indicating structural conservation, in contrast to Circaeasteraceae 67. Although higher nucleotide variability was present in certain divergence hotspot regions based on the diversity investigation and complete-genome pairwise alignment, the overall variation exhibited a conserved tendency. Strikingly, in our study, the plastid genome sequences of two species were completely identical, which has barely been reported and may be caused by recent divergence. Overall, a moderate divergence among sequences was demonstrated, compared to several recently reported genera 44,61,68-70. Of note, petA-psbJ-psbL, the most divergent region revealed in the present study, has also been demonstrated to be highly divergent in many genera 68,69,71,72. Even though high diversity was observed in trnH-psbA and ycf1, which have been extensively suggested as universal barcodes 73 and depicted as having the capacity to provide sufficient variation for taxon discrimination in angiosperms 72,74, no single hotspot alone was enough to distinguish these seven herbs. Thus, with plastid-scale analyses, we proposed a combination strategy of those hotspot regions to enable us to definitively distinguish these species and provide comprehensive resolution, which prior studies based on a single fragment were not able to achieve. Intriguingly, in the present study, we noticed the dramatic branch-specific CP size reduction in the subclade consisting of Peucedaneae and Seselinae, which includes a considerable number of famous oriental medicinal plants. Subsequently, the lack of one copy of the gene clusters located at the IRa/LSC boundary, attributed to IR contraction, was observed. Compared with the IR type of D. carota, rpl2, rpl23, trnL-CAU and ycf2 were lost in A. gigas, L. chuanxiong and S. montanum, implying a branch-specific IR shift. For certain species within this branch, multiple rounds of contraction and expansion occurred, resulting in CP genome re-expansion with an IR even larger than that of the ancestral type; for instance, the IR border of L. tenuissimum re-extended to rpl22, which led to a duplication of rps19 that primitively spanned the junction in D. carota, and could potentially be a crucial character in Ligusticum taxonomy. As early as the last century, the frequency fluctuation and large size shifts of the IR within Apioideae, especially within the apioid superclade, were noticed and used to reconstruct phylogenetic relationships 75, which re-placed L. officinale in the Angelica group based on the restriction map.
Likewise, IR variations in Berberidaceae 76,77, conifers 78,79, legumes 59 and ferns 80 were found and used to reconstruct their phylogenies. The rapid development of sequencing technology has allowed us to establish the complete CP genome, which enables the endpoint of the IR to be precisely confirmed and defined. Indeed, IR shifts within Ligusticum were examined at the nucleotide level for the first time and, to the best of our knowledge, such a large-scale shift within one genus has not been reported in the apioid superclade. Large-scale expansions and contractions (over 1 kb) of the IR across Apioideae were revealed to be primarily confined to the LSC/IRb boundary, displaying a lineage-specific flux, whereas the SSC/IR boundary exhibited a constant character similar to most non-monocot angiosperms 81,82. Recently, as the availability of plastid genomes increases, IR border shifts have become more frequently reported, yet large-scale expansion and contraction of the IR are considered uncommon phenomena 81, principally observed in heterotrophs 83 and a few autotrophs 84. However, unlike the cases of rearrangement that frequently occurred in CP genomes whose IR shifted remarkably, for instance, Pelargonium 85, conifers 79,86, Clematis 87, legumes 59, Asarum 88, etc., the genome structure and gene order within the apioid superclade are significantly conserved 81. Hence, the large-scale alteration of the IR within this clade is of great benefit for studying the underlying mechanism causing large-scale expansion and contraction of the IR 81.
In comparison to small alterations of the IR, which are supposed to result from gene conversion 89, large-scale IR alterations are attributed to double-strand breaks along with illegitimate recombination, possibly mediated by repetitive sequences 75,89,90. Herein, however, repetitive motifs were observed at the LSC/IRb boundary of the seven new plastomes, in accordance with some previously reported plastomes in Apiaceae 81, although without direct evidence supporting those hypotheses. Nevertheless, around the LSC/IRb junction of L. tenuissimum, poly(A) and poly(T) tracts were discovered. Furthermore, according to previous reports, a novel fragment, the derivation of which remains unsettled, was detected residing between the LSC and IRa in the CP genomes of the apioid superclade and might contribute to the IR shift in Apiaceae 81; a ~500 bp insertion was recovered in our seven plastomes but was absent in L. tenuissimum. In Petroselinum, the corresponding homologous regions are highly similar to the intergenic sequence of cob-atp4 of the mitochondrial genome 81 and were postulated to have been transferred from the mitochondrial genome; however, the insertion fragment of the seven Ligusticum plastomes did not have a specific homologous region in mitochondrial DNA. To be prudent, the aforementioned mechanisms presumed responsible for the IR shift should not be precluded without further convincing evidence. Previously, a decelerated synonymous rate of duplicated genes caused by IR fluxes was uncovered 58,59, whereas herein no significant change in the synonymous rate in Ligusticum was manifested, similar to the ycf2 retention event in ginkgo 91.
Combining 37 available CP genomes, a phylogenetic analysis of Apiaceae was carried out; the resulting phylogenetic trees were highly similar to the topological structure recently reported based on CP genomes, except for one node with a weak support value. Considering the limited variation in protein-coding regions within Ligusticum, phylogenetic inference for the eight Ligusticum species was performed based on the entire genome. L. tenuissimum was a sister to the clade comprising the remainder, with a high support value, in line with the relationship deduced from morphological cladistic analysis using 40 characters 9. Here, L. chuanxiong cv. Gansu and L. chuanxiong were clustered as a sister clade to L. officinale, verifying the closer relationship of L. chuanxiong cv. Gansu and L. chuanxiong, coincident with the origin investigation by herbal textual research, i.e., they belong to the western type of Chuan-Xiong 30. In addition, we further confirmed the former revision and repositioning of L. officinale in the genus Ligusticum based on molecular cytogenetic 92 and barcoding 21,23 analyses, providing molecular evidence for the ancient record that L. officinale was introduced from China. Above all, a closer relationship of L. chuanxiong to L. jeholense rather than L. sinense was revealed for the first time. Therefore, we here approve the original nomenclature of L. chuanxiong presented by Qiu 20 instead of the revision by Fu 26. Previous studies based on karyotype suggested that L. sinense cv. Fuxiong was a triploid of L. chuanxiong 29, while, as another foremost discovery, we provide the distinct result that L. sinense cv. Fuxiong has a closer relationship with L. chuanxiong cv. Yunnan, with affinity to L. sinense rather than to L. chuanxiong. We presume that the sequence identity of L. sinense cv. Fuxiong and L. chuanxiong cv. Yunnan resulted from incomplete lineage sorting or a recent divergence event, owing to the triploid event of L. sinense cv. Fuxiong, the ancestor of which probably derived from Yunnan, demonstrated to be the most diverse center of Ligusticum in China 7. Furthermore, L. sinense cv. Fuxiong or its ancestor was introduced and domesticated in Jiangxi. Our present analyses simultaneously encompassed distinct original plants bearing nomenclatural types of L. chuanxiong, which previous researchers did not realize are actually distinct. In light of the present findings, we strongly support the hypothesis that ancient Chuan-Xiong was independently derived from two regional groups of original plants with different distributions and cultivation centers, one in the north of China, including L. chuanxiong and L. chuanxiong cv. Gansu, and the other mainly in the south, including L. sinense cv. Fuxiong and L. chuanxiong cv. Yunnan. Our data also elucidate a relatively distant relationship between L. chuanxiong cv. Yunnan and L. chuanxiong, suggesting that the scientific name of L. chuanxiong cv. Yunnan should be revised. Furthermore, the phylogenetic trees we obtained are based on CP inheritance, and our assumption could be further scrutinized by integrating nuclear and mitochondrial data.
Materials and methods
Plant materials, DNA extraction and sequencing. The seven Ligusticum species used in this study were collected from different places. The detailed collection and identification information for each sample is listed in Supplementary Table S1. The following sequence features were analyzed: (1) GC content was calculated using an in-house Perl script; (2) SSRs were detected using MISA 98 with the following settings: mononucleotide, ten; dinucleotide, six; trinucleotide, five; tetranucleotide, five; pentanucleotide, five; hexanucleotide, five; interruptions, one hundred (a minimal regex sketch of these thresholds is given after this list); (3) codon usage bias was calculated with MEGA 7.0, and the RSCU (relative synonymous codon usage) ratio with a threshold value of 1 was applied to estimate the usage preference of synonymous codons; (4) direct (forward), reverse, complement and inverted (palindromic) dispersed repeats were examined via the online program REPuter 99 with the following parameters: Hamming distance set to 3, minimum and maximum repeat sizes of 30 bp and 500 bp, respectively, with redundant repeats manually removed; (5) RNA editing sites were predicted through the online program Predictive RNA Editor for Plants (PREP-Cp) 100 with a cutoff value of 0.8.
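The following regex sketch mirrors the MISA thresholds quoted in point (2) above (mono >= 10, di >= 6, tri/tetra/penta/hexa >= 5 repeat units); it is a simplification, since MISA additionally deduplicates nested matches (e.g., a mononucleotide run re-detected as a dinucleotide repeat) and handles compound SSRs.

import re

MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}  # unit length -> minimum repeats

def find_ssrs(seq):
    """Naive perfect-SSR scan; nested matches are not deduplicated."""
    seq = seq.upper()
    hits = []
    for unit_len, min_rep in MIN_REPEATS.items():
        pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (unit_len, min_rep - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(2), len(m.group(1)) // unit_len))
    return hits

demo = "GGG" + "AT" * 7 + "CCC" + "A" * 12 + "TTT"
for start, motif, repeats in find_ssrs(demo):
    print(f"pos {start}: ({motif}){repeats}")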
Genome comparison. Prank 101 and MAFFT version 7 102 were used for multiple sequence alignment with default parameters. The sequence identity of the CP genomes was compared and visualized using mVISTA 103, with the annotation of L. chuanxiong taken as the reference. Colinearity and rearrangement of the CP genomes were determined by Mauve 104 with default parameters; when running Mauve, L. chuanxiong cv. Gansu and A. graveolens were taken as references for detection within Ligusticum and Apiaceae, respectively. The nucleotide diversity value was calculated by DnaSP v6 105 using a sliding window length of 600 bp and a 200 bp step size. Pairwise synonymous substitution values were examined based on BioPerl and normalized following Chaw's method 91. The Wilcoxon test was implemented to determine the significance level.
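A bare-bones version of the sliding-window nucleotide-diversity scan described above (600 bp window, 200 bp step) is sketched below; the toy alignment stands in for the real chloroplast alignment, and gap handling is simplified relative to DnaSP.

from itertools import combinations

def pi(block):
    """Average pairwise differences per site across aligned sequences."""
    pairs, diffs, sites = 0, 0, len(block[0])
    for a, b in combinations(block, 2):
        diffs += sum(1 for x, y in zip(a, b) if x != y and x != "-" and y != "-")
        pairs += 1
    return diffs / (pairs * sites)

def sliding_pi(alignment, window=600, step=200):
    for start in range(0, len(alignment[0]) - window + 1, step):
        yield start, pi([s[start:start + window] for s in alignment])

alignment = ["ACGT" * 300, "ACGA" * 300, "ACGT" * 150 + "TCGT" * 150]
for start, value in sliding_pi(alignment):
    print(f"window {start}-{start + 600}: Pi = {value:.4f}")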
Phylogenetic analysis. In total, 43 CP genomes were used for phylogenetic relationship inference, of which 36 were downloaded from NCBI (Supplementary Table S9). Two data sets were used: (1) for phylogenetic analysis among Ligusticum, the entire CP genome sequence was used, and (2) for the intergeneric phylogenetic analysis within Apiaceae, the CDS of common protein-coding genes were used. The whole genomes were aligned using MAFFT version 7. The CDS of seventy common protein-coding genes were detected and extracted using an in-house Perl script, and multiple sequence alignments of each gene were executed separately via two programs, Clustal W 2.1 106 and MAFFT version 7. Afterwards, the aligned CDS sequences were concatenated into one data set for further analysis. Phylogenetic analysis was performed based on three different algorithms, MP, ML and BI. An optimal nucleotide substitution model was determined via jModelTest 2 107, and the model with the best corrected Akaike Information Criterion (AICc) value was selected. RAxML version 8 108 was used for ML tree construction with 1000 bootstrap replicates, and the nucleotide substitution model GTR+G was adopted based on the results of jModelTest 2. Using PAUP version 4 109 with a heuristic search method (1000 repeats), the MP tree was constructed and tested by the bootstrap method as well. MrBayes v3.2.6 110 was used for BI tree construction with at least 2,000,000 iterations of the Markov Chain Monte Carlo method. When p values converged, the majority-rule consensus tree was constructed based on the remaining 75% of the sample. The unconstrained ML tree stated above was set as T1 for phylogenetic signal exploration, while the alternative tree (T2) was obtained based on the constrained ML method referring to the framework described by Shen | 2021-01-14T14:25:31.262Z | 2021-01-13T00:00:00.000 | {
"year": 2021,
"sha1": "b41a2b2ae97d52e21a41704f2f5c7b6919940381",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-80225-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bc6821f414127ecb7a1b9a73521e3932a35b014",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254878330 | pes2o/s2orc | v3-fos-license | Femur Osteomyelitis and Associated Fracture as an Initial Presentation of Aortoenteric Fistula
Aortoenteric fistula is a rare condition. Atypical presentations may cause significant management delays. We present the case of a 64-year-old male who experienced a pathological femoral fracture as an initial presentation of an underlying aortoenteric fistula. The aortoenteric fistula, possibly related to a poor graft tunneling technique, induced femur osteomyelitis and the associated pathological fracture.
Introduction
Aortoenteric fistula is a rare condition. Its management is highly complex, and it carries a high mortality rate. 1 We present an unusual case wherein a pathological femoral fracture was the initial manifestation of an underlying aortoenteric fistula. Informed consent was obtained from the patient's next of kin before publishing his images and history; therefore, approval by the institutional review board was waived.
Case Presentation
A 64-year-old male patient underwent an aortobifemoral bypass, using a polyethylene terephthalate (PET) graft, in another institution owing to lifestyle-limiting claudication.
Seven months after his primary aortic surgery, a diaphyseal femoral fracture occurred while he was resting. Given the absence of trauma or stress, the fracture was considered pathological.
To treat his femoral fracture, external osteosynthesis was performed along with extensive debridement and drainage because of intraoperative purulent exudation around his femur (►Fig. 1).
The intraoperative periosteal inflammatory tissue culture revealed two strains of oral anaerobic bacteria (Fusobacterium nucleatum and Parvimonas micra).
To identify the origin of the infection, a computed tomography (CT) scan was performed, but it was reported without any pathological findings of abdominal or other systemic infection.
Three months after the orthopedic surgery, while the patient was afebrile, the external osteosynthesis was removed. After repeat debridement, antibiotic-impregnated cement and a distal femur plating system were introduced (►Fig. 2).
Despite initial improvement, a localized mass appeared in his left popliteal fossa in the following weeks. The mass was irrigated and a culture isolated Escherichia coli, Enterococcus faecalis, and Proteus mirabilis.
Keywords: ► osteomyelitis ► pathological fracture ► aortobifemoral bypass graft infection ► aortoenteric fistula

Owing to fever along with extensive edema in his left thigh and elevated inflammatory blood markers, the patient was readmitted to the orthopaedic service with a diagnosis of osteomyelitis. Radiographic images confirmed the diagnosis; however, blood cultures were negative. Despite an extended antibiotic regimen, the swelling in his left popliteal fossa relapsed. This finding was accompanied by an additional palpable mass over his left inguinal ligament. He was then referred to our vascular surgery department.
The patient underwent a new CT angiography scan that revealed a thrombosed left limb of his aortobifemoral PET graft, with the impression that the thrombosed segment, as well as the right limb of the aortic graft, took an intraluminal course at the level of the proximal end of his sigmoid colon and cecum, respectively (►Fig. 3). Antibiotic therapy (ceftazidime/avibactam and tigecycline) was instituted, and a transperitoneal approach exposed the aortic graft under general anesthesia. In addition to excessive fluid accumulation around the aortic graft, it was found that the right and left iliac limbs of his PET graft traversed the cecum and sigmoid colon, respectively.
Following total explantation of his aortobifemoral graft, accompanied by wide debridement and retroperitoneal irrigation, an appendectomy and sigmoidectomy were performed.
Considering the extensive inflammation in his left inguinal area, and the absence of ischemic symptoms despite the thrombosed left limb of the PET graft, we proceeded with a bailout procedure by creating a composite graft consisting of his harvested right femoral vein and a short segment of tubular silver-impregnated PET graft, because the length of the autologous graft alone was inadequate (►Fig. 5). The tubular silver-impregnated PET graft was anastomosed proximally to the aortic stump and distally to the autologous venous graft, which was in turn anastomosed to the right femoral bifurcation.
Despite initial postoperative improvement, sigmoid colon necrosis was observed on day 3, and a Hartmann's colectomy procedure was thus performed. As the aortofemoral bypass was free of the above pathology, no reintervention on the graft was attempted. Eventually, the patient died of sepsis due to peritonitis after a prolonged stay (16 days) in the intensive care unit.
Discussion
Aortic graft infection is a rare condition, with a reported incidence of 1 to 4%, while aortoenteric fistula has an incidence of 0.4 to 2%. 1 However, aortic graft infection and aortoenteric fistula pose a great challenge. Following surgical treatment for aortic graft infection, mortality is 18 to 30%, and even higher when accompanied by aortoenteric fistula, reaching an average of 30 to 40%. 1 The clinical presentation of aortic graft infection varies depending on the time of symptom onset after the primary aortic surgery. 2 In early-onset aortic graft infection, symptoms appear within 4 months of aortic surgery. In these cases, patients present with signs of systemic infection such as fever and elevated inflammatory markers; patients may also complain of local signs of wound infection.
Conversely, in late-onset cases, symptoms are manifested from 4 months up to 10 years after the primary aortic surgery and are associated with less virulent but more fastidious microorganisms. In these cases, patients complain of nonspecific symptoms such as malaise, fatigue, weight loss, intermittent fever, intermittent claudication, back pain, a localized inguinal mass, and gastrointestinal bleeding when an aortoenteric fistula is also present. 2 Sporadically, extension of the aortic graft infection to the vertebral fascia by the contiguous infectious process may cause spondylitis. 3 Another rare presentation of aortic graft infection is hypertrophic osteoarthropathy (periosteal new bone formation, digital clubbing, and synovitis), with almost 30 cases reported in the literature. 4 In our case, malposition of the iliac limbs of the graft, running tangentially to the cecum and sigmoid colon, gave rise to the aortoenteric fistula during tunneling.
Based on the literature, there have already been reports of iatrogenic perforation of the colon during tunneling of an aortobifemoral graft; however, our patient exhibited a unique presentation. 5 To the best of our knowledge, there have been no other similar cases of pathological fracture as an initial presentation of underlying aortoenteric fistula in the English literature. In our case, the aortoenteric fistula led to periaortic inflammation that infiltrated the surrounding tissues. The expansion of the contiguous infectious process to the femur induced osteomyelitis and a pathological femoral fracture. The atypical presentation caused a significant delay in diagnosis that affected his outcome.
In conclusion, poor graft tunneling technique caused an aortoenteric fistula that induced femoral osteomyelitis and an associated femoral fracture. Regarding the underlying pathology, aortic graft infection may exhibit an unusual presentation; therefore, vigilance regarding aortic surgery complications is crucial.
Funding
None.
Conflict of Interest
The authors declare no conflict of interest related to this article. | 2022-12-21T14:04:43.639Z | 2021-07-15T00:00:00.000 | {
"year": 2022,
"sha1": "0325e24cdecf196d80cef683ac334593f5ceea53",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4c652fed2f661b52e3d532b755cd1f029d0dfa5a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
59479596 | pes2o/s2orc | v3-fos-license | The prevalence of polypharmacy in elderly: a cross section study from Bosnia and Herzegovina
Kosana D. Stanetić1, Suzana M. Savić1, Bojan M. Stanetić2, Olja M. Šiljegović3, Bojana S. Đajić4
1 Primary Health Center Banja Luka, Department of Family Medicine, Medical Faculty, University of Banja Luka, Bosnia and Herzegovina
2 Department of Cardiology, University Clinical Center of the Republic of Srpska, Banja Luka, Bosnia and Herzegovina
3 Primary Health Center Doboj, Bosnia and Herzegovina
4 Primary Health Center Gradiška, Bosnia and Herzegovina
Original Article
Introduction
We define polypharmacy as the simultaneous use of five or more drugs, the overuse of medically indicated drugs, or a therapy regimen in which at least one drug is unnecessary 1. As the human population grows older, multi-morbidity increases and polypharmacy intensifies, which consequently poses a risk of new morbidity and mortality. Polypharmacy, the use of various supplements and non-prescribed drugs, and potentially undesirable drug effects make efficient drug prescription a challenge. Adverse drug effects frequently cause hospitalization, and one US study showed that adverse drug effects caused 25% of hospitalizations of patients aged over 80 2.
Drugs play a pertinent part in treating geriatric patients for chronic diseases in order to alleviate pain and improve quality of life 3. It is crucial for a family doctor to prescribe drugs rationally and to be well acquainted with the pharmacokinetics and pharmacodynamics of drugs in the elderly 4.
Aging affects the pharmacokinetics and pharmacodynamics of drugs, which in turn influences the choice and dosage of drugs. Drug effects in the elderly may be more or less pronounced than in younger patients. The nervous system of the elderly is more sensitive to opioid analgesics, benzodiazepines, antipsychotics, and antiparkinsonian drugs. As one grows older, the effects of some drugs are diminished (e.g., beta blockers), and there are changes in drug absorption. Reduced absorption is caused by the decreased small intestine surface and the increased stomach pH. Drug distribution within the body is altered due to the decrease in body fluids and muscle mass and the increase in fat tissue; liposoluble drugs are better distributed in elderly patients. Furthermore, aging causes changes in drug metabolism and elimination. The metabolism of many drugs is affected by the impaired blood flow through the liver and the reduced liver mass. Kidney mass and renal blood flow are reduced, and creatinine clearance decreases, which causes reduced renal drug elimination; creatinine clearance drops by 50% between the ages of 25 and 85. Drug pharmacodynamics is closely related to changes in drug sensitivity, which may rise or fall with age. These changes are due to deviations in drug-receptor binding, a decrease in the number of receptors, or an altered cellular response 5,6.
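The age dependence of creatinine clearance noted above can be made concrete with the widely used Cockcroft-Gault estimate; the sketch below is illustrative only and is not a method used in the cited sources, but for a fixed weight and serum creatinine it reproduces an approximately 50% fall between ages 25 and 85.

def cockcroft_gault(age, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = (140 - age) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

young = cockcroft_gault(age=25, weight_kg=75, serum_creatinine_mg_dl=1.0)
old = cockcroft_gault(age=85, weight_kg=75, serum_creatinine_mg_dl=1.0)
print(f"CrCl at 25 y: {young:.0f} mL/min; at 85 y: {old:.0f} mL/min "
      f"({100 * (young - old) / young:.0f}% lower)")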
Polypharmacy represents a large burden for developed countries due to undesirable drug effects. In these countries, approximately 1 in 4 elderly patients is hospitalized because of the prescription of at least one inadequate drug, and around 20% of all hospital patient deaths are caused by potentially preventable undesirable drug effects 7.
In order to prescribe drugs rationally, one may follow the recommendations of Mark Beers, an American geriatrician, who defined criteria for drug prescription to elderly patients, determining which drugs should be avoided, which should have limited usage, and for which the dosage should be time-limited 8,9. In practice, it is also rather useful to apply the criteria developed by the Irish scientist Gallagher and his associates, who determined which drugs are potentially inadequate for patients aged over 65 (STOPP, Screening Tool of Older Person's Prescriptions) and which drugs should be prescribed for patients aged over 65 provided there are no contraindications (START, Screening Tool to Alert doctors to Right Treatment). The purpose of these criteria was to set evidence-based rules in order to avoid common, potentially inadequate prescriptions and possible negligence 10.
The aim of our study was to estimate the prevalence of patients aged 65+ who continuously use five or more drugs and to identify the most frequently used drugs with respect to sex and age.
Method
The research was a cross-sectional study conducted during the October-December 2015 period. During the study, we examined the electronic medical charts of all patients older than 65 who were registered at two family clinic departments at the Educational Family Medicine Centre of the Banja Luka Health Centre. The research protocol was approved by the relevant local ethics committee and complies with the Declaration of Helsinki.
Data collected from the electronic medical charts referred to age, sex, chronic diseases (cardiovascular diseases, mental illness and neurological diseases, musculoskeletal diseases, diabetes, malignant diseases, renal insufficiency, glaucoma, benign prostatic hyperplasia, and other chronic diseases) and drugs used in continuous therapy (antihypertensives, nitrates, diuretics, antiarrhythmics, benzodiazepines, anti-epileptics, antiparkinsonian drugs, anticoagulation therapy, anti-aggregation therapy, nonsteroidal anti-inflammatory drugs, oral antidiabetics, insulin, systemic corticosteroids, combinations of beta-2 agonist + inhaled corticosteroid, bisphosphonates, statins, alpha-1 adrenergic receptor blockers, and other drugs used by the patients in continuous therapy). We also surveyed the patients on supplements (ginkgo biloba, omega-3 fatty acids, herbal supplements for treating benign prostatic hyperplasia, vitamin supplements, mineral supplements, etc.), which they used either on a doctor's instructions or bought in a pharmacy on their own initiative. All the data were entered into a questionnaire designed for the purpose of our study. The data were then entered into a Microsoft Excel spreadsheet and statistically processed using the SPSS program.
Examinees
The study covered 432 patients registered at two family clinic departments at the Educational Family Medicine Centre of the Banja Luka Health Centre who were older than 65, suffered from at least one chronic disease, and used at least one drug in continuous therapy. For the purpose of our study, the patients were grouped by age. The first group comprised patients aged 65 to 70, the second group patients aged 71 to 75, the third group patients aged 76 to 80, and the fourth group patients aged 81+.
Statistical methods
We used descriptive analysis of frequencies and percentages to examine the sample for patients with chronic diseases and the types of drugs used in continuous therapy. The Chi-square test was used to determine to what extent age affected the use of the most frequently prescribed continuous drugs. The non-parametric Spearman's coefficient was used where appropriate for the regression analyses. The effect of age on the number of continuous drugs in polypharmacy patients was determined using the non-parametric Kruskal-Wallis test.
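A minimal scipy sketch of the tests named above follows. The 2x2 table uses the sex split actually reported in the Results (65 of 170 men and 124 of 262 women with polypharmacy), while the per-age-group drug-count vectors are placeholders, since the raw data are not reproduced here.

from scipy import stats

# polypharmacy (>= 5 drugs) vs < 5 drugs, by sex
table = [[65, 105],    # men: 65 polypharmacy, 105 not
         [124, 138]]   # women: 124 polypharmacy, 138 not
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square: chi2 = {chi2:.2f}, p = {p:.3f}")

# hypothetical drug counts for the four age groups
g65_70, g71_75 = [5, 6, 5, 7, 6], [5, 5, 8, 6, 7]
g76_80, g81_up = [6, 5, 5, 9, 6], [7, 5, 6, 5, 10]
h, p_kw = stats.kruskal(g65_70, g71_75, g76_80, g81_up)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.3f}")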
Results
In the present study, 3551 patients registered at two family clinic departments at the Educational Family Medicine Centre of the Banja Luka Health Centre were screened. A total of 432 patients aged 65+ were included in this study. Of the 432 patients included, 243 (56.25%) continuously used fewer than five drugs, as follows: 27 (6.3%) patients used one drug, 44 (10.2%) used two drugs, 83 (19.2%) used three drugs, and 89 (20.6%) used four drugs. On the other hand, 189 (43.75%) patients used five or more drugs, as follows: 61 (14.1%) patients used five drugs, 44 (10.2%) used six drugs, 49 (11.3%) used seven drugs, 17 (3.9%) used eight drugs, 11 (2.5%) used nine drugs, and 7 (1.6%) used ten drugs.
There were 170 (39.35%) male patients, and the average age was 73.88 ± 6.5 years; the youngest patient was 65 and the oldest 92 years of age. The group aged 65-70 comprised 166 (38.45%) patients, the group aged 71-75 comprised 101 (23.37%), the group aged 76-80 comprised 86 (19.90%), and the group aged 81+ comprised 79 (18.28%) patients. The most common chronic diseases were cardiovascular and musculoskeletal diseases. In addition, 94 patients suffered from type 2 diabetes, and 35 patients suffered from a malignant disease (Table 1). The following were the most frequently used drugs among patients treated for cardiovascular diseases: ACE inhibitors, 186 (43.1%); ACE inhibitor + diuretic combinations, 177 (41.0%); beta blockers, 136 (31.5%); calcium channel blockers, 131 (30.3%); nitrates, 105 (24.3%); statins, 114 (26.4%); and anti-aggregation therapy (ASA, clopidogrel), 157 (36.3%); in addition, 195 patients (45.1%) used benzodiazepines (Table 2). We statistically processed the results to examine the use of individual drugs or drug groups in relation to age. The analysis showed that the use of nitrates (p = 0.000), diuretics (p = 0.005) and antiarrhythmic drugs (p = 0.014) increased significantly with age. Regarding statin use, there was a statistically significant difference among the age groups (p = 0.028), with patients aged 71-75 using these drugs the most and patients aged 81+ the least (Table 3). There were 65 (34.39%) male and 124 (65.61%) female patients in the group who continuously used five or more drugs. The Chi-square test showed no statistically significant difference (p = 0.119) in the number of drugs used between male and female patients.
In the regression analysis, the number of drugs taken by the study population did not correlate with age (R² = 0.022, Figure 1). Furthermore, an effect of age on the number of drugs used as continuous therapy in polypharmacy patients could not be demonstrated (Kruskal-Wallis, p = 0.555, Figure 2).
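As a rough illustration of this regression step, the sketch below fits a simple linear regression of the number of continuously used drugs on age and reports R²; the data pairs are invented for illustration and are not the study data.

import numpy as np
from scipy import stats

# Hypothetical (age, number of drugs) pairs -- not the study data
age = np.array([66, 68, 71, 74, 77, 80, 83, 86, 89, 92])
n_drugs = np.array([5, 7, 6, 5, 8, 6, 7, 5, 9, 6])

result = stats.linregress(age, n_drugs)
r_squared = result.rvalue ** 2   # coefficient of determination, as reported in Figure 1
print(f"slope={result.slope:.3f}, R^2={r_squared:.3f}, p={result.pvalue:.3f}")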
Discussion
Results of our study have shown a high percentage of polypharmacy patients (43.75%). This phenomenon exists in most developed countries, which has been demonstrated in multiple studies by other authors. Golchin et al conducted a study of 59 patients aged 65+ and found polypharmacy in 35.6% of the examinees.11 Unlike our study, the results of a UK study that covered 1,900 patients aged 65+ indicated that the number of polypharmacy patients grew over time: 15.1% of patients used 5 to 9 drugs in 2003, and the figure rose to 25.2% in 2011. Results of the same study showed that the proportion of patients using 10 or more drugs grew from 1.3% in 2003 to 3.8% in 2011.12
Many authors have studied the prescription of potentially inadequate drugs to the elderly and compared their results with the Beers, STOPP, and START criteria. The study by Bradley et al included 166,108 patients aged 70+ and found that 34% of the patients had been using potentially inadequate drugs for longer than three months: NSAIDs (nonsteroidal anti-inflammatory drugs) 9% and benzodiazepines 6%.13 In addition, the results of our study showed that the patients most often used NSAIDs (12.5%) as long-term therapy, and as many as 45.1% used benzodiazepines.
A French study showed an upward trend in prescribing drugs to the elderly over the last decade, and the most commonly prescribed drugs were those for cardiovascular disease treatment, analgesics, and NSAIDs.14 Likewise, a large national study in Taiwan indicated a high level (86.2%) of prescription of potentially inadequate drugs to the elderly, the most frequent of which were NSAIDs and benzodiazepines.15 Drugs for cardiovascular disease treatment were also the most commonly prescribed drugs in our study, but there were also many patients using benzodiazepines, analgesics, and NSAIDs.
According to our study, the most commonly prescribed drugs were those for cardiovascular disease treatment, which is consistent with the most frequent chronic diagnoses. Moreover, the continuous use of benzodiazepines by 45.1% of patients is troublesome because of their effect on fall risk, cognitive impairment, delirium, and possible car accidents. The relatively large number of patients using NSAIDs (12.5%) as long-term therapy carries a risk of gastrointestinal bleeding.
Drug prescription to elderly patients demands a balance between prescribing too many and too few drugs. Hence, it is necessary to consider the specificities typical of this age as well as potentially undesirable drug effects.
Conclusion
Polypharmacy was present in 43.75% of patients in our study. Among polypharmacy patients, we found no statistically significant difference in the number of drugs by either sex or age. The most commonly used potentially inadequate drugs were nonsteroidal anti-inflammatory drugs and benzodiazepines. A family doctor's clinical assessment, along with an individual treatment plan based on medical, functional, and social conditions, should be the foundation of rational drug prescription at family clinic departments.
Figure 1. Linear regression of age vs. number of drugs
Figure 2. The effect of age on the number of drugs used in a continuous therapy with polypharmacy patients (n=189)
Table 1. Chronic diseases of the patients (n=432)
Table 2. Drugs used within a continuous therapy (n=432)
Table 3. Effect of age on the usage of most commonly prescribed drugs in continuous therapy (n=432). *Statistically relevant difference at p<0.05 | 2018-12-30T09:52:53.766Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "8d8e5293f3ba58c1287c93d78b3b0fd389361156",
"oa_license": "CCBYSA",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0354-7132/2017/0354-71321702018S.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8d8e5293f3ba58c1287c93d78b3b0fd389361156",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212858662 | pes2o/s2orc | v3-fos-license | MINIMALIZING CONFLICT ON THE MANAGEMENT OF BORDER-AREA BASED ON NYAMABRAYA
Purpose of the study: The research aimed at developing a land border area management model based on the Nyamabraya concept. Methodology: To achieve the research objective, a study was conducted using the research paradigm for the development of prototypical studies. The research data was collected through observation, documentation, interviews, and data analyzed qualitatively. This research was carried out in the Province of Bali. Main Findings: The study found (1) Nyamabraya-based land border management model; (2) suitability of models with community characteristics in border areas, concept validity; model effectiveness and conformity with the local sociocultural environment, (3) valid quality Nyamabraya border management model and practical; (4) assessment of valid and practical quality-based border management models for Nyamabraya and; (5) validity and practical quality-based Nyamabraya border management models. Applications of this study: Regional Government in the management of land border areas by prioritizing local wisdom (nyamabraya) in their territory. The community will have trust and confidence and continue to conduct life based on the local wisdom they believe in, both in managing resources and in reducing conflict. Novelty/Originality of this study: The development program that has been implemented has not yet touched the land border area so that the community remains disadvantaged, this is suspected to trigger conflicts in the use of resources in the land border area. Through the development of a land border management model with a touch of the value of local wisdom (nyamabraya) in community and state life will be steady, the Republic of Indonesia remains intact and sustainable.
INTRODUCTION
Management of border areas is needed to provide legal certainty regarding the scope, authority and area management limits to achieve the welfare of the community in a province, city district, sub-district to the smallest unit. Bali Province has 8 (eight) districts and 1 (one) municipality. Each district has a minimum of three districts. The existing border issues with the district have not been resolved completely, as there are still many problems with the delimitation of regional boundaries, activities related to the economy and socio-culture of people living on the border that do not rule out the possibility of mutual agreement between districts in the utilization and management of resources in border. All aspects of border problems will affect the defense and security sector of a region.
Management of national borders is the final series of border formation processes. Sutisna (2008) asserts, that the management aspects of national borders are continuous work. Because, in the management activities, there are many aspects related to the implementation of the country's sovereignty itself, such as the maintenance of state boundary stakes, the traffic of people and goods, as well as issues of defense and security of the country itself. As such, it is only natural that border areas need an integrated and sustainable management mechanism because in the border space interactions and neighbors will always occur, both positive and negative. Various development programs have been carried out by the government to advance the border area, both physically and socio-economically and the culture of the people. However, the condition of the border has not undergone much change, the community has remained lagging behind developments in the central region. Wuryandari et al., (2017), research on alternative models for managing security in the border regions of Indonesia and Timor Leste by using a combination of external and internal approaches to border management, recommending choices or actions that can be taken to reduce security issues at the border in terms of both policy and implementation.
From various models of existing border area management, there are still problems in handling border issues that must be found out. Handling the problem of border areas is not only the responsibility of the central government but also involves the regional government. Management of border areas is still mostly focused on the sea border which is felt to have more frequent and complex problems. However, land boundary problems also have a very complex impact on development, both physical and socio-cultural and economic development. With the change in the direction of development for now and in the future, the land sector needs legal certainty and law enforcement on border activities, so that physical, socio-cultural and economic development in the border environment can run as expected.
Awareness of the perception of border areas between regions encourages bureaucrats and policymakers in districts/cities in Bali Province to manage land border areas. This is a strategic issue because the management of land border areas is related to the process of nation-state building to the emergence of potential internal conflicts in a region and/or other regions.
In connection with the foregoing, the life of the Balinese is known for its hospitality and socio-cultural life which is supported by local wisdom. One normative concept of local wisdom that exists in Bali is "nyamabraya" with this concept, in fact the individual relationships in Balinese society are closely interwoven, but in reality there are often conflicts with others in the use of existing resources in border areas, including conflicts over springs in the Baong Kambing Forest area including the Bangli Regency area between Bondalem Village and Tejakula Village in 2010. The border conflict between the residents of Ulakan and Antiga Villages in Karangasem Regency clashed over the boundary which contained the Pertamina Manggis depot on April 11, 2005. In connection with the foregoing, a study was carried out "developing a model of management of land borders based on Nyamabraya values in order to preserve the diversity of Bali and the integrity of the Republic of Indonesia". For this reason, the concept used in this study is the boundary-making theory of Stephen B Jones (1945), which is supported by the legal basis for managing regional borders with Law Number 43 year 2008 concerning state territory and Presidential Regulation Number 12 year 2010 amended by Presidential Regulation Number 44 year 2017 concerning BNPP.
LITERATURE REVIEW
The literature review used to study the development of border management models is the concept of Stephen B. Jones, in Sutisna et al. (2008), which asserts that managing national borders is continuous work, because management activities involve many aspects related to the implementation of a country's sovereignty, such as the maintenance of national boundary stakes, the traffic of people and goods, and national defense and security issues. Thus, it is only natural that border areas need an integrated and continuous management mechanism, because in the boundary space there will always be interactions with neighboring regions, both positive and negative. Azma et al. (2019) argue that border management is guided by the four principles of National Resilience, namely the principle of welfare and security, the principle of comprehensiveness or integration, the principle of inward and outward introspection, and the principle of kinship. Research by Wuryandari et al. (2008) on alternative models of security management in the border regions of Indonesia and Timor Leste, using a combination of external and internal approaches to border management, recommends options or actions that can be taken to reduce border security problems in terms of both policy and implementation. Border management is an ongoing and long-term process. In this regard, this research focuses on border management with an emphasis on managing land borders based on nyamabraya.
Nyamabraya has always been a reference for Balinese Hindus in implementing the dynamics of social life. The main points of Nyamabraya teachings according to several written texts and interpretations of the Vedas, consist of (1) interdependence between people, (2) respect for differences, (3) feelings of communal ownership, (4) you are me and I am you, and (5) shared social responsibility (Titib, 2009). In its application, Nyamabraya teachings are more interpreted as a pattern of life that prioritizes togetherness on the basis of attachment to humanity's fate and responsibility, so that a social morality among fellow community members in all aspects of life is truly built up.
METHOD
This research was conducted in the area of Bali Province. As a tourist destination, Bali Province has a variety of attractions, including cultural and natural tourist attractions that are supported by the hospitality of its residents. However, in its development, there were quite a lot of conflicts between villages in the land border area (Suryasa, 2019).
The main focus of this research is to develop a border area management model based on the concept of culture. Based on this rationale, this research uses research design to develop prototypical studies types as stated by Akker (1999) and Ely & Plomp (2001). The important thing to consider in development research is the quality of the product produced. Plomp (2001), providing criteria for product quality are: valid (reflecting state of the art knowledge and internal consistency), having practical and effective added value. In general Plomp (2001) and Pageh (2018), stated that the implementation of development research includes three phases, namely: upstream-downstream analysis phase, prototype development phase, and assessment phase. Relating to the focus of this research problem is the development of a valid, practical, and effective land border area management model based on nyamabraya. The research subjects were border communities, community leaders, determined by purposive sampling. Data were collected by observation, documentation and interview techniques, as well as FGDs with academics, professionals, local governments, traditional community leaders, and then the data were analyzed qualitatively.
a. Developing nyamabraya based land area border management models
Development of a nyamabraya-based land border area management model cannot be separated from the social, cultural, and economic characteristics of land border management (Sari, 2019; Wiguna & Yadnyana, 2019). From these characteristics, there is always integration and interrelation between socio-cultural activities and the economy in the developed border areas, which are mutually supportive, so as to realize synergies in improving community welfare. To that end, the model emphasizes the following: (1) increasing the growth of regional economic structures that are more balanced, by increasing economic diversification, reducing dependence on only a few main commodities, and expanding markets; by increasing economic growth in the border region, other sectors such as education, health, and population activities in the economic field will by themselves be stimulated; (2) utilization of natural resource potential that has not been used optimally, for the development of the food crop agriculture sub-sector, forestry, fisheries, mining, and tourism; in potential border areas such as agriculture, animal husbandry, and plantations, this potential is utilized optimally while considering environmental aspects, so that its utilization does not damage the environment; (3) increasing the ease of investment for the development of strategic sectors/sub-sectors, especially through infrastructure development and incentives for private investment; (4) potential development pursued by directing certain cultivation areas to locations with good physical and spatial potential and existing superior commodities; and (5) in an effort to overcome problems, prioritizing the handling of areas that face critical land, disaster-prone areas, and underdeveloped areas so that the area develops rapidly, through identification of priority areas along with the preparation and implementation of handling programs (Nahak, 2017; Purnomo et al., 2019).
Balinese people have a cultural distinctiveness in which there is the concept of desa kala patra, or place, time, and circumstance (Astina et al., 2018). This concept is balanced with tri hita karana, or the three causes of harmony, namely harmony in the relationships between humans and God, between humans and other humans, and between humans and the environment. Among these are the harmony of humans with fellow humans and of humans with the environment in the border region. Through the harmony of these relationships, and with a touch of this concept, damage to the border environment and conflict on the border can be minimized (Pinatih et al., 2018; Sanjaya, 2017). Another concept entrenched in Balinese society is 'sad kertih', the six sources of welfare that must be preserved to achieve physical and spiritual happiness, consisting of atma kertih, wana kertih, danu kertih, segara kertih, jana kertih, and jagat kertih. In the Lontar Mpu Kuturan it is mentioned that Bali is Padma Bhuwana (the center of the world): everything comes down to Bali so that all life can prosper, and arranging this limited Balinese space requires Balinese people's obedience to the importance of preserving the environment that sustains the continuity of life by carrying out the six components of sad kertih.
b. The suitability of the model with the characteristics of the community in the border area
The model of land border management that is developed has conformity with the characteristics of the community in the border area, this is indicated by the absence of conflicts on the border or if there are still, conflicts that can be minimized. This is caused by the presence of regional government policies in border management which include:
Policies in the field of Economics and Socio-Culture
In the past, the border area was managed under the paradigm of the country's "backyard", which has left the border area lagging behind in social and economic terms. The emergence of this paradigm was caused by a past political system that was centralized and emphasized security stability (Omer, 2017; Widana et al., 2018). In addition, historically, relations between Indonesia and several neighboring countries have been hit by conflict, and insurgencies have often occurred in the country.
Consequently, the handling of border areas has been dominated by the view that the border must be secured from potential external threats, and the border area has tended to be positioned as a security belt. Poverty is often found in inland border areas. This is indicated by the number of underprivileged families and socioeconomic disparities among communities in the border region, and is suspected to be due to the low quality of human resources, the lack of supporting infrastructure, the low productivity of the community, and the not yet optimal use of natural resource potential in the border region. The further impact of such conditions is the emergence of actions that push the public into illegal economic activities solely to fulfill their daily needs, such as illegal logging in forest areas and the mining of minerals in protected areas, which is legally prohibited. This potentially causes vulnerability in security and order and is also very detrimental to a region. In addition, many illegal activities related to political, economic, and security aspects are found at the border.
Policies in Natural Resource Management
The potential of natural resources in the border region, both inland and sea areas is quite large. However, so far the management efforts have not been carried out optimally. Potential natural resources that are possible are managed along with border areas, including forestry, mining, plantation, tourism and fisheries resources. Efforts to optimize the potential of natural resources must pay attention to the carrying capacity of the environment, so as not to cause environmental damage, both physical and social. In most border areas, efforts to utilize natural resources are carried out illegally and uncontrollably, thus disrupting the balance of ecosystems and environmental sustainability (Wijana et al., 2018). Various environmental impacts such as cross-border smoke pollution), floods, landslides, the sinking of small islands, etc. are generally caused by illegal activities, such as illegal logging and uncontrolled dredging of sand. This is quite difficult to handle, because of the limited supervision of the government in the border region and the lack of enforcement of the rule of law.
Policies in Institutional and Authority Management
Management of border areas has not been done in an integrated manner by integrating all related sectors. Until now, the problems of some border areas are still handled in an ad hoc, temporary and partial manner and are dominated by a security approach through several committees, so that they have not provided optimal results.
In accordance with Law Number 22 year 1999 concerning Regional Autonomy, regional governments can develop border areas beyond these entry points without waiting for a delegation of authority from the central government. In practice, however, regional governments have not exercised this authority. This can be caused by several factors: (1) the inadequate capacity of regional governments in managing border areas, considering that the handling involves cross-governmental and cross-sectoral administration and therefore still requires coordination from hierarchically higher institutions; (2) rules and regulations regarding the management of border areas that have not yet been socialized; (3) the limited development budget of the regional government; and (4) the continuing tug-of-war between central and regional authority, for example in the management of conservation areas such as protected forests and national parks. The regulation concerning Guidelines for the Affirmation of Regional Boundaries gives clarity of authority to the provincial and district/city governments in border management. One government measure in managing border areas was the establishment of the National Border Management Agency (BNPP). This body coordinates 18 (eighteen) state ministries and institutions to develop border areas, so that the problem of coordination between departments and a clearer division of authority can be accommodated. There are three approaches used by BNPP in managing border areas, namely security, welfare, and environmental approaches. The security approach has long been implemented in government policy, so the Indonesian National Army, which is also included in the BNPP coordination circle, takes the biggest role. The Indonesian National Army accommodates defense and security in two dimensions, namely traditional and non-traditional defense. In traditional defense, the Indonesian National Army engages village pecalang who coordinate with traditional villages.
c. The nyamabraya-based border management model that is quality, valid, and practical
Border management with a development approach is an implementation and continuation of the concept of border development. On average, many cases and problems that occur in the border region, whether related to security or social problems, are a result of development inequality. In many cases, border areas tend to try to break away for reasons of uneven social welfare. The emergence of separatist movements and the issue of seeking better livelihoods are closely related to this.
The scope of border area management includes two aspects, namely the management of boundaries between regions and management of the area. Management of regional boundaries basically contains a variety of strategic steps to establish and affirm the boundaries of the region with neighboring regions, securing territorial boundaries on land, and cross-border management reforms. Management of land border areas is generally related to various strategic steps to improve the welfare of local communities through sustainable regional development.
To ensure the sustainability of the social, economic, and political environment in the border region, it is necessary to develop a nyamabraya-based border management model; for this purpose, a model based on the local wisdom of Bali, namely nyamabraya, was developed. Nyamabraya is one form of Balinese local wisdom that is held firmly as the basis of the relationship between individuals in Balinese community groups. Basic values in local wisdom, such as the social system of the community, can be lived out, practiced, taught, and passed on from one generation to another, simultaneously shaping and guiding the patterns of everyday human behavior, both in nature and in its ecosystem.
d. Assessment of nyamabraya-based border management models that are quality, valid, and practical
Model assessments were carried out through focus group discussion (FGD) activities involving regional government officials, sub-district heads, village heads, babinkamtibmas, babinsa, community leaders and universities. From the focus group discussion conducted, information was obtained about the boundaries of the districts that were established through Permendagri Number. 66 year 2016 between Buleleng and Tabanan Districts. Meanwhile, village boundaries are stipulated through Permendagri Number 45 year 2016. The basis used is the cartometric system in the preparation of regent regulations. This system has high accuracy, considering that this system uses coordinates, even though changes in the landscape occur due to natural factors or due to human interference, it still will not change its position.
Revitalization of moral values at the border, especially cultural values can be an important reference in border management. With Nyamabraya, the border community realizes that living side by side will have a positive impact on managing and utilizing the potential that exists at the border. For example, water resources, with the awareness of the people that living on the border are both close relatives, then brotherhood will certainly be put forward so that each other's sacrifice to crave harmony can be realized. This is evidenced by the fact that in the border areas there has never been a conflict in exploiting the potential at the border.
The development of a land border area management model based on natural values contributes to the regional government in managing the area bordered with a touch of local wisdom. This is indicated by the interaction of people on the border that has been taking place for generations in economic activities that need optimal management, considering this will boost regional income. Empowerment of local communities on the border which is thick with agricultural economic potential continues to be encouraged by regional cooperation programs through voluntary cooperation and compulsory cooperation in accordance with Law number 23 year 2014 concerning regional governance. Increasing community participation at the border, border communities have varied customs, but still, come from the philosophy of local wisdom that is believed by border communities. The management model developed needs to prioritize community participation. Even though the community is the object of development at the border, without community participation, the program developed by the government will not be able to achieve the expected results.
e. Improvement of nyamabraya-based border management models that are valid and practical quality
The improvement of the valid and practical nyamabraya-based land border management model starts with the development aspects of the model, which include:
Social development in the field of poverty alleviation
Poverty is a phenomenon that is seen as how the community attempts to meet the needs and the extent to which these efforts are able to fulfill what is desired. This perspective is very narrow in nature and traps in a partial approach that does not provide a comprehensive solution, therefore. poverty alleviation programs are only focused on how the community is able to fulfill their needs, and not find a way out in an effort to alleviate the burden of poverty through efforts to increase capacity and empower the poor. In contrast to the view that seeing poverty as a phenomenon, this perspective sees poverty more sharply at the root of the problems faced. Poverty as a result of weak socio-economic development climate, lack of capital and low efficiency results in low investment turnover, resulting in low savings. This is supported by low educational output which in turn has an impact on weak professional management and the result is leadership ineffectiveness.
Thus, poverty is inseparable from the programs being launched and the policy of allocating resources, covering both natural resources and human resources and technology. Poverty is the inability to meet minimum living standards (Suwija et al., 2019). The basic needs that must be met include food, clothing, housing, education, and health. Based on the size of income, poverty is divided into absolute poverty and relative poverty. Based on the time pattern, poverty can be divided into four types: (1) persistent poverty, namely chronic poverty or decline; (2) cyclical poverty, which follows the overall economic cycle; (3) seasonal poverty, which is often found among fishermen and in agriculture; and (4) accidental poverty, namely poverty created by natural disasters, conflicts, and violence, or by the impact of a particular policy that reduces the level of welfare of a community.
With the concept of nyamabraya, poverty can truly be alleviated through social development in rural and urban communities. This is based on the view that nyamabraya, or close kinship, involves a close social relationship. For example, in rural areas of Bali there is a view that life cannot be separated from the surrounding community, which is regarded as family, so that life must be made prosperous together; in both social work and social life, nyamabraya is always prioritized, as shown in Wesnawa et al. (2010; 2014).
Social development in the border areas also puts forward the concept of culture, so that conflict can be minimized because it considers the surrounding community to be close relatives, which will be invited in social and cultural activities.
Border Area Social Development in Education
The Buleleng Regency Government implements regional programs to develop the border area, since the low education of the people in the border area makes development difficult. In the implementation of infrastructure development in the sub-district, there were initially obstacles, especially in providing understanding to the community regarding land use and land acquisition for infrastructure development. Education is also a priority for the government in the development of border areas, so that school-age children in the sub-district's villages understand the importance of education. The low level of education of the people at the border has kept the knowledge of the people of the area low, as was revealed by the head of Tejakula district. Low-educated people are very easily affected by negative things because it is very easy for outsiders to incite them to violate the rules. Therefore, public education must be a priority for the government; with the establishment of schools, especially Elementary Schools (SD), it is expected that the illiteracy rate of the community will decrease, ideally until no one in the community is illiterate.
The construction of schools in border villages is also taken into account so that the distance between the community and the school is not too far. Before 2009, various systematic efforts had been made in the distribution and expansion of education, especially in the context of implementing nine-year compulsory education. Completion of the nine-year compulsory basic education program takes into account fair and equitable services for residents who face economic and socio-cultural barriers. The strategy is carried out by helping and facilitating those who have not had the opportunity to attend education, including out-of-school children, school drop-outs, and the large number of elementary school graduates who do not continue to junior high school, to obtain educational services. The strategy adopted includes the application of inclusive classes, namely giving students with disabilities the opportunity to learn alongside other students. Another solution offered in the social development of the education sector is to increase access to education by opening up opportunities for the private sector to establish new educational institutions.
Border Area Social Development in the Field of Health Services
Development of border areas that are not focused on just one aspect, but must be done in various aspects. The construction of health facilities is intended to improve public health, with the construction of policies in remote villages so that the distance between the community and health services is closer. The development of the border area also sees human factors. The population also affects the construction of border areas. In the villages of Kecamatan Tejakula, the number of births is greater than the number of deaths, as stated by the PLKB in the District of Tejakula which states that public health is a priority for the government in social development. With the fulfilment of health needs for the community, it will be easy for the government to carry out development in the villages in the region. Thus social conflict in the health sector can be minimized.
CONCLUSION
The findings can be summarized as follows: (1) the development of a culture-based land border area management model cannot be separated from socio-cultural and economic characteristics, which mutually support one another through linkage and integration, with the dominance of the agricultural, plantation, and livestock sectors; (2) the model's suitability to the characteristics of communities in border areas, the truth of the concept, the effectiveness of the model, and its conformity with the socio-cultural environment of the local community, supported by government policies in the socio-economic field, natural resource management, and institutional policy; (3) a nyamabraya-based border management model that is valid and practical; (4) an assessment of the valid and practical quality of the nyamabraya-based border management model; and (5) improvement of the valid and practical nyamabraya-based border management model, covering regional government policies in the management of land border areas, which consist of policies in the fields of poverty alleviation, education, and health services. Improvement of the model proceeds from the aspects of developing a land border management model that is practical and valid, covering social development in border areas in poverty alleviation, education, and health.
The Nyamabraya-based border management model needs to be further investigated with aspects of local skills in other areas as an effort to reduce conflicts in resource management at the border.
SUGGESTION
Local governments in the management of land border areas need to prioritize local wisdom in their territory. Local wisdom possessed by a region, its people have trust and confidence and continue to carry out their lives by referring to the local wisdom they believe in. By prioritizing local wisdom, there are conflicts that can be minimized.
ACKNOWLEDGEMENT
Thank you to the Directorate of Research and Community Service (DRPM) Ristekdikti of the Republic of Indonesia for helping with applied research funding for the 2017-2019 funding year, Undiksha LPPM, Regional Government, Border Communities, Community Leaders, who have assisted in the implementation of this research. | 2019-12-05T09:06:50.906Z | 2019-12-03T00:00:00.000 | {
"year": 2019,
"sha1": "b47420678a448ab4f897c99c19571aba38eaf633",
"oa_license": null,
"oa_url": "https://doi.org/10.18510/hssr.2019.7682",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "24ce56af28c1fbf38166ad93d8262ab07fa71632",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Political Science"
]
} |
264948562 | pes2o/s2orc | v3-fos-license | KNOWLEDGE, ATTITUDE, REFERENCE GROUP, AND BEHAVIOR OF DIPHTHERIA IMMUNIZATION IN RURAL AND URBAN AREAS
Diphtheria is an infectious disease that has been an epidemic in Indonesia and has an average mortality rate of 5-10 percent for children under five years of age. This study aims to 1) analyze the differences and relationships of maternal characteristics, family characteristics, knowledge, attitudes, and reference groups with diphtheria immunization behavior in rural and urban areas; and 2) analyze the effect of maternal characteristics, family characteristics, knowledge, attitudes, and reference groups on diphtheria immunization behavior. This study used a cross-sectional study design. The sample was 41 mothers with children aged 5-12 months and purposively selected at one of the Integrated Services Post in Bogor City and Bogor Regency. The data obtained were processed through coding, input, cleaning, analysis, and interpretation of data. The results showed differences between knowledge, attitude, reference group, and behavior of diphtheria immunization in rural and urban areas. The factor associated with the completeness of diphtheria immunization is social media for rural areas, while it does not exist in urban areas. Factors associated with compliance with the immunization schedule were family size, attitude, interpersonal, and expert for rural areas, and years of education for urban areas. Factors that influence the completeness of immunization are the length of education, attitudes, and interpersonal reference groups. Based on the results of this study, a cross-sectoral role is needed to support community education and educate mothers regarding diphtheria immunization.
INTRODUCTION
Health is a condition that not only indicates freedom from disease or weakness, but also a balance of physical, mental, and social functions (World Health Organization, 2010). Efforts to maintain health are carried out by preventing disease. Disease prevention can be done through immunization to maintain high antibody levels above the prevention threshold (Hartoyo, 2018). Complete basic immunization consists of Hepatitis B, BCG, Polio, DPT, and Measles. Follow-up immunization consists of immunization against diphtheria, pertussis, tetanus, hepatitis B, pneumonia, meningitis, and measles (Ministry of Health Regulation, 2012). Diphtheria is an infectious disease caused by infection of the mucous membranes of the nose and throat. The disease is caused by the bacterium Corynebacterium diphtheriae. Diphtheria is transmitted by direct contact or droplets from the patient. Children are particularly susceptible to diphtheria, especially those aged one to ten years. Diphtheria is spread through contaminated air from the mouth or nose of the patient, fingers, and milk that has been touched by the patient. Symptoms of diphtheria are sore throat, fever, difficulty breathing and swallowing, mucus discharge from the mouth and nose, and weakness of the body. This is because the lymph nodes in the neck enlarge and a thick layer forms covering the esophagus, closing the respiratory tract and causing a lack of oxygen in the blood.
World Health Organization (WHO) data on diphtheria show that the number of diphtheria cases in Indonesia has fluctuated since 1980. Based on data from the Ministry of Health, diphtheria is an old disease that was once an epidemic in Indonesia. The highest number of diphtheria cases occurred in 1980, then fluctuated in the following years and increased again in 2012 and 2016. In 2016, data from the World Health Organization (WHO) showed that there were 7097 diphtheria cases reported from around the world, including 342 cases in Indonesia. In 2017, from January to November, 95 districts and cities from 20 provinces in Indonesia reported 593 diphtheria cases, 32 of which were fatal (Ministry of Health, 2017). Based on data from the Ministry of Health, 11 provinces reported diphtheria outbreaks, namely West Sumatra, Central Java, Aceh, South Sumatra, South Sulawesi, East Kalimantan, Riau, Banten, DKI Jakarta, West Java, and East Java.
Diphtheria immunization is one of the mandatory immunizations promoted by the government (Ministry of Health, 2017). According to the Ministry of Health (2017), the main prevention of diphtheria is through immunization. Diphtheria immunization is included in basic and advanced immunization. In the basic immunization of children aged less than a year, diphtheria immunization is carried out three times, namely when the child is two months, three months, and four months old. Diphtheria follow-up immunization is given when the child is 18 months old, in grade 1 of elementary school, and in grade 2 of elementary school. The achievement of immunization activities is guided by Universal Child Immunization (UCI). UCI is complete basic immunization coverage of at least 80 percent of children in 100 percent of villages. The national achievement of UCI in Indonesia reached 68 percent. Based on these data, the coverage of complete basic immunization has not yet reached the predetermined national minimum standard of 80 percent, even though the national target for UCI in 2014 was 100 percent UCI in villages (Ministry of Health, 2017). The low coverage of complete basic immunization that has not met the national UCI standard can be an illustration of the low immunization rate in Indonesia. This immune vacuum occurs due to the accumulation of groups that are vulnerable to diphtheria, because these groups are not immunized or are incompletely immunized (Ministry of Health, 2017). Immunization begins when the child is less than one year old. The results of previous studies show that the role of parents is related to the completeness of child immunization (Ningsih, Kasanova, & Devitasari, 2016). Parents, especially caregivers, have an important role in making decisions to follow the immunization program, because parents are the key in maintaining and caring for children (Winarsih, Imavike, & Yunita, 2013). Previous research shows that parental knowledge is significantly related to the provision of complete basic immunization in children; parents with low knowledge have a risk of not providing routine immunization (Triana, 2016). Caregiver behavior in providing immunization to children is related to several factors such as access to information and knowledge, and there are other factors, namely education, maternal age, and income (Yanti, 2016). Other studies have shown that low maternal knowledge is the maternal characteristic factor that carries the greatest risk of irregularity in providing complete basic immunization to children (Harmasdiyani, 2015).
Other factors, such as the mother's attitude towards immunization, the mother's occupation, family support, total income, and distance to immunization services, show varied relationships. The data show that factors from mothers related to immunization will greatly determine the provision of complete immunization of children (Istriyati, 2011). Previous research states that groups or figures who are role models for mothers will influence their behavior towards immunization. Mothers tend to adopt behaviors that they consider in line with the behavior of people they look up to or consider important (Anton, 2014). Based on this background, this study aims to: (1) identify characteristics of role models, family characteristics, knowledge, attitudes, reference groups, and diphtheria immunization behavior of mothers in rural and urban areas, (2) analyze differences in characteristics of role models, family characteristics, knowledge, attitudes, reference groups, and diphtheria immunization behavior in rural and urban areas, (3) analyze the relationship between variables, and (4) analyze the effect of maternal characteristics, family characteristics, knowledge, attitudes, and reference groups on diphtheria immunization behavior.
METHODS
The research design used in this study was a cross-sectional study. The selection of research locations was carried out purposively with the criteria of representing the characteristics of rural and urban communities. This study was conducted in two locations, namely Integrated Services Post Jabal Rakhmah, located in Tapos 1 Village, Tenjolaya Subdistrict, Bogor District, and Integrated Services Post Dahlia, located in Bubulak Village, West Bogor Subdistrict, Bogor City. The sample selection was done purposively based on consideration of the population size in each Integrated Services Post. The population of this study was all married women with children aged 5-12 months. The sample in this study was all mothers who were registered at Integrated Services Post Jabal Rakhmah and Integrated Services Post Dahlia and had children aged 5-12 months. The age range of 5-12 months was chosen because, based on basic immunization rules, the entire series of basic diphtheria immunizations has been given by the time the child is 5 months old. The two Integrated Services Post in rural and urban areas were those with the most complete data and the highest population that fit the research criteria. The number of samples taken in this study was 41, consisting of 22 samples from rural areas and 19 samples from urban areas.
The types of data collected were primary data and secondary data. Primary data is data obtained directly from research subjects through measurement instruments, including sample characteristics, family characteristics, knowledge, attitudes, reference groups, and diphtheria immunization behavior. Primary data was obtained from interviews through questionnaires that had been prepared previously. Secondary data were obtained from the latest data on Integrated Services Post and Integrated Services Post cadres in each rural and urban area. Secondary data included the name of the sample, the age of the child, and the place of residence of the sample. The method used was interviews through questionnaires. This study uses two types of variables, namely independent variables (knowledge, attitude, and reference group) and dependent variables (diphtheria immunization behavior). Knowledge in this study is the information possessed by the sample related to basic diphtheria immunization for children less than one year old. The knowledge variable was measured using 17 statements developed from Iskandar's (2009) questionnaire with 'yes' and 'no' answer options and had a Cronbach's alpha value of 0,652.
The attitude variable in this study is the sample's tendency that then determines behavior in providing diphtheria immunization. Attitude was measured using an instrument developed from Huda's (2009) research. Attitude consists of 14 statements with a Likert scale of 1-5 (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree) and has a Cronbach's alpha value of 0,805. Furthermore, reference groups in this study are groups or individuals who have a direct or indirect influence on the sample's behavior, including the interpersonal environment, social media, and experts. The reference group of the sample is measured using an instrument developed based on the reference group of Yuliati et al. (2012). The reference group variable is measured through 11 questions with answer options 1 = yes and 2 = no, and has a Cronbach's alpha value of 0,845. Behavior in this study refers to the behavior of giving diphtheria immunization, which includes completeness, suitability of schedule, and place of diphtheria immunization in children aged 5-12 months. Measurement of the behavioral variable used Huda's (2009) questionnaire, which has been adapted to the needs of the study, consisting of 18 questions and answer options 1 = yes and 2 = no. The reliability of the diphtheria immunization behavior instrument has a Cronbach's alpha value of 0,725.
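Cronbach's alpha is reported for each instrument above; as a minimal sketch (not the authors' code or data), the coefficient can be computed from a respondents-by-items score matrix as follows, using hypothetical Likert responses.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical Likert responses (5 respondents x 4 items) -- illustrative only
example = np.array([
    [4, 5, 4, 3],
    [2, 3, 3, 2],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(example), 3))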
The data obtained through sample interviews were then subjected to coding, input, cleaning, analysis, and interpretation. The coding, input, and cleaning processes were carried out using the Microsoft Office Excel 2010 program, and the data were then analyzed using the Statistical Package for Social Science (SPSS) 22.0 for Windows.
The variables studied were given an assessment score according to the scale used and then processed according to each variable. The knowledge variable was calculated from the total score, which was summed and converted into an index with a scale of 0-100. The index obtained was then categorized into poor, moderate, and good based on cut-off scores (Kinanti, 2011). The data analysis used was descriptive and inferential analysis. Descriptive analysis was used to provide an overview of sample and family characteristics (age, years of education, number of children, and income per capita per month), knowledge, attitude, reference group, and diphtheria immunization behavior. Descriptive statistics used were the mean, standard deviation, and maximum and minimum values. Inferential analysis included: an independent-samples t-test to analyze differences in sample characteristics, family characteristics, knowledge, attitude, reference group, and diphtheria immunization behavior between rural and urban areas; the Spearman correlation test to analyze the relationship of sample characteristics, family characteristics, knowledge, attitude, and reference group with diphtheria immunization behavior; and a logistic regression test to analyze the effect of sample characteristics, family characteristics, knowledge, attitude, and reference group on diphtheria immunization behavior.
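The scoring and inferential steps described above can be illustrated with the following Python sketch; the raw scores and years of education are hypothetical, and only the 0-100 index conversion, the independent-samples t-test, and the Spearman correlation mirror the procedure described.

import numpy as np
from scipy import stats

# Hypothetical raw knowledge scores (number of correct answers out of 17 items)
raw_rural = np.array([6, 8, 7, 9, 5, 10, 8, 7])
raw_urban = np.array([14, 13, 15, 12, 16, 14, 13, 15])

def to_index(raw, max_score=17):
    """Convert raw scores to a 0-100 index, as described for the knowledge variable."""
    return raw / max_score * 100

idx_rural, idx_urban = to_index(raw_rural), to_index(raw_urban)

# Independent-samples t-test for the rural vs. urban difference
t_stat, p_t = stats.ttest_ind(idx_rural, idx_urban)

# Spearman correlation between, e.g., years of education and the knowledge index
education_years = np.array([6, 9, 6, 12, 6, 9, 9, 12])
rho, p_rho = stats.spearmanr(education_years, idx_rural)
print(f"t-test p={p_t:.4f}, spearman rho={rho:.2f} (p={p_rho:.3f})")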
Overview of the Research Location
The study was conducted in two areas representing rural and urban areas. Integrated Services Post Jabal Rakhmah, located in Neighbourhood 1, Tapos 1 Village, Tenjolaya Sub-district, Bogor District, represents rural areas. Tapos 1 village is one of six villages in Tenjolaya sub-district. Tapos 1 village has an area of 4,82 km² and a total population of 9589 people with a population density of 1989 people/km². The geographical area of Tapos 1 village is divided into residential areas and agricultural land in the form of rice fields or fields. Educational facilities are available from kindergarten to public or private high school level. Health facilities and infrastructure in Tapos 1 village are 7 Integrated Services Post, 1 puskesmas, and 1 midwife office with 1 village midwife and 1 midwife assistant (Kecamatan Tenjolaya in Figures, 2018).
Integrated Services Post Dahlia, Neighbourhood 1, Bubulak Urban Village, West Bogor Sub-district, Bogor City represents urban areas. Bubulak urban village is one of 16 urban villages in West Bogor sub-district. Kelurahan Bubulak has an area of 3,14 km² and a population of 18,140 with a population density of 5777 people/km². The geographical area of Kelurahan Bubulak is filled with residential areas and public facilities. Educational facilities are available from kindergarten to high school level, both public and private. Health facilities and infrastructure in Kecamatan Bogor Barat consist of 15 Integrated Services Post, 1 puskesmas, 2 clinics, 5 practicing doctors, and 3 practicing midwives (Kecamatan Bogor Barat in Figures, 2018).
Sample Characteristics
The results showed that the age of the samples in both regions ranged from 21-45 years old. Samples in rural areas had a lower average age (27 years) compared to those in urban areas (29 years). Statistically, there was no significant difference in the age of the samples in the two regions. The average education of rural samples (6 years) was lower than that of urban samples (10 years). Most (81,8%) of the rural samples had 6-9 years of education. More than half (63,2%) of urban mothers had 10-12 years of education. There was no significant difference between the years of education of rural and urban samples. Based on occupation, rural (100%) and urban (94,7%) samples were unemployed and housewives.
Family Characteristics
The highest percentage of husbands' occupation was as laborers in both rural (59,1%) and urban areas (42,1%). The number of family members in this study ranged from 3-8 people with an average of four people. Most families in rural areas (86,4%) were categorized as small families, similar to the sample families in urban areas (73,7%). T-test results showed no significant difference (p=0,658) in family size between the two regions. Monthly per capita income was analyzed based on the West Java poverty line according to BPS (2018) of IDR 371,376 per month. Most of the samples in rural areas (86,4%) and urban areas (89,5%) were above the poverty line. The average family income of samples in rural and urban areas is categorized as non-poor because they have per capita income above the poverty line. Based on the T-test results, there was a significant difference (p=0,018) in the monthly per capita income in the two regions even though most of them were above the poverty line.
Knowledge about Diphtheria Immunization
Knowledge is an important variable in shaping a person's behavior and actions. Behavior based on knowledge will be more persistent than behavior that is not based on knowledge. Based on the results of the study, the highest knowledge score of the sample in both regions is on the statement that immunization can be done at the Integrated Services Post. The lowest knowledge score in rural areas is on the statements that DPT immunization and diphtheria immunization can be done at the hospital. In urban areas, the lowest percentage of correct answers was on the statement that diphtheria immunization can treat diphtheria. The results showed that the average knowledge in rural areas was much lower (45,3) than that of urban samples (80,2). Most (90%) of the rural samples' knowledge was in the low category. More than half (68,4%) of the urban samples' knowledge was categorized as good. None of the rural samples' knowledge (0,0%) was categorized as good, while only 1 sample in urban areas (5,3%) was categorized as poor. Based on the results of the t-test, there was a significant difference in the knowledge variable in the two regions (p=0,000).
Attitude towards Diphtheria Immunization
Attitude is one of the determinants of maternal behavior regarding child immunization. All rural samples (100%) had an attitude in the moderate category with an overall mean score of 64,4. Referring to Table 1, most of the urban samples (94,7%) also had an attitude in the moderate category, with an average of 75,7. The average attitude of mothers towards diphtheria immunization in rural areas (64,4) is lower than in urban areas (75,7). Thus, there was a significant difference (p=0,000) in the attitude of mothers in rural and urban areas (Table 1).
Reference Group for Diphtheria Immunization
Based on Table 2, reference groups are divided into three dimensions, namely interpersonal, social media, and experts. The results showed that the highest percentage of reference groups (81.8%) in rural areas was midwives from the expert dimension, while the highest percentage (94.7%) in urban areas was family from the interpersonal dimension. Family and midwives are reference groups that are close to the samples and often meet them face-to-face. Celebgrams and content creators from the social media dimension are the least chosen reference groups in rural areas (0.0%) and in urban areas (31.6%). All statements from the reference group dimension in rural areas have a lower percentage than in urban areas.
Based on the results of statistical tests, most of the statements from the three dimensions showed significant differences. The reference groups with the largest difference (p=0.000) between the two regions were social media networking (e.g., remaining interested in diphtheria immunization despite a lot of information on social media) and doctors (Table 2). Behavior of Diphtheria Immunization. The results showed that the percentage of immunization completeness in rural areas (54.9%) was lower than in urban areas (73.7%). The figures for the completeness-of-immunization dimension showed that there were 12 samples in rural areas and 14 samples in urban areas who gave DPT immunization three times. There was no significant difference in the completeness of diphtheria immunization (p=0.421). Compliance with the immunization schedule was lower in rural areas (9.1%) than in urban areas (57.9%). The rate of schedule compliance showed that there were only 2 samples in rural areas and 11 samples in urban areas who gave diphtheria immunization three times and according to the immunization schedule rules. Based on the results of the t-test, there is a significant difference (p=0.012) in the dimension of suitability of the immunization schedule between the two regions. The score for place of immunization was derived from the three administrations of DPT immunization in the sample. In rural samples, most (83.4%) chose to give immunization at the Integrated Services Post. However, in urban areas the selection of immunization sites was more spread out, with the largest percentage (49.1%) at the Integrated Services Post. There were significant differences for the Integrated Services Post (p=0.000), puskesmas (p=0.018), and midwife (p=0.023) (Table 3). Table 4 shows that in rural areas there is a significant positive relationship between social media reference groups (r=0.462; α=0.031) and the completeness of diphtheria immunization. This result means that the higher the use of social media, the higher the completeness of immunization. Family size (r=0.440; α=0.040), attitude (r=0.610; α=0.003), interpersonal reference group (r=0.618; α=0.002), and expert reference group (r=0.501; α=0.017) were significantly positively associated with conformity to the diphtheria immunization schedule for the samples in rural areas. This means that increases in family size, attitude, interpersonal reference group, and expert reference group scores will increase the suitability of the immunization schedule.
Factors influencing diphtheria immunization behavior
Logistic regression tests were conducted on the combined rural and urban samples. Immunization behavior was analyzed through two dimensions, namely completeness and suitability of the diphtheria immunization schedule. The results of the logistic regression test in Table 5 show that, of the independent variables studied on diphtheria immunization behavior in the combined rural and urban sample, three variables have a significant effect on the completeness of immunization: maternal education, attitude, and interpersonal reference group each have a significant positive effect. Two variables significantly influence the suitability of the immunization schedule, namely the length of maternal education and knowledge. The results for the immunization completeness variable showed an adjusted R² value of 0.545. This means that 54.5 percent of the completeness of diphtheria immunization is explained by the length of maternal education, attitude, and the interpersonal reference group, while the remaining 45.5 percent is explained by other variables not examined in this study. For the schedule suitability variable, the Nagelkerke R² value is 0.713. This means that 71.3 percent of the suitability of diphtheria immunization is explained by the length of maternal education and knowledge, while 28.7 percent is explained by other variables not examined in this study (Table 5). According to Winnicott (2018), women with children less than a year old tend not to work. This is because children under a year are not yet independent; therefore, the primary caregiver must meet all their needs. On the other hand, women are required to complete their work properly, and of course, married women also need to attend to other matters, namely the family. Regarding family characteristics, fewer of the sample husbands in rural areas were employed than in urban areas. Per capita income and family income in rural areas were also lower. Family income in this study is sourced from the husband's income; however, based on the results, it can be seen that most are not working. Rijal and Tahir (2022) mentioned that labor absorption in urban areas is higher than in rural areas.
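As a rough illustration of the analysis described above, the sketch below fits a logistic regression of immunization completeness on maternal education, attitude, and the interpersonal reference-group score and computes a Nagelkerke pseudo-R². The file name and column names are hypothetical placeholders, not the study's actual variables.

```python
# Hypothetical sketch: logistic regression of DPT immunization completeness
# with a Nagelkerke pseudo-R^2 similar in spirit to the value reported above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("immunization_sample.csv")   # hypothetical file name

y = df["complete_dpt"]                        # 1 = all three DPT doses given
X = sm.add_constant(df[["mother_education_years",
                        "attitude_score",
                        "interpersonal_reference_score"]])

model = sm.Logit(y, X).fit(disp=False)

# Nagelkerke pseudo-R^2 from the fitted and null log-likelihoods
n = len(y)
cox_snell = 1.0 - np.exp(2.0 * (model.llnull - model.llf) / n)
nagelkerke = cox_snell / (1.0 - np.exp(2.0 * model.llnull / n))

print(model.summary())
print(f"Nagelkerke R^2 = {nagelkerke:.3f}")
```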
Research on the knowledge variable in rural areas yielded poor results. Mulyani, Shafira, and Haris (2018) revealed that knowledge related to immunization is low because mothers rarely read and understand the records of their baby's growth and development in the Child Identity Card (KIA) book. In addition, according to Kumar, Aggarwal, and Gomber (2010), low maternal knowledge of immunization is caused by the inactivity of local health workers in providing socialization. The results in rural areas differ from those in urban areas: the samples in urban areas have good knowledge of immunization. This contrasts with previous research, which found no significant differences in knowledge, behavior, and attitudes between generations in urban and rural areas (Anam & Muflikhati, 2022). Jones et al. (2012) showed that mothers have good knowledge due to high maternal education and easy access to information about immunization-related health services. Furthermore, the attitude variable shows a significant difference between the two regions, with the highest percentage in the moderate category. Attitude is defined as a view of or feeling towards a certain object, accompanied by a tendency to act on that object (Notoatmodjo, 2012). Significant differences in attitude variables were also found in previous studies that specifically analyzed differences in consumer attitudes in rural and urban areas (Mahardika & Yuliati, 2022). According to Oliver (2014), attitudes can be shown through behavior and are sometimes referred to as behavioral effects. Attitudes can also be assessed based on affective effects; according to Oliver (2014), affect is an attitude related to one's feelings and emotions. In this study, mothers realized the importance of diphtheria immunization, as evidenced by their behavior in providing immunization.
Reference groups play an important role in providing information on diphtheria immunization. Reference groups are considered role models for consumers when making decisions about consuming goods or services (Sherif, 2015). Based on Nuraprilyanti's (2009) research, the mother's reference group in providing immunization consists of invitations and encouragement from people around her, such as family, neighbors, peer groups, midwives, and cadres. Reference groups are divided into three dimensions: interpersonal, consisting of family, friends, and neighbors; social media; and experts, consisting of Integrated Services Post cadres, puskesmas officers, midwives, and doctors. The samples in rural areas mostly chose midwives as their reference group. Health workers for immunization programs usually consist of puskesmas officers, doctors, or midwives; however, a high percentage of mothers in rural areas also chose Integrated Services Post cadres as a reference in providing diphtheria immunization. According to Hidayat and Jahari (2012), cadres are men or women who are chosen by the community and are willing and able to work in a very close relationship with health service places. The high percentage for Integrated Services Post cadres is because the Integrated Services Post is the health service closest to the community (Ministry of Health, 2011). This result is in accordance with Hoonsopon's (2016) research, which states that consumers are more influenced by information sourced from personal and group sources than by public information. The results showed that, in urban areas, the family, from the interpersonal dimension, was the most preferred reference group. Mothers who receive support from their families feel that immunization is important for children's health and immunity. Ilhami and Afif (2020) found that family emotional support influences the provision of immunization to children. Delays, lack of belief in the benefits of immunization, and busy parents are also reasons why children are not immunized or have incomplete immunization. Family problems such as sick mothers and the unaffordable cost of immunization (Rahmawati, 2014) are also obstacles to the completeness and suitability of the immunization schedule. The completeness and suitability of the immunization schedule of samples from urban areas were higher than those of samples from rural areas. This is in accordance with the research of Awoh and Plugge (2016), who stated that immunization behavior for children in urban areas is much better than in rural areas due to various factors. In the place-of-immunization dimension, the Integrated Services Post was the place most often chosen by the respondents for immunizing their children, but for the samples in the city the place of immunization was more varied and did not focus only on the Integrated Services Post. Based on the research and interview results, the samples in the village prefer the Integrated Services Post because it is closer and does not incur costs for transportation or for the immunization itself. Jayanti et al. (2017) found that barriers and distance to health services had a negative effect on the completeness of immunization. Research in other areas found that immunization of infants was well implemented, but several activities at the Integrated Services Post had not been carried out, such as the presence of doctors and health counseling, and facilities and infrastructure were inadequate (Samosir, 2023). Overall, immunization behavior in urban areas is better than in rural areas. Romadhona (2015) also stated that residential status has a significant effect on the provision of complete basic immunization to children.
The correlation test results in rural areas showed a significant positive relationship between social media and immunization completeness, as well as between family size, attitude, interpersonal reference groups, and experts and the suitability of the immunization schedule. Social media has a relationship with immunization completeness. This is in line with the research of Odone et al. (2015), which shows that easy access to social media and other online pages increases immunization behavior compared to those who have no access at all. Family size has a significant positive relationship with the suitability of the immunization schedule. Indriyani and Asih (2019) stated that family size is related to immunization schedule suitability status; this is because mothers already have experience with previous children related to immunization. Attitude is related to the suitability of the immunization schedule. This study is in line with Triana's (2016) research, which shows that attitudes are related to schedule conformity behavior, so that parents with a poorer attitude tend not to give immunizations to their children on time. Interpersonal reference groups relate to immunization schedule conformity behavior. These results are in line with Budiarti's (2019) research, which found a significant relationship between family support and maternal compliance with immunization: the higher the family support, the better the mother's behavior in providing immunization. Health workers and experts are related to the suitability of the immunization schedule. Notoatmodjo (2012) states that the behavior of health workers, such as Integrated Services Post immunization officers, puskesmas staff, and midwives, will support the formation of good immunization behavior in mothers. The results of the correlation test in urban areas showed a relationship between the length of maternal education and the completeness and suitability of the diphtheria immunization schedule. This result is in line with the research of Dinengsih and Hendriyani (2018), which states that there is a significant relationship between maternal education and the completeness of child immunization. In the overall sample, there is a significant relationship between maternal education, per capita income, attitude, interpersonal reference group, and expert reference group and the suitability of the immunization schedule. Per capita income is related to the appropriateness of the immunization schedule. The Indonesian Ministry of Health (2013) states that the higher the family's socioeconomic status, the higher the percentage of complete immunization in children. This is because parents can make more efforts that require funds so that children remain immunized according to schedule. Previous research also found a significant relationship between economic conditions and the provision of complete basic immunization (Lumbantoruan & Simanjuntak, 2020; Nadila, 2022).
The regression test results show the influence of the length of maternal education, attitude, and interpersonal reference groups on the completeness of immunization. Maternal education affects the completeness of immunization. This study is in accordance with the research of Rahman and Obaidan-Nasrin (2010), which states that mothers with higher education tend to be more complete in providing immunizations to their children.
Attitude has a significant effect; this is in line with Anton's (2014) research, which shows that maternal attitudes can be a predisposing or precipitating factor that leads mothers to bring their babies to be immunized. Attitude affects behavior because it is shaped by the belief that the behavior will lead to desired or unwanted results. Interpersonal reference groups influence the completeness of immunization. These results are in line with the research of Rahmawati and Umbul (2014), whose analysis shows an influence of support from the closest person on the immunization of children and toddlers. Family support and support from the closest person have an effect on child immunization. The regression test results show that the length of maternal education and knowledge affect the suitability of the immunization schedule. This is in accordance with the research of Vikram, Vanneman, and Desai (2012), which revealed that mothers with higher education adhere more closely to immunization guidelines and rules, not only because of the advice of health workers but because they really understand the importance of immunization, so that their children's immunization follows the applicable rules. A limitation of this study is that the results cannot be generalized because the sampling was nonprobability. In addition, the analysis used is quite limited because the instrument used in the study consisted of closed questions.
CONCLUSIONS AND SUGGESTIONS
The average age of the sample in rural areas is lower than in urban areas but not statistically different. The average years of education of rural samples is lower than that of urban samples, and the difference is statistically significant. All samples in rural areas and most samples in urban areas are housewives. The occupation of the husbands of most of the samples in both rural and urban areas was laborer. Family size in both rural and urban areas is mostly small. The average per capita income in rural areas is lower than in urban areas, and the difference is statistically significant.
The mean knowledge score of rural samples is lower than that of urban samples, and the difference is statistically significant. The mean attitude score of rural samples is lower than that of urban samples, and the difference is statistically significant. There were differences in all dimensions of reference groups between rural and urban areas. The most frequently chosen reference group in rural areas was midwives, while in urban areas it was family. The average behavior score in rural areas was lower than in urban areas, and the difference was statistically significant. The factor associated with the completeness of diphtheria immunization was social media for rural areas, while in urban areas there was no associated factor. Factors associated with compliance with the diphtheria immunization schedule in rural areas were family size, attitude, interpersonal reference group, and expert reference group, while in urban areas it was the length of maternal education. Factors influencing the completeness of diphtheria immunization are the length of the mother's education, attitude, and interpersonal reference group. Factors influencing the suitability of the diphtheria immunization schedule are the length of maternal education and per capita income.
The diphtheria immunization behavior of mothers in rural and urban areas differs significantly. Years of education and knowledge are among the variables that influence the behavior of giving diphtheria immunization. The low level of education and knowledge of mothers in rural areas is one of the causes of poor diphtheria immunization behavior. Therefore, the role of family, friends, relatives, the government, and health experts is needed to support community education and to educate mothers regarding diphtheria immunization. Education of mothers includes knowledge about diphtheria immunization and the mandatory schedule for diphtheria immunization. Mothers will then understand the impact of diphtheria disease, leading to awareness to provide complete immunization on the proper schedule. In addition, the government is expected to increase and facilitate access to education and socialization about immunization through various media that are easily accessible to communities in both rural and urban areas. Further research is needed on other mandatory immunizations in Indonesia and on differences between regions, so that solutions can be found for differences in immunization coverage.
Table 1
Distribution of samples based on Diphtheria immunization attitudes of samples and regions
Table 2
Distribution of samples based on reference group in providing diphtheria immunization
Table 3
Distribution of samples based on aspects of diphtheria immunization behavior and sample region
Table 2
Distribution of samples based on reference group in providing diphtheria immunization (continued)
Table 5
Effect of sample characteristics, family characteristics, knowledge, attitude, and reference group on diphtheria immunization behavior. Most of the samples in both regions were in the early adult age range. According to Hurlock (1986), early adulthood is characterized by several factors: reproductive age, a period of adjustment to new environments and conditions, emotional tension, and dependence on changing values. Reproductive age, especially for women, is characterized by the readiness of the reproductive organs to reproduce. The education of the samples in the village only reaches the secondary school level. This is in accordance with Wasak's (2010) research, which revealed that mothers' education in rural areas is low. Low education levels in rural areas are due to a lack of educational facilities and infrastructure. In this study, the educational facilities and infrastructure in the rural area consisted of two primary schools, one private secondary school, and one public secondary school.
Most rural and urban samples do not work, consistent with Winnicott (2018).
Table 5
Effect of sample characteristics, family characteristics, knowledge, attitude, and reference group on diphtheria immunization behavior (continued). Behavior in providing immunization is divided into two categories: completeness and compliance with the schedule. Not all samples in this study provided complete diphtheria immunization, and only a few complied with the schedule. Based on the Decree of the Minister of Health No. 482 on the National Immunization Acceleration Movement for Universal Child Immunization 2010-2014 (GAIN UCI 2010-2014), there are several reasons why children are not immunized or are incompletely immunized, namely the mother's lack of knowledge or fear of immunization side effects. | 2023-11-03T15:16:55.790Z | 2023-10-31T00:00:00.000 | {
"year": 2023,
"sha1": "aec3be258541ea8c7047fd747cad711fbf40d202",
"oa_license": "CCBY",
"oa_url": "https://journal.ipb.ac.id/index.php/jcfcs/article/download/48449/26544",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "7303b3a1e4b06e142eaeda03651a6fddbe6f565e",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": []
} |
233999214 | pes2o/s2orc | v3-fos-license | Association of masseter muscles thickness and facial morphology with facial expressions in children
Abstract Objective To evaluate the potential influence of muscular capacity and facial morphology on facial expressions in children. Materials and methods A cross‐sectional study was carried out on 40 healthy children (ages 9–13), without previous orthodontic treatment. Masseter muscle thickness and anthropometric facial proportions were measured using ultrasound and digital calipers respectively. A three‐dimensional infrared face‐tracking system was used to register facial expressions. The maximal amplitude of smile and lip pucker (representing maximal lateral and medial commissure movement) were used for analysis. Stepwise regression was used to investigate whether muscle thickness or anthropometric facial proportions were associated with the quantity of commissure movement. Results When performing maximal smile, children with thicker masseter muscles were found to have more limited displacement of the commissures (R = 0.39; p = 0.036). When performing lip pucker, children with thicker masseter muscles were found to have greater commissure movement (R = 0.40; p = 0.030). No significant associations were found between anthropometric facial proportions and facial expressions. Conclusion Masseter muscle thickness seems to be associated with facial expressions in children. Those with thicker muscles show more limited commissure movement when smiling, but greater movement with lip pucker. This indicates that masticatory muscles may serve as a surrogate for mimic muscle activity. Facial morphology of the subjects does not seem to be associated with facial expression.
Facial expressions, which are integral to social interactions, may be of clinical importance to orthodontists and patients alike, and thus call for more research into this area.
With the advent of new technologies allowing three-dimensional analyses, an evaluation of facial expressions based on two-dimensional data can only provide limited utility. Some authors have furthermore suggested the necessity of including the fourth dimension (i.e., the dynamic status over time) when evaluating the face (Trotman, 2011) or the smile (Sarver & Ackerman, 2003), which is very reasonable given the fact that, in the context of social interactions, dynamic stimuli may be interpreted differently from static ones (Rymarczyk et al., 2016). The impact that dynamic facial expressions can have on our interactions with others is of great relevance.
Facial expressions, in relation to the different vectors of motion, can be determined by the underlying hard tissues (craniofacial skeleton and dentoalveolar structures) and the soft tissues (Uchida et al., 2018). The dynamic state of the soft tissues mainly depends on the recruitment of the muscles of facial expression, which contribute to the overall attractiveness of a smile (Lin et al., 2013). The resulting facial expressions follow a unique pattern of motion which seems to be consistent from childhood to young adulthood (Curti et al., 2019). When this motion is impaired, the resulting facial expressions may lead to psychological distress (Ishii et al., 2018).
The masseter muscle has been proposed to be a muscle that is representative of the masticatory muscles in general, based on computer tomography studies (Weijs & Hillen, 1984a;Weijs & Hillen, 1985).
Moreover, masticatory muscles have been found to be associated with facial morphology, whereby those with a brachycephalic pattern have thicker muscles (Weijs & Hillen, 1984b). An indirect association has also been found with the activity of the masseter muscles having been shown to be associated with facial expressions, especially for smiling (Steele et al., 2018). In many individuals, the motor nerve to the masseter muscle has been shown to be activated during normal smile production (Schaverien et al., 2011). We can thus hypothesize that the activity of the neighboring muscles of facial expression may be related to the functional capacity of the masticatory muscles, and facial morphology, although this has never been adequately investigated. It would thus be interesting to identify individuals with a well-developed facial musculature which may have an influence on orthodontic, surgical, and cosmetic treatment planning with regard to changes in soft tissue movement and facial expressions.
Our hypothesis was that children with a well-developed masticatory system, and a brachycephalic facial morphology, show greater perioral commissure movement when preforming facial expressions.
This hypothesis is based on the aforementioned studies and the notion that the increased functional capacity of a muscle, or group of muscles, can lead to an increase in the range of motion. We therefore aimed to evaluate the potential influence of the masticatory muscular capacity as well as facial morphology on facial expressions in children.
| METHODS
The present cross-sectional prospective study was approved by our local research ethics board (No. 07-020), and all participants and their guardians gave informed consent.
| Participants
The study sample consisted of 40 healthy children, without previous orthodontic treatment, seen at the University clinics of dental medicine in Geneva, Switzerland. Children were invited to participate in the study during their pre-treatment diagnostic appointment, and for those who accepted and fulfilled the inclusion criteria, records were collected during a second appointment prior to commencing the orthodontic treatment. Eligibility criteria aimed to ensure the inclusion of subjects without striking deviations from facial norms. More specifically, inclusion criteria were the following: children aged between 9 and 13 years; mixed dentition; dental Class I or Class II malocclusion; and no previous orthodontic treatment. Exclusion criteria were the following: dental Class III malocclusion; transverse discrepancies; lip incompetence; non-nutritive sucking habits; dysfunction or pathological signs of the temporomandibular joint; extreme brachycephalic or dolichocephalic facial pattern; craniofacial syndromes; neuromuscular disorders.
| Methods
The three following variables were recorded during the same visit, by one investigator: masseter muscle thickness; anthropometric facial dimension; and facial expressions.
| Masseter muscle thickness
Masseter muscle thickness was measured to the nearest 0.1 mm using an ultrasound scanner (Pie Medical Scanner 480, 7.5 MHz linear array transducer) adhering to the method described by Raasdsheer et al (Raadsheer et al., 1994). In brief, with the children seated and their heads in natural head position, the scan plane was perpendicular to the insertion of the masseter muscle, halfway between the gonial angle and the zygomatic arch. Two registrations of the transverse section of the muscle were taken on each side of the jaw, with the muscles in contraction (maximal clenching in intercuspidation). The average of the two measurements of the transverse section of the masseter muscle in contraction was used for the analysis.
| Anthropometric facial dimensions
The anthropometric vertical facial proportions of the children were measured directly on the skin of the participants with digital calipers (FINO GmbH, Bad Bocklet, Germany), similar to what was proposed by a previous study (Raadsheer et al., 1996). The reference points on the skin were defined to approximate the underlying cephalometric landmarks of nasion, subnasale, and menton, thus allowing total facial height (distance from menton to nasion) and lower facial height (distance from menton to subnasale) to be measured. The lower facial height ratio was also calculated by dividing the lower facial height by the total facial height.
| Facial expressions
Vectors of displacement of the oral commissures were recorded dynamically during a series of facial expressions with a three-dimensional infrared face-tracking system (Smarteye® Pro system, SmartEye AB, Gothenburg, Sweden) and the corresponding custom-made MME (mimic muscle evaluation) add-on software that allows the tracking of lip movements. The protocol and method used were those described in the paper of Sjögreen et al. (Sjogreen et al., 2011).
The method has been previously validated by another study (Schimmel et al., 2010).
Briefly, the participants were seated in a chair without a headrest to ensure a natural head position. A sequence of 10 photographs was taken for the identification of landmark settings. Subsequently, the reference position "rest position" was recorded with the lips at rest. Finally, two different facial expressions were recorded, each to its maximal extent, namely maximum smile and lip pucker. This task was repeated twice, in order to record the maximal commissure movements. The maximal amplitude of the smile (representing maximal lateral movement of the commissures) and of the lip pucker (representing maximal medial movement of the commissures) were used for the analysis.
Movements were looked at in the three axes, namely the x-axis (horizontal commissure movement), y-axis (vertical commissure movement), and z-axis (anteroposterior commissure movement). Changes from rest position to maximal smile, or lip pucker, were recorded.
Moreover, the resultant (R) was also calculated, which represents the combined three-dimensional oral commissure displacement, considering the movement in all three axes. This was calculated with the following formula: R = √[(change in horizontal oral commissure movement) 2 + (change in vertical oral commissure movement) 2 + (change in antero-posterior oral commissure movement) 2 ].
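As a minimal sketch of this calculation (with made-up landmark coordinates rather than study data), the resultant three-dimensional commissure displacement can be computed as follows:

```python
# Resultant (R): combined 3D oral commissure displacement from rest to a
# facial expression, following the formula given above. Coordinates in mm.
import numpy as np

def resultant_displacement(rest_xyz, expression_xyz):
    """Combined three-dimensional commissure displacement (mm)."""
    delta = np.asarray(expression_xyz, dtype=float) - np.asarray(rest_xyz, dtype=float)
    return float(np.sqrt(np.sum(delta ** 2)))

rest = (12.4, -3.1, 40.2)    # hypothetical commissure position at rest
smile = (20.9, 1.8, 37.5)    # hypothetical position at maximal smile

print(f"R = {resultant_displacement(rest, smile):.2f} mm")
```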
| Statistical analysis
All statistical analyses were performed using SPSS version 25.0 (SPSS Inc., Chicago, IL). Pearson correlation was used to analyze the correlation between the two facial expressions recorded, namely maximal smile and lip pucker, including age and sex as covariables. Similarly, Pearson correlation was used to analyze the correlation between masseter muscle thickness and anthropometric facial proportions.
Stepwise regression was performed with soft tissue commissure movement as the dependent variable and the square of muscle thickness or anthropometric facial proportions as independent variables, along with age and sex. Masseter muscle thickness was squared for these analyses since the activity of this muscle depends on its surface area and not solely on its thickness. Soft tissue commissure movements were defined as movements from rest to facial expressions (maximum smile or lip pucker).
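A possible implementation of such a forward stepwise selection is sketched below; the data file, column names, and p-to-enter threshold are assumptions for illustration and do not reproduce the authors' SPSS procedure exactly.

```python
# Forward stepwise OLS selection of predictors of commissure movement.
# Candidate predictors: squared masseter thickness, facial proportion, age, sex.
import pandas as pd
import statsmodels.api as sm

def forward_select(df, outcome, candidates, p_enter=0.05):
    selected = []
    while True:
        best_p, best_var = 1.0, None
        for var in candidates:
            if var in selected:
                continue
            X = sm.add_constant(df[selected + [var]])
            p = sm.OLS(df[outcome], X).fit().pvalues[var]
            if p < best_p:
                best_p, best_var = p, var
        if best_var is None or best_p >= p_enter:
            return selected
        selected.append(best_var)

df = pd.read_csv("face_tracking_sample.csv")            # hypothetical file
df["masseter_thickness_sq"] = df["masseter_thickness_mm"] ** 2
predictors = ["masseter_thickness_sq", "lower_facial_height_ratio",
              "age", "sex"]                              # sex assumed coded 0/1
print(forward_select(df, "smile_resultant_mm", predictors))
```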
| RESULTS
The participants consisted of 30 boys and 10 girls, with a mean age of 11.3 years (±1.7 years). All of the eligible participants accepted to take part in the study during the recruitment period, and an informed consent form was signed. Descriptive data are presented in Table 1. When performing multiple regression analyses using stepwise regression, looking at associations between masseter muscle thickness, anthropometric data, age, sex, and dynamic facial expressions, the following was found. No associations were found when looking at movement in a specific axis (x-, y-, or z-axes), neither for maximal smile nor for lip pucker. For the resultant (R), however, upon maximal smile, children with thicker masseter muscles had less displacement of the oral commissures (R = 0.39; p = 0.036) (Table 2), while upon lip pucker, children with thicker masseter muscles had greater commissure displacement (R = 0.40; p = 0.030).
| DISCUSSION
The present study evaluated the association between masseter muscle thickness, anthropometric facial measurements, and facial expressions (namely maximal smile and lip pucker) in children. Correlations were found between the three-dimensional movement of the oral commissures and masseter muscle thickness both for lip pucker and maximal smile, however not in the same direction. Thicker masseter muscles, in children, are associated with greater oral commissure movement during lip pucker, but less movement when smiling. In addition, children with more oral commissure movement during smile displayed less movement during lip pucker and vice versa.
Our findings partially confirmed our hypothesis, that children with a well-developed masticatory system, and a brachycephalic facial morphology, show greater perioral commissure movement when preforming facial expressions. This was only the case with regard to the lip pucker, but not for maximal smile. Anthropometric facial proportions, in the present sample, did not show associations with facial expressions.
Facial expressions were also not associated with age or sex.
Interestingly, no associations were found between masseter muscle thickness and facial expressions within this sample when subdividing oral commissure movements into their individual components in the horizontal, vertical, and antero-posterior axes. Only when looking at the total three-dimensional commissure movement (which includes movement in the three planes of space) were significant associations observed. Perhaps the differences were too small to be significant in each individual plane, and combining them all together, as suggested by Sjögreen et al. (Sjogreen et al., 2011), permitted associations to be detected.
To the best of our knowledge, these data are the first which attempt to investigate whether the masticatory muscles are associated with soft tissues activity when performing facial expressions.
Even though the muscles of mastication do not typically belong to the muscles of facial expression, we were interested in seeing whether they could perhaps be used as an indirect marker of the state of the muscles of facial expression, since the masseter muscle is easily and reproducibly evaluated using ultrasonographic muscle thickness measurements. If the masticatory musculature is well developed, then by extrapolation perhaps the muscles of facial expression will also be well developed. (Figure 1 shows the direction of oral commissure movement in the x-, y-, and z-axes from rest position (top image) to maximal smile (center image) and lip pucker (bottom image).) The conflicting results between the two different facial expressions looked at, however, put the plausibility of such a hypothesis in doubt, since children with thicker masseter muscles showed greater soft tissue movement during lip pucker but less during maximal smile than those with thinner muscles.
When looking more specifically into the activity of the muscles of facial expression recruited when performing each of these facial expressions, it can be seen that different muscles are brought into play. The act of lip puckering mainly depends on the activation of the orbicularis oris muscle (Jain & Rathee, 2020). This muscle originates on each side from the modiolus for its deeper part and from the other muscles of facial expression for its superficial part (Nicolau, 1983).
Smiling, on the other hand, implies the recruitment of the elevation muscles of the commissures which coalesce at the modiolus.
They are mainly controlled by zygomaticus major and the levator anguli oris muscles, pulling them in a superolateral and posterior direction for the zygomaticus major and with an additional superior vector for the levator anguli oris, increasing the elevation (Dao & Le, 2020;Ewart et al., 2005). Because the prominence of the zygomaticus major is more important than the levator anguli oris muscle, a welldeveloped musculature of both muscles can diminish the vertical amplitude of the movement due to the levator anguli oris muscle. This could partially explain our results, with regard to less oral commissure movement upon smiling in children with thicker masseter muscles.
This limited upward pulling effect on the commissure when smiling in the presence of a well-developed musculature was also observed by Kant et al. (Kant et al., 2014). A negative correlation was found between the thickness and the activity of the orbicularis muscle (whose contraction occurs when smiling) in healthy individuals. Moreover, a direct link between the masseter and the modiolus does not exist, which also supports our findings. Well-developed muscles of facial expression may limit the upward pulling effect of the smile and increase the movement when performing lip pucker.
A direct connection between the masseter and the modiolus may occur via the risorius muscle, but this muscle is found in only two thirds of the population and is often described as thin and wispy (Som et al., 2012). This may not be important in relation to our results.
However, the superficial musculoaponeurotic system, is another possible direct link between the two muscle groups, and this connective tissue in the oral region pulls the skin in the direction of the masticatory muscles, especially the buccinator muscle (Hinganu et al., 2018), which supports our results by limiting the upward pulling effect when smiling in the presence of thick masseter muscles.
In our sample, no association was found between the anthropometric data and oral commissure movement during facial expressions.
Trotman et al. found that the general pattern of movement of the face follows the static facial shape, except when performing lip pucker. Based on these findings, an association was expected between anthropometric facial measurements and smiling movements, but none was found. In line with our results, however, Ramires et al. note that anthropometry based on vertical facial type determination is not a good predictor (Ramires et al., 2011), which may explain why no association between facial expressions and anthropometric facial measurements could be shown in our study.
With regard to age, Park et al. observed a reduction of the strength and endurance of the orbicularis oris muscle with aging (Park et al., 2018). This could affect facial expressions, especially lip puckering. Houstis et al. compared facial expressions in children and in adults, finding differences between the two groups (Houstis & Kiliaridis, 2009). Our results, however, found no association between facial expressions and age, perhaps because of the relatively homogenous age of the sample. Differences between males and females when performing facial expressions have been found for adults, but not for children (Houstis & Kiliaridis, 2009). Our sample, consisting only of growing children, also showed no differences between males and females.
Limitations of the present study generally relate to generalizability. Included participants were all patients presenting to our orthodontic clinic and do not necessarily constitute a representative sample of the population. Moreover, only patients with non-extreme facial patterns were included. Geographical and ethnic differences may exist, as well as differences between malocclusion groups. With regard to the dynamic oral commissure data, the generation of facial expressions in a research environment is often artificial and difficult to reproduce, and this is especially true for the smile and less so for the lip pucker (Johnston et al., 2003). Lastly, we did not have cephalometric radiographs available to assess hard tissue morphology, in order to avoid unnecessary radiation exposure. However, when cephalometric radiographs are not available to evaluate the hard tissues, anthropometric proportions can be used as an alternative, albeit with certain limitations (Budai et al., 2003; Farkas et al., 1999). Farkas et al. showed that errors in landmark positions are more frequent for vertical anthropometric data compared to cephalometric data, and thus these results should be interpreted with caution (Farkas et al., 1999).
The lips play a critical role in facial expressions (Piccinin & Zito, 2020). Most studies looking at facial expressions report on how medical intervention can normalize an affected pattern of motion, in pathological situations such as facial palsy (De Stefani et al., 2019).
Looking at normal populations and the variation that exists in facial expressions however between individuals, and with aging, is essential to understanding the effect of disease on facial expressions and the potential benefits of the correction of these problems with techniques such as facial reanimation surgery. Common facial expressions such as the smile are important to analyze, but reproducibility is also key. Popat and colleagues have raised this problem and suggest the use of the words "puppy" and "baby" as the most appropriate gestures to register lip movement (Popat, Richmond, et al., 2008;Popat, Henley, et al., 2010). Moreover, facial landmark placement, in the x-, y-and z-coordinate system must be precise, and a reproducibility of <1 mm is considered clinically acceptable (Toma et al., 2009). These are important points to consider for future investigations.
The present results are pertinent to patients and practitioners alike. As patients are becoming more actively involved in treatmentplanning decisions, a more dynamic and individualized approach to analyzing their smile and facial expressions may provide for more interactive patient-practitioner joint treatment planning, in fields such as orthodontics, orthognathic and plastic surgery, and cosmetic interventions. Care providers should be aware of different facial expression trends based on muscular factors, especially with regard to smile variation.
5 | CONCLUSIONS
1. Thicker masseter muscles in children are associated with greater commissure movement during lip pucker.
2. Thinner masseter muscles in children are associated with greater commissure movement during maximal smile.
3. In addition, children with more commissure movement during smile display less commissure movement during lip pucker and vice versa.
4. Masticatory muscles seem to be associated with the activity of the muscles of facial expressions and may serve as a surrogate for the activity of these muscles.
5. Neither facial morphology, nor age nor sex are associated with facial expression in the present sample.
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
AUTHOR CONTRIBUTIONS
Stavros Kiliaridis and Gregory S. Antonarakis were involved in the conception, design and supervision of the study. Christophe Guédat, Ourania Stergiopulos, and Gregory S. Antonarakis were involved in data acquisition, analysis and interpretation. All authors contributed to the drafting of the manuscript and its critical revision, and gave final approval of the version to be published. All authors are accountable for all aspects of the present work.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2021-05-09T06:16:26.368Z | 2021-05-08T00:00:00.000 | {
"year": 2021,
"sha1": "1426d60a50536f3b71515e08ffcf765cf755952a",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cre2.431",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09a2004b77ce8d73aab7bfc0cabb5f0ce8d570c9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202771138 | pes2o/s2orc | v3-fos-license | Application of Adam-BP neural network in leveling fitting
According to the accuracy of BP (Back Propagation) neural network in leveling fitting, an improved BP neural network model is proposed. In order to overcome the defect of the low convergence rate and local minimal in leveling fitting. The BP neural network is improved by using Relu (Rectified Linear Units) function as network activation function and Adam (Adaptive Moment Estimation) algorithm as network optimization function. The experimental results show that the improved BP neural network can effectively improve the accuracy.
Preface
Because GPS has the advantages of high positioning accuracy, all-weather and all-time operation, and a short measuring period, it plays an increasingly important role in geodetic surveying. Using GPS data for leveling fitting has become a hot issue in engineering measurement research [1] .
In the vertical datum, the elevation provided by GPS surveying is the geodetic height based on the WGS-84 reference ellipsoid, whereas the normal height is used in China. The relationship between them is as shown in Equation 1.
Regarding leveling fitting methods, many experts at home and abroad have done a lot of research, proposed a variety of models and corresponding algorithms, and achieved some results, such as the direct method, mathematical fitting methods, and neural network methods, but each model has its own advantages and disadvantages [2] . The direct method obtains elevation with high accuracy, but requires high-precision, high-density gravity data and high-resolution terrain data. Mathematical fitting is a simple and fast conversion method, but it is limited by its own model and has low precision. The BP neural network makes no model assumptions in GPS elevation conversion [3] ; it is adaptive, avoids the error caused by model hypotheses, and does not need high-precision gravity data to adjust the connection parameters in the network. By adjusting the connection parameters and weights of the neural network, any non-linear function can be approximated with arbitrary accuracy, so that the conversion result obtains higher precision, which provides a new method for leveling fitting.
The Adam-BP neural network is proposed to overcome the drawbacks of the slow convergence speed and local minima of the back-propagation algorithm. Through experiments, the GPS leveling fitting accuracies of the quadratic polynomial fitting method, the BP neural network, and the Adam-BP neural network are compared and analyzed.
Quadratic polynomial fitting method
The quadratic polynomial fitting method is often used in leveling fitting [4] . The expression relating the elevation anomaly to the plane coordinates is as follows:

ζ = a0 + a1·x + a2·y + a3·x² + a4·x·y + a5·y² (2)

In the formula, a0, a1, a2, a3, a4, and a5 are the parameters to be determined. Therefore, the area requires at least 6 known points. When there are more than 6 known points, the corresponding error equations can be listed and the parameters can be obtained according to the principle of least squares; the height anomaly of any point can then be obtained, thus giving the normal height [5] .
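A minimal sketch of this fit, assuming the standard six-parameter quadratic surface and ordinary least squares (the coordinates and anomalies below are placeholders, not the paper's control-network data):

```python
# Quadratic polynomial surface fit of the height anomaly (Equation 2),
# solved by least squares from at least six GPS/levelling points.
import numpy as np

def fit_quadratic_surface(x, y, zeta):
    """Return a0..a5 of the quadratic height-anomaly surface."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, zeta, rcond=None)
    return coeffs

def predict_anomaly(coeffs, x, y):
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1*x + a2*y + a3*x**2 + a4*x*y + a5*y**2

# known points: plane coordinates (x, y) in metres and anomaly zeta = h_GPS - H_normal
x = np.array([0.0, 120.0, 250.0, 60.0, 310.0, 180.0, 90.0])
y = np.array([0.0, 80.0, 40.0, 200.0, 150.0, 260.0, 120.0])
zeta = np.array([23.412, 23.420, 23.431, 23.405, 23.438, 23.415, 23.418])

coeffs = fit_quadratic_surface(x, y, zeta)
print("predicted anomaly at (150, 100):", predict_anomaly(coeffs, 150.0, 100.0))
```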
BP neural network model
The BP neural network is a multi-layer feedforward neural network. Each layer is composed of several neurons; neurons in adjacent layers are fully connected, while there is no connection between neurons within the same layer. The BP neural network uses supervised learning and consists of an input layer, a hidden layer, and an output layer [6] . The network structure model is shown in Figure 1.
BP neural network model
The calculation process of the BP neural network is mainly divided into two stages. The specific steps are shown in Figure 2. The first stage is the forward propagation of the input signal: after the learning samples are input, the signal propagates from the input layer through the hidden layer to the output layer. The second stage is to update the weights and thresholds between the input layer, hidden layer, and output layer, constantly revising the parameters so as to improve the accuracy of the network fitting.
Use Relu function as activation function
In a neural network, the activation function converts the input signal received by each neuron into an output signal. By introducing a non-linear activation function, the neuron can produce a non-linear output, approximate any non-linear function, and enhance the learning ability of the network [7] .
In the BP neural network, the sigmoid function is usually used as the activation function. However, in leveling fitting its accuracy is not high because of the large amount of calculation, the vanishing gradient, and the slow convergence speed. The Relu function is a piecewise linear function [8] : when the input is greater than 0, the value is output directly; when the input is less than or equal to 0, the output is 0. This feature is called single-sided suppression. Therefore, the neural network has sparsity and can better mine data-related features and fit the training data. Because of its fast convergence speed, the Relu function is used as the activation function. The formula is as follows: f(x) = max(0, x). The graph of the function is shown in Figure 3.
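A minimal illustration of the Relu activation and the gradient used during back-propagation (not code from the paper):

```python
# ReLU: output equals the input when positive, zero otherwise.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_derivative(x):
    # gradient used during back-propagation; 0 for non-positive inputs
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))             # [0.  0.  0.  0.5 2. ]
print(relu_derivative(x))  # [0. 0. 0. 1. 1.]
```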
Use the ADAM algorithm as an optimization function
In the BP neural network, the process of finding the minimum of the loss function is called optimization. SGD (Stochastic Gradient Descent) is usually used as the optimization function of the BP neural network, but it has the drawbacks of a low convergence rate and a tendency to fall into local minima [9] , resulting in low accuracy in leveling fitting.
Adam is a learning-rate-adaptive optimization algorithm [10] , which iteratively updates the weights of the neural network based on the training data. Adam has the following advantages: 1) The gradient first moment (an exponentially weighted average) is used as momentum, the momentum is applied to the scaled gradient, and the updates are invariant to diagonal rescaling of the gradients, which makes the method suitable for problems with high noise and sparse gradients.
2) The Adam algorithm bias-corrects the first moment (the momentum term) and the non-central second moment estimate, both of which are initialized at zero. After this bias correction, the effective learning rate at each iteration stays within a certain range, making the parameter updates relatively stable. The algorithm is robust to the selection of hyperparameters and enables efficient searching of the parameter space.
The specific algorithm steps of Adam are shown in the following table:
1) Set the step size ε, the moment estimation decay rates β1 and β2, and the initial parameters θ.
2) Update the biased first moment estimate: s ← β1·s + (1 − β1)·g (7)
3) Update the biased second moment estimate: r ← β2·r + (1 − β2)·g⊙g
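The following sketch implements one Adam parameter update along the lines of the steps above; the hyperparameter values are the commonly used defaults rather than values reported in the paper.

```python
# One Adam update: biased first/second moment estimates, bias correction, step.
import numpy as np

def adam_step(theta, grad, s, r, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    s = beta1 * s + (1.0 - beta1) * grad            # biased first moment
    r = beta2 * r + (1.0 - beta2) * grad * grad     # biased second moment
    s_hat = s / (1.0 - beta1 ** t)                  # bias-corrected first moment
    r_hat = r / (1.0 - beta2 ** t)                  # bias-corrected second moment
    theta = theta - lr * s_hat / (np.sqrt(r_hat) + eps)
    return theta, s, r

theta = np.array([0.5, -1.2])
s = np.zeros_like(theta)
r = np.zeros_like(theta)
for t in range(1, 4):                               # a few dummy iterations
    grad = 2.0 * theta                              # gradient of f(theta) = sum(theta^2)
    theta, s, r = adam_step(theta, grad, s, r, t)
print(theta)
```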
Test area control network
The leveling fitting experiment is carried out with a regional GPS control network [11] . The quadratic polynomial fitting method, the BP neural network method, and the Adam-BP neural network method are used. The elevation data is shown in
Selection of sample points
Points 4, 6, 7, 13, and 19 were selected as test points, and the remaining 15 points were selected as fitting points. Fitting experiments were carried out on the three fitting models: quadratic polynomial, BP neural network, and Adam-BP, and the fitting results were compared [12] . In the accuracy formula, v is the residual and n is the number of checkpoints.
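Assuming the usual root-mean-square of the checkpoint residuals is the accuracy measure referred to here, a small sketch (with illustrative residual values) would be:

```python
# RMS of the fitting residuals v at the n withheld test points.
import numpy as np

def rms_error(residuals_m):
    v = np.asarray(residuals_m, dtype=float)
    return float(np.sqrt(np.sum(v * v) / v.size))

residuals = [0.012, -0.018, 0.007, 0.021, -0.009]   # metres, hypothetical values
print(f"fitting accuracy m = +/-{rms_error(residuals):.3f} m")
```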
Accuracy comparison of different algorithms
The fitting results of the three methods are shown in Table 2.
Conclusion
By comparing and analyzing the performance of the quadratic polynomial fitting method, the BP neural network method, and the Adam-BP neural network method in leveling fitting, the following conclusions are drawn: 1) When the number of known points is small and high accuracy is not required, the quadratic polynomial fitting method can be used.
2) Because the BP neural network has the advantages of non-linear mapping ability and small model error, its accuracy is better than that of the quadratic polynomial fitting method.
3) The Adam-BP neural network model adds momentum and an adaptive learning rate to the BP neural network model, so its accuracy in GPS leveling fitting is higher than that of the BP neural network, its convergence speed is faster, and it can avoid falling into local minima. | 2019-09-17T02:58:46.996Z | 2019-09-05T00:00:00.000 | {
"year": 2019,
"sha1": "630eeb790aa2e0707acb20235fdf6c6708e81f76",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/310/2/022036",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "2adb25d1fab738cf198a90cc025625009c2aeb98",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Computer Science"
]
} |
26293999 | pes2o/s2orc | v3-fos-license | Successful 1:1 proportion ventilation with a unique device for independent lung ventilation using a double-lumen tube without complications in the supine and lateral decubitus positions. A pilot study
Introduction Adequate blood oxygenation and ventilation/perfusion matching should be main goal of anaesthetic and intensive care management. At present, one of the methods of improving gas exchange restricted by ventilation/perfusion mismatching is independent ventilation with two ventilators. Recently, however, a unique device has been developed, enabling ventilation of independent lungs in 1:1, 2:1, 3:1, and 5:1 proportions. The main goal of the study was to evaluate the device’s utility, precision and impact on pulmonary mechanics. Secondly- to measure the gas distribution in supine and lateral decubitus position. Materials and methods 69 patients who underwent elective thoracic surgery were eligible for the study. During general anaesthesia, after double lumen tube intubation, the aforementioned control system was placed between the anaesthetic machine and the patient. In the supine and lateral decubitus (left/right) positions, measurements of conventional and independent (1:1 proportion) ventilation were performed separately for each lung, including the following: tidal volume, peak pressure and dynamic compliance. Results Our results show that conventional ventilation using Robertshaw tube in the supine position directs 47% of the tidal volume to the left lung and 53% to the right lung. Furthermore, in the left lateral position, 44% is directed to the dependent lung and 56% to the non-dependent lung. In the right lateral position, 49% is directed to the dependent lung and 51% to the non-dependent lung. The control system positively affected non-dependent and dependent lung ventilation by delivering equal tidal volumes into both lungs with no adverse effects, regardless of patient's position. Conclusions We report that gas distribution is uneven during conventional ventilation using Robertshaw tube in the supine and lateral decubitus positions. However, this recently released control system enables precise and safe independent ventilation in the supine and the left and right lateral decubitus positions.
Introduction
Adequate blood oxygenation and ventilation/perfusion matching should be main goal of anaesthetic and intensive care management. At present, one of the methods of improving gas exchange restricted by ventilation/perfusion mismatching is independent ventilation with two ventilators. Recently, however, a unique device has been developed, enabling ventilation of independent lungs in 1:1, 2:1, 3:1, and 5:1 proportions. The main goal of the study was to evaluate the device's utility, precision and impact on pulmonary mechanics. Secondly-to measure the gas distribution in supine and lateral decubitus position.
Materials and methods
69 patients who underwent elective thoracic surgery were eligible for the study. During general anaesthesia, after double lumen tube intubation, the aforementioned control system was placed between the anaesthetic machine and the patient. In the supine and lateral decubitus (left/right) positions, measurements of conventional and independent (1:1 proportion) ventilation were performed separately for each lung, including the following: tidal volume, peak pressure and dynamic compliance.
Results
Our results show that conventional ventilation using Robertshaw tube in the supine position directs 47% of the tidal volume to the left lung and 53% to the right lung. Furthermore, in the left lateral position, 44% is directed to the dependent lung and 56% to the non-dependent lung. In the right lateral position, 49% is directed to the dependent lung and 51% to the nondependent lung. The control system positively affected non-dependent and dependent lung PLOS
Introduction
The maintenance of adequate blood oxygenation should be the main goal of all intra-operative anaesthetic management. This is achieved by maintaining adequate blood pressure and preventing ventilation/perfusion mismatch. Should these conditions not be met, oxygen delivery to the cells will be inadequate, leading to anaerobic metabolism and potential multi-organ failure. One of the reasons for clinical ventilation/perfusion mismatch can be pulmonary pathology, leading to so-called shunt [1][2][3]; another is patient positioning on the operating table [4,5]. While young and healthy subjects can cope with this scenario owing to the efficiency of their physiological reflexes, many surgical procedures involve elderly and critically ill patients with significant health impairments [6,7]. Additionally, the hypoxic vasoconstriction reflex is compromised by anaesthetic agents [8][9][10].
In the lateral decubitus position, greater perfusion occurs inside the dependent lung, while greater ventilation is found within the non-dependent one [1,2,[11][12][13]. Data concerning tidal volume distribution in the lateral decubitus position are inconsistent and vary between 30% and 39% for the dependent lung. Presently, the only way to equalise uneven gas distribution is independent lung ventilation using two ventilators [1][2][3]12]. Recently, a unique device has been invented and patented by Polish engineers. This device enables dividing the inspired tidal volume into even volumes to both lungs during mechanical ventilation with double lumen oro-bronchial intubation (independent ventilation in a proportion of 1:1). This control system (called the 'tidal volume divider') can be easily attached to a respirator or an anaesthetic machine. Apart from allowing independent ventilation at a 1:1 ratio, it enables the division of the inspired volume in proportions of 2:1, 3:1 and 5:1 [14][15][16]. This capability provides a new perspective for treating intensive care unit patients with one-lung pathologies [3]. The second prospect for use is within general anaesthesia in the lateral decubitus position, especially in the surgical treatment of patients with major concomitant diseases or with pathology of one lung. The main utility of the device could be the prevention of volume shift between the lungs during ventilation in the lateral position.
The main hypothesis of the study was that the use of the device prevents the shift in tidal volume between the lungs during ventilation in the lateral decubitus position; a related aim was to evaluate its impact on pulmonary mechanics. The second goal was to measure the distribution of inspired gases between the lungs in two positions, the supine and the lateral decubitus. Additionally, the impact of position changes on tidal volume distribution was assessed.
Materials and methods
Ethical approval for this study (Ethical Committee N˚KE-0254/47/2009, approval date 26-02-2009) was provided by the Ethical Committee of the Medical University of Lublin, Poland. The study protocol approved by the Ethical Committee is attached as Supplementary Information. The study was registered as a trial at ClinicalTrials.gov, ID: NCT02786862. The trial was registered after patient recruitment in order to comply with WHO regulations. The reason for the delay in registration was that patient recruitment took place between March 2009 and February 2010, when trial registration was not obligatory under Polish regulations. The authors confirm that all ongoing and related trials for this intervention are registered. A flowchart of this study is presented in Fig 1. After obtaining written consent, 69 ASA I and II patients who underwent elective thoracic surgery were included in the analysis. Inclusion criteria were as follows: elective thoracic surgery with double lumen tube intubation, assessment as ASA I or II, and age above 18 years. Exclusion criteria were as follows: asthma or chronic obstructive pulmonary disease, history of thoracotomy, assessment as ASA III, difficult airway conditions, kyphoscoliosis or other alterations of the chest wall, and severe obesity. All patients were examined routinely 24 h before surgery. The typical tests were performed, as well as gasometric and spirometric exams for risk evaluation.
Anaesthetic management
One hour before surgery, the patients received diazepam 0.15 mg kg -1 as premedication. After arriving at the operating theatre, standard monitoring was applied, including heart rate (HR), systolic arterial pressure (SAP), diastolic arterial pressure (DAP), mean arterial pressure (MAP), and pulse oximetry (SpO 2 ). Moreover, an intravenous cannula was placed, and the infusion of a multi-electrolyte fluid at 5-10 ml kg -1 h -1 was started. After pre-oxygenation, atropine 0.5 mg and fentanyl 3 μg kg -1 were given, and the induction of anaesthesia with thiopentone 5-7 mg kg -1 was started. Suxamethonium was administered for neuromuscular blockade, and bronchial intubation with a Robertshaw double lumen tube was performed. The left bronchus was intubated for right lung surgery, and the right bronchus was intubated for left lung surgery. Tube placement was checked via auscultation and fiberscope. Anaesthesia was maintained with sevoflurane, additional fentanyl doses were used if needed, and neuromuscular blockade was obtained with vecuronium 0.1 mg kg -1 . Additionally, a 0.1 mg kg -1 dose of morphine was given subcutaneously for postoperative pain control. Patients were ventilated with an O 2 /air mixture, using the following settings: volume-controlled intermittent positive pressure ventilation, FiO 2 0.4, Vt 6-10 ml/kg, and f 12-15 /min. Furthermore, end-tidal CO 2 was monitored to maintain normocapnia (4.0-5.3 kPa). At the end of surgery, an intercostal blockade was performed with 0.5% bupivacaine, 5 ml per nerve. Finally, the neuromuscular blockade was reversed with neostigmine 0.04 mg kg -1 and atropine 0.01 mg kg -1 .
Ventilation measurements
After anaesthesia stabilisation, we used the unique control system, called the 'tidal volume divider'. This device was placed between the anaesthetic machine and the double lumen tube of the patient. This control system enables conventional ventilation (without any intervention from the control system, as the settings are defined on the ventilator), as well as independent ventilation with division of the tidal volume between the lungs in proportions of 1:1, 2:1, 3:1, and 5:1. It also enables selective positive end-expiratory pressure (PEEP) application to each lung, using mechanical valves. With independent ventilation, settings such as frequency, tidal volume and inspiration time are defined by the ventilator, and this system only controls the direction of tidal volume to each lung (as the control system is a flow divider) by using a differential pneumatic resistor. It also enables dependent and non-dependent lung ventilation in the lateral decubitus positions. Furthermore, the system monitors the expired volume, airway pressure and dynamic compliance of each lung (Fig 2). This device was described and tested on mechanical lung models with a variety of lung model mechanics (compliance, resistance) and ventilation parameters (frequency, tidal volume) [15]. It was also tested clinically for its safety during our previous study [16]. The system was invented, developed and patented by a group of Polish engineers from the Nałęcz Institute of Biocybernetics and Biomedical Engineering, the Polish Academy of Sciences.
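The per-lung dynamic compliance reported by the system follows from the usual bedside relation, dynamic compliance = tidal volume / (peak pressure − PEEP). The following Python sketch illustrates how such values can be derived from the monitored quantities; the numbers are hypothetical and are not taken from this study.

```python
def dynamic_compliance(tidal_volume_ml, peak_pressure_cmh2o, peep_cmh2o=0.0):
    """Dynamic compliance in ml/cmH2O: Cdyn = VT / (Ppeak - PEEP)."""
    driving_pressure = peak_pressure_cmh2o - peep_cmh2o
    if driving_pressure <= 0:
        raise ValueError("peak pressure must exceed PEEP")
    return tidal_volume_ml / driving_pressure

# Hypothetical per-lung readings at one measurement point
left_cdyn = dynamic_compliance(tidal_volume_ml=300, peak_pressure_cmh2o=18)
right_cdyn = dynamic_compliance(tidal_volume_ml=330, peak_pressure_cmh2o=18)
print(f"Cdyn left: {left_cdyn:.1f} ml/cmH2O, right: {right_cdyn:.1f} ml/cmH2O")
```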
During the entire procedure, we constantly monitored the expired volume, the peak respiratory pressure and the dynamic compliance separately for each lung. These values were documented at each point of the study. We made measurements for conventional ventilation in the supine position; then, we began independent ventilation at a 1:1 proportion. Subsequently, we discontinued independent ventilation and moved the patient to the lateral decubitus position.
In so doing, we divided our sample into two groups as follows: group L, patients moved to the left decubitus position due to right lung surgery, and group R, patients moved to the right decubitus position due to left lung surgery. We then made measurements during conventional ventilation, followed by independent ventilation in a 1:1 proportion. Measurements at each point of the study, also covering hemodynamic parameters (MAP, HR) and oxygenation state (SpO 2 ), were made after a 10-minute stabilisation period. Adverse effects of ventilation with the control system were defined as follows: an increase/decrease in blood pressure of more than 20% of the initial value, an increase/decrease in heart rate of more than 20% of the initial value, pulse oximetry below 96%, and peak inspiratory pressure above 30 cm H 2 O. Subsequently, we disconnected the control system and performed the typical anaesthetic procedures for thoracic surgery with one-lung ventilation. Any protocol violation resulted in withdrawal from the analysis (Fig 1). The study protocol (Polish and English versions) is attached as S1 and S2 Files. Raw clinical data are attached as S3 and S4 Files.
Statistical analysis was performed using STATISTICA software (StatSoft), with a significance level of 0.05. The sample size was calculated using the STATISTICA Power Analysis module and determined at a minimum of 23 subjects per group. Assumptions for the primary endpoint were as follows: tidal volume distributed to the dependent lung during conventional ventilation in the lateral decubitus position, on average (Mean 1) = 300 ± 30 ml (approximately 10% less than to the non-dependent lung); tidal volume distributed to the dependent lung during independent 1:1 ventilation (with the device) in the lateral decubitus position, on average (Mean 2) = 330 ± 33 ml (approximately the same as to the non-dependent lung, without volume shift between the lungs); alpha = 0.05; power goal 0.9. Group characteristics are presented as the mean and standard deviation due to normal distribution, and differences were tested using Student's parametric t-test, with the exception of age, pO 2 and sex. Age and pO 2 differences were tested using the Mann-Whitney U-test due to non-parametric distribution. Sex differences are presented as numbers and percentages and were tested with the non-parametric chi-square test. Differences in MAP were tested using Student's parametric t-test due to normal distribution. Other values are presented as the median and range due to non-parametric distribution, and differences were tested using the non-parametric Wilcoxon test.
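A rough cross-check of this sample size calculation can be carried out in Python. The sketch below assumes a two-sample t-test design with the means and standard deviations stated above; the STATISTICA Power Analysis module may use slightly different conventions, so the result is only an approximate reproduction of the reported minimum of 23 subjects per group.

```python
from math import sqrt
from statsmodels.stats.power import TTestIndPower

# Assumptions stated above: Mean 1 = 300 +/- 30 ml, Mean 2 = 330 +/- 33 ml
mean1, sd1 = 300.0, 30.0
mean2, sd2 = 330.0, 33.0
pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
effect_size = abs(mean2 - mean1) / pooled_sd      # Cohen's d, roughly 0.95

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.9)
print(f"minimum subjects per group: {n_per_group:.1f}")   # close to the reported 23
```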
There were no significant differences in patient characteristics (including spirometric and gasometric measurement results) between groups, as shown in Table 1.
Tidal volume distribution
Our results showed that during conventional ventilation using a Robertshaw tube in the supine position, the right lung received a larger volume of air in comparison to the left: 53±6% vs. 47±6% (p = 0.000), respectively (as % of total tidal volume). There was no peak pressure difference between the lungs. The dynamic compliance differed, with the right lung being more compliant (on average). Independent ventilation at a proportion of 1:1 ensured the equal division of tidal volume to each lung (50±1% each), without significant changes in peak pressure and dynamic compliance (Table 2).
After the change in position, patients in group L were placed into the left decubitus position. These subjects showed an uneven tidal volume distribution, with 44±6% to the left (lower, dependent) and 56±2% to the right (upper, non-dependent) lung (p = 0.000), without peak pressure differences between the lungs but with higher non-dependent lung compliance. However, independent ventilation at a proportion of 1:1 brought about an equal division of tidal volume to each lung (50±2% each), without significant changes in peak pressure and dynamic compliance (Table 3).
The right decubitus position also induced an unequal tidal volume distribution, with 51±5% to the left (upper, non-dependent) and 49±5% to the right (lower, dependent) lung (p = 0.035). There were no peak pressure and dynamic compliance differences between these patients' lungs. Independent ventilation in a proportion of 1:1 distributed 50±1% of the tidal volume to each lung, without any peak pressure and dynamic compliance changes (Table 3).
Impact of position change
Analysis of the impact of position change on tidal volume distribution also revealed that the left and right decubitus positions induce unequal gas distribution. Accordingly, the left position directed 44±6% of the tidal volume to the dependent lung and 56±6% to the non-dependent lung, while the right position directed 49±5% to the dependent lung and 51±5% to the non-dependent one. There were no significant peak pressure changes with the position change, but dynamic compliance decreased in the dependent lungs and increased in the non-dependent lungs in both positions (Table 4).
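Per-lung expired volumes were expressed as percentages of the total tidal volume and, as described in the statistical analysis, paired differences were tested with the non-parametric Wilcoxon test. A minimal sketch of this comparison is given below; the paired volume arrays are hypothetical and do not reproduce the study data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired per-lung expired volumes (ml) for n patients in one position
dependent = np.array([290, 305, 280, 310, 295, 300, 285, 315])
non_dependent = np.array([360, 340, 355, 345, 370, 350, 365, 335])

total = dependent + non_dependent
pct_dependent = 100 * dependent / total
pct_non_dependent = 100 * non_dependent / total

stat, p_value = wilcoxon(dependent, non_dependent)  # paired, non-parametric test
print(f"dependent lung: {pct_dependent.mean():.0f}% of VT, "
      f"non-dependent: {pct_non_dependent.mean():.0f}% of VT, p = {p_value:.3f}")
```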
Impact of independent ventilation
Independent ventilation in a proportion of 1:1 in the supine position using a Robertshaw tube increased the left lung compliance, but there were no significant changes in the right lung. Instituting independent ventilation had no effect on hemodynamic parameters, such as mean arterial pressure and heart rate, or on oxygenation state as measured by pulse oximetry in the supine position. However, independent ventilation in the left decubitus position increased the left (dependent) lung compliance and decreased that of the right (non-dependent) lung. Similar to the results observed in the supine position, in the left decubitus position independent ventilation had no significant influence on hemodynamic and oximetry measurements. In the right decubitus position, increased right (dependent) lung compliance occurred without any other significant changes (Table 5). As above, there were no changes in the hemodynamic or oximetry parameters after the initiation of independent ventilation.
There were no registered adverse effects during the whole study protocol.
Discussion
Our data with respect to gas distribution in the supine position were consistent with the established literature, as Baehrendtz and Klingstedt [12] showed the same findings (53% to the right and 47% to the left), as did Bindslev et al. [13] (52% to the right and 48% to the left). These studies were also performed on anaesthetised human subjects with double-lumen intubation. Baehrendtz et al. present slightly different values in two studies, with 55% vs. 45% in the first [1] and 54% vs. 46% in the second [2], but these findings involved intensive care unit patients with acute bilateral lung disease. The findings on inspired gas distribution in anaesthetised subjects in the lateral decubitus position were less consistent. Presently, only a few reports have assessed inspired gas distribution during conventional ventilation in anaesthetised subjects in the lateral decubitus position, and these were mostly without left/right side distinction. In contrast, our study revealed differences in gas distribution between the left and right decubitus positions. It must be noted that Bindslev et al. [13] only made their measurements in the left lateral position. Their results showed a 61% gas distribution to the non-dependent lung and 39% to the dependent lung. In two of the studies by Baehrendtz et al. [1,2] involving intensive care patients, the distribution was 70% to the non-dependent lung and 30% to the dependent lung. One of these studies did not distinguish between the left and right lateral positions [1], and the second one only assessed the left lateral position [2]. All of the studies mentioned above were performed on small samples of between 7 and 11 subjects, while our sample comprised 69 patients, giving greater statistical power.
Differences in gas distribution between the lungs in the supine position probably do not have great clinical relevance, but they are informative and could supplement literature findings, whereas uneven gas distribution in the lateral decubitus position during general anaesthesia with artificial ventilation lowers the functional residual capacity (FRC) [17,18], causing alveolar collapse [19,20] and compression atelectasis [4,20]. The second issue is the perfusion inequality caused by gravitational force [11,21]; what is more, dynamic hyperinflation diverts perfusion towards the dependent, hypo-inflated lung [22,23]. The above-mentioned factors can increase ventilation/perfusion mismatch and venous admixture [1,2,12], leading to a decrease in oxygen tension and an increased risk of organ failure. Additionally, anaesthetic agents can impair the hypoxic pulmonary vasoconstriction reflex [7][8][9][10]. One of the ways to improve ventilation/perfusion inequality is the initiation of independent ventilation, which has been previously shown to improve ventilation/perfusion matching and decrease venous admixture [1][2][3]12]. The ideal situation is to divert the tidal volume in proportion to regional perfusion. The independent ventilation of each lung with two synchronised ventilators meets this need. As shown by Baehrendtz et al. [12] on anaesthetised patients, independent ventilation with equal tidal volumes decreases shunt by 26% and increases arterial oxygen tension by 27%.
We should mention that general PEEP application can improve gas exchange, but it decreases cardiac output [24] and diverts flow into the less compliant lung [25]. Only selective PEEP into the dependent lung can improve ventilation/perfusion matching [26].
The application of independent ventilation with two ventilators is very difficult, requiring specialised equipment and additional staff. Currently, we have a new opportunity to institute independent ventilation with tidal volume equalisation by using the unique device designed, developed and patented by Polish engineers. The use of this device is practical, comfortable and safe: as demonstrated during our current study, the maximal pressure did not cross the safe level of 30 cmH 2 O. Additionally, the device enables selective PEEP application [27]. Moreover, the use of independent ventilation did not cause any adverse hemodynamic effects, as we showed in both the supine and the left and right lateral decubitus positions. Furthermore, oxygenation measured by pulse oximetry did not decrease with independent ventilation; while we hypothesised that oxygen tension should improve, our measurements of arterial oxygenation and degree of shunt were limited.
We believe that particular benefits should be obtained during thoracic surgery procedures requiring lateral decubitus positioning, especially in long-lasting operations in which alveolar collapse could occur [28]. Position-related ventilation/perfusion mismatch could be fully controlled with this system. Furthermore, in addition to 1:1 proportioning for patient ventilation, this device allows 2:1, 3:1 and 5:1 proportioning. We feel that this could be clinically useful in patients with greater impairment. Hypoxemia occurs during one-lung ventilation in the lateral position for thoracic surgery in 5-10% of patients [29] and could be exaggerated among chronically ill subjects. For example, independent ventilation with 5:1 proportioning in the lateral decubitus position for thoracic surgery could enable surgical procedures to be performed without hypoxemia. This approach requires ventilation of the non-dependent lung with some interference with the surgery, but it should be possible. This issue needs further study. Another application could involve intensive care patients with unilateral lung pathology and with large compliance differences between the lungs. Sawulski et al. [30] described the case of a young trauma patient with unilateral lung pathology who was successfully treated through 1:1 and 2:1 proportion ventilation with this device. Improved gas exchange enabled expansion of the collapsed lung without traumatic repercussions on the compliant lung and likely saved the patient from surgical lobectomy.
A very interesting clinical application could be expected in patients after single-lung transplantation: as Pilcher et al. [3] reported, treatment with independent lung ventilation after single-lung transplantation leads to a satisfactory long-term outcome.
The limitation of this study was the lack of detailed hemodynamic and oxygenation measurements, due to the low efficiency of non-invasive methods and the higher risk to patients of using invasive ones. The second limitation was that all subjects received conventional ventilation prior to independent 1:1 ventilation, which could potentially cause a carry-over effect on the independent 1:1 ventilation. We assumed that conventional ventilation should not affect patients' lungs in a lasting manner, so that when we changed to independent ventilation in 1:1 proportions all biomechanical lung properties should be the same. Of course, these assumptions apply to lungs without any serious pathologies; accordingly, only ASA I and II patients were included in our study. Moreover, to minimise the risk of bias, a 10-minute stabilisation period was applied before the measurements at each point of the study. Another issue is the lack of randomization; by the study design, the two groups R and L could not be well balanced in patient characteristics. We assumed that it would have been impossible to perform randomization while meeting the ethical committee approval. For example, if we had assigned a right lung surgery patient to group R, we would have had to move the patient to the right decubitus position, take the measurements, and then move the patient to the left decubitus position for the surgery to proceed. Additional changes in position carry unnecessary risk for the patient and should be avoided. To minimise bias caused by the lack of adjustment for baseline group characteristics, we enrolled only ASA I and II patients and applied all the exclusion criteria. If these assumptions were correct, there should not be any baseline differences between the groups, which was confirmed by our statistical analysis (Table 1). As we used two types of Robertshaw tubes, left and right, this could potentially bias our measurements of inhaled gas distribution. Another disadvantage of the studied device is that during ventilation only one ventilator mode can be used, whereas during independent ventilation with two respirators two different modes can be applied, separately to each lung.
Conclusions
In conclusion, our study revealed uneven gas distribution during conventional ventilation using a Robertshaw tube in anaesthetised humans in the supine as well as the lateral decubitus positions. Of these, the most disadvantageous gas distribution was in the left lateral decubitus position. The control system, designed, developed and patented by Polish engineers, enabled precise, safe and simple independent (1:1 proportion) ventilation in the supine as well as the left and right lateral decubitus positions without any serious adverse effects. | 2018-04-03T00:42:39.060Z | 2017-09-14T00:00:00.000 | {
"year": 2017,
"sha1": "b3eda00d4b621bd427c26114dff5a4b0f3d691cd",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0184537&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b3eda00d4b621bd427c26114dff5a4b0f3d691cd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237771156 | pes2o/s2orc | v3-fos-license | Arthropods of the Limarí River basin (Coquimbo Region, Chile): taxonomic composition in agricultural ecosystems
The Limarí valley, located in the Coquimbo Region of Chile, is an important agricultural area that is immersed in the transverse valleys of the Norte Chico. In recent decades, the continuous expansion of agriculture towards dry land zones has favored the migration and establishment of potential pests, such as arthropods, that may affect crops or be zoonotic agents. Based on the limited knowledge we have about the arthropod group present in the Limarí basin, our objective is to describe the taxonomic composition of the assemblage of economically important arthropods inhabiting this basin of the semiarid region of Chile. After reviewing historical data, specimen collections, and the specialized literature, a total of 414 arthropod species were recorded. Of the total number of species recorded, 92.5% were insects, the most diverse taxon, with 11 orders. Arachnids, in turn, were represented only by Acari with 31 species. The most widely represented orders of insects were Coleoptera, Hemiptera, and Lepidoptera. Within Coleoptera the most species-rich families were, in decreasing order of importance, Curculionidae, Coccinellidae, Cerambycidae, Scarabaeidae, Chrysomelidae (Bruchinae), Ptinidae, and Bostrichidae; within Hemiptera these were Aphididae, Diaspididae, Coccidae, Pseudococcidae, Pentatomidae and Rhopalidae; and within Lepidoptera they were Noctuidae and Tortricidae. We hope this study serves as a starting point for identifying the most diverse arthropod groups and developing pest monitoring and control programs.
Highlights:
A large percentage of phytophagous species, mainly belonging to Acari, Lepidoptera, Hemiptera and Coleoptera, were registered in the Limarí basin.
Some families of agricultural importance (Aleyrodidae, Aphididae, Coccidae, Diaspididae, Margarodidae, Pseudococcidae), were observed in large agricultural crops in the basin (e.g., vines, oranges, mandarins, lemon trees, avocado trees, walnuts, olive trees, vegetable crops).
A smaller fraction corresponded to the group of predators and parasitoids, mainly represented by Coleoptera (Coccinellidae), Neuroptera (Chrysopidae) and Hymenoptera (Braconidae, Encyrtidae, Ichneumonidae, Platygastridae, Signiphoridae).
The richness and spatial records of arthropods were mostly concentrated between the city of Ovalle and the Punitaqui creek - the areas with the most intense agricultural activity in the Limarí basin.
Introduction
In Chile, the Norte Chico region extends from 27° to 32° S and encompasses the administrative regions of Atacama and Coquimbo. The area is characterized by the presence of an intermediate depression interspersed with mountain ranges that give origin to transverse valleys that extend from the Andes to the Pacific Ocean (14). These valleys, namely the valleys of Copiapó (27° to 28° S) and Huasco (28° to 29° S) in the Atacama region and the valleys of Elqui (30° S), Limarí (31° S) and Choapa (32° S) in the Coquimbo region, form a semiarid matrix characterized by scarce and disperse rainfall and the presence of permanent, mixed-regime rivers (14).
Significant among the transverse valleys of semiarid Chile is the Limarí basin, considered to be an economically important food and agricultural area (22) with secondary production activities such as small-scale agriculture, cattle raising and small-scale mining (13). Nowadays the valley surface is covered by forage (25,456 ha), fruit (20,151 ha), grapevine (8,353 ha) and vegetable (4,753 ha) cultivation lands (22). Based on export volumes, the most economically important fruit crops include grapevines (7,321.7 ha), avocado trees (4,128.0 ha), olives (2,511.2 ha) and mandarin trees (1,573.4 ha), which together with other fruit crops account for 46.9% of the fruit tree species of the Norte Chico (12,22).
The agri-food activity in the Limarí valley is directly dependent on the irrigation water storage capacity provided by its three dams (i.e., La Paloma Dam, 748 million m 3 ; Cogotí Dam, 150 million m 3 ; Recoleta Dam, 100 million m 3 ) (15). However, as a result of the introduction of novel irrigation techniques, cultivation lands have expanded from the river bank to slopes located at higher altitudes (4), that is, hillsides that naturally host vegetation whose productivity varies greatly with the seasons and is highly dependent on climate phenomena (37). Once resources in the natural areas become scarce, the newly cultivated lands, which maintain stable plant productivity due to artificial irrigation, become favorable areas for the migration of potential pests (1,2).
The establishment of large-scale cultivation systems in a semiarid basin translates into increased plant diversity and, consequently, greater availability of habitats and resources for both pests and natural enemies (3,7). For this reason, taxonomical characterization of pest species, including arthropods, is key to gaining a better understanding of their nature and of the potential biological risks of cultivation systems, particularly agricultural ones (38); such data are essential for identifying biological and entomological vulnerabilities in this large agricultural area. Based on the limited knowledge we have about the arthropod group present in the Limarí basin, our objective is to describe the taxonomic composition of the assemblage of economically important arthropods inhabiting this basin of the semiarid region of Chile.
Study area
The study area encompasses the basin of the Limarí River, including the coastal fringe extending north from the mouth of the Elqui River to the mouth of the Choapa River, following the limits of the water divide for the Limarí River basin, and the coastal fringe south of 31° S (figure 1); the corresponding limits were defined using 1:250,000 scale maps from the Instituto Geográfico Militar (29,30). The predominant soil types in the area are entisols, aridisols, and inceptisols, all of which show some influence from the vegetation (26). The climate is of the steppe type, ranging from steppe with abundant clouds on the coast to cold steppe in mountainous areas (27).
The mean annual precipitation exceeds 300 mm in mountainous areas and reaches 60 mm to 240 mm in the lowlands near the coast (16).
Figure 1. Geographic location of the Limarí basin and surrounding coastal basins (Coquimbo Region, Chile).
The annual temperature is homogeneous along the coast, but varies in the interior valleys and mountainous areas (16). Far from the influence of the sea, the vegetation in the interior areas corresponds to an interior steppe scrubland (19,37).
Capture methods and data collection
The taxonomic characterization of economically important arthropods in the Limarí basin was based on distributional data obtained from reference material deposited in the following entomological collections: Juan Enrique Barriga's personal collection (JEBC); Laboratorio de Entomología Ecológica, Universidad de La Serena, La Serena, Chile (LEULS); and Museo Entomológico Luis Peña, Departamento de Sanidad Vegetal, Facultad de Ciencias Agronómicas, Universidad de Chile, Santiago, Chile (MEUC). Additionally, the Servicio Agrícola y Ganadero de Chile (SAG) provided data from entomological prospections conducted between 2009 and 2015 in the Limarí Province. These records were supplemented with distributional data from the literature and data collected using insect-capturing devices, including nets, umbrellas, and fans, between June and October 2015. Additionally, farmers, stakeholders, community leaders, and agricultural and orchard workers were interviewed. The information was checked and complemented with literature reviews. The captured material was cleaned, dried, and preserved in alcohol (70%) until processing and mounting as per Pizarro-Araya et al. (2019a and 2019b). All the collected material is deposited at the Laboratorio de Entomología Ecológica of Universidad de La Serena (LEULS).
Discussion
One of the first studies to document potential pests in a valley of Chile's Norte Chico. Taxa of medical-veterinary importance were represented by 18 species, 14 genera, and 11 families; 9 of the species were arachnids and 9 were insects. The most important arachnid genera were Loxosceles (Sicariidae), Latrodectus, Steatoda (Theridiidae) and Rhipicephalus (Ixodidae), whereas within insects the most important ones were Triatoma and Mepraia (Reduviidae). Some species identified in this study corresponded to predators and parasitoids, both groups considered natural enemies of certain pests. These species can help control and reduce pest populations in agricultural crops (20,32).
At present, the Servicio Agrícola y Ganadero (SAG) has six agricultural pests under official control, according to the impacts they generate both in international markets and on the environment and biodiversity; these are Lobesia botrana, Bagrada hilaris, Halyomorpha halys Stål, Phyllocnistis citrella Stainton, Homalodisca vitripennis (Germar) and Dactylopius coccus (Costa) (35), of which the first two have been registered in the Limarí basin.
To date, a total of 385 insect species and 24 mite species considered potential agricultural pests have been documented for Chile (6,23,33). Despite these works, continuous updates to these taxonomic inventories are required to identify the entry pressure and establishment of pests in this transverse valley of agricultural and productive importance. We hope that this taxonomic report may be useful for monitoring and controlling pests in newly established crops in a highly modified basin.
Conclusions
The economically important arthropods in the Limarí basin included mostly insects (orders Coleoptera, Hemiptera, Lepidoptera and Hymenoptera) and, to a lesser extent, arachnids, the latter represented only by the order Acari. The majority of the species registered in the basin were phytophagous groups (table 2, page 250) that make up a large assemblage of arthropods specialized in and adapted to agricultural crops (e.g., fruit and vegetable crops) of great importance in the Limarí. The total richness recorded in the study area was higher than the richness documented for the Elqui basin, probably due to differences in the collection methods used. These data may be complemented by future studies incorporating detailed information on the major economically important groups per crop or sub-basin for a more realistic analysis of the richness level observed in the Limarí basin. | 2021-09-28T01:09:47.184Z | 2021-07-07T00:00:00.000 | {
"year": 2021,
"sha1": "8526c380562fec3178da663a6c101a5438ed097d",
"oa_license": "CCBYNCSA",
"oa_url": "https://revistas.uncu.edu.ar/ojs3/index.php/RFCA/article/download/3796/2714",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6dd007cedeeed2f7a0ba58d646f3ab2b9e34982d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
3012218 | pes2o/s2orc | v3-fos-license | High-performance SERS substrate based on hybrid structure of graphene oxide/AgNPs/Cu film@pyramid Si
We present a novel surface-enhanced Raman scattering (SERS) substrate based on graphene oxide/silver nanoparticles/copper film covered silicon pyramid arrays (GO/AgNPs/PCu@Si) by a low-cost and simple method. The GO/AgNPs/PCu@Si substrate presents high sensitivity, good homogeneity and well stability with R6G molecules as a probe. The detected concentration of Rhodamine 6 G (R6G) is as low as 10−15 M. These sensitive SERS behaviors are also confirmed in theory via a commercial COMSOL software, the electric field enhancement is not only formed between the AgNPs, but also formed between the AgNPs and Cu film. And the GO/AgNPs/PCu@Si substrates also present good property on practical application for the detection of methylene blue (MB) and crystal violet (CV). This work may offer a novel and practical method to facilitate the SERS applications in areas of medicine, food safety and biotechnology.
Because pyramid Si (PSi) is a nonmetal, we attempt to deposit a Cu film on the PSi and form a metal pyramid by a simple and low-cost method.
In this paper, based on the above advantages, we combine GO, Ag nanoparticles (AgNPs), a Cu film and PSi to form the GO/AgNPs/PCu@Si substrate. The GO/AgNPs/PCu@Si substrate shows the following advantages: (1) the PSi offers more hot spots; (2) we make use of the morphological characteristics of the PSi to fabricate a layer of pyramid Cu film, so that local electric field enhancement can form between the pyramid Cu film and the AgNPs, thereby yielding a more sensitive SERS substrate; (3) besides the above advantages of GO, it can also prevent the oxidation of the metal nanoparticles, so that more molecules can effectively adsorb on the hot spots and more stable and sensitive Raman signals are expected. What is more, the GO makes the molecules distribute more uniformly, which leads to higher linearity. Using the proposed GO/AgNPs/PCu@Si substrate, sensitive, homogeneous and stable SERS signals of R6G and methylene blue (MB) were successfully collected. This work indicates that the GO/AgNPs/PCu@Si substrate has great potential for practical application in biological sensing and other biotechnology. Figure 1 schematically illustrates the simple process for the synthesis of the GO/AgNPs/PCu@Si substrate. The PSi substrate (boron-doped single-crystal silicon) was fabricated using wet texturing technology with the assistance of NaOH 21 . The Cu film was synthesized on the PSi by thermal evaporation. The prepared Cu film covered PSi (PCu@Si) samples were then immersed into 1 mM AgNO 3 solution for 3, 4, 5, 6 and 7 min, respectively, to fabricate the AgNPs/PCu@Si substrate. The AgNPs/PCu@Si was then rinsed with deionized water and dried in an N 2 atmosphere. In order to control the number of GO layers, we chose to control the concentration and volume of the GO suspension: 50 μL of a 0.5 mg/mL GO suspension (obtained using the modified Hummers method 22 ) was deposited on the surface of the AgNPs/PCu@Si substrate by a dip-coating method. The prepared GO/AgNPs/PCu@Si substrate, sealed in a nitrogen atmosphere, was ready for the SERS measurements. All the pyramid-Si substrates in the experiment were cleaned with acetone, alcohol and deionized water before use.
Experimental
The surface morphology of the GO/AgNPs/PCu@Si substrate was characterized by scanning electron microscopy (SEM, Zeiss Gemini Ultra-55) and atomic force microscopy (AFM, Park XE-100) in non-contact mode. Transmission electron microscopy (TEM) was carried out with a Hitachi H-800 system. SERS experiments were carried out with a Horiba HR Evolution 800 Raman spectrometer with laser wavelengths of 473, 532 and 633 nm. All the spectra were collected under the same conditions (integration time: 4 s), the excitation laser spot was about 1 μm, and the effective power of the laser source was kept at 50 mW. Figure 2a shows the surface morphology of the PSi substrate, where one can observe that the regular pyramid arrays are relatively uniform. As shown in Fig. 2b, the PSi is completely covered by the Cu film after thermal evaporation, and the Cu film also presents the pyramid shape. Because of the rough surface of the PSi, it is difficult to measure the thickness of the Cu film on the PSi substrate. Therefore, we deposited the Cu film on a flat Si substrate by thermal evaporation under the same conditions to determine the thickness of the Cu film. Fig. 2c shows the AFM image of the Cu film on the flat Si substrate. As can be seen from the line profile of the Cu film, the thickness of the Cu film is 110 nm. What is more, based on the EDS spectrum of the PCu@Si shown in Fig. 2d, we can conclude that the Cu film has been fabricated on the pyramid-Si successfully and continuously. SERS spectra of R6G collected on the AgNPs/PCu@Si substrates are shown in Fig. 3h. The peak located at 613 cm −1 corresponds to the C-C-C in-plane vibration of the R6G molecule 23 . The peaks located at 774 and 1185 cm −1 can be attributed to the out-of-plane and in-plane vibrations of C-H bonds, respectively. The peaks observed at 1311, 1362, 1506 and 1647 cm −1 can be assigned to the aromatic C-C stretching vibration mode. As shown in Fig. 3i, the intensity changes as a function of replacement reaction time. Obviously, the intensity of the 613 cm −1 peak increases as the replacement reaction time increases from 3 to 5 min, saturates at 5 min, and then decreases as the replacement reaction time increases from 5 to 7 min. From this we conclude that the AgNPs/PCu@Si substrate possesses optimum SERS activity with a replacement reaction time of 5 min. Therefore, we chose the substrate with a 5 min reaction time for further research. First of all, we needed to determine the thickness of the Cu film for the target substrate. For the same reason as above, we examined the Cu film reacted with AgNO 3 solution for 5 min on a flat Si substrate via AFM. As shown in Supplementary Fig. S1, the AFM image reveals that the thickness of the Cu film is about 70 nm, and the line profile indicates that there are indeed nanoparticles on the Cu film. Supplementary Fig. S1 also shows the EDS spectrum of the AgNPs/PCu@Si with a replacement reaction time of 5 min, where the presence of the Ag element demonstrates that the AgNPs have been fabricated as expected.
Results and Discussion
Based on the above experimental results, we combined the optimum AgNPs/PCu@Si SERS substrate with GO using the dip-coating method. Figure 4a and b show SEM images of the GO/AgNPs/PCu@Si substrate at different magnifications. Some wrinkles are clearly observed on the surface of the GO/AgNPs/PCu@Si substrate, and the AgNPs/PCu@Si substrate is almost completely covered by the thin GO layer. To further demonstrate the existence of the GO film on the GO/AgNPs/PCu@Si substrate, Raman spectra were obtained. As shown in Fig. 4c, the D, G, 2D and S3 bands of GO are clearly observed. The D band (1360 cm −1 ) is assigned to the symmetrical ring-breathing vibration mode and is associated with the defects caused by the attachment of hydroxyl and epoxide groups. The G band (1595 cm −1 ) is assigned to the first-order scattering of the in-plane optical phonon E 2g mode, and the 2D band (2722 cm −1 ) is assigned to a second-order process involving two phonons with opposite momentum. The S3 band (2930 cm −1 ) is caused by the imperfectly activated grouping of phonons. The Raman spectra clearly demonstrate that high-quality GO films have been coated on the AgNPs/PCu@Si. To evaluate the uniformity of the GO, Raman mapping of the D-peak to G-peak intensity of GO on the AgNPs/PCu@Si substrates was performed over a 20 μm × 20 μm area, as shown in the inset of Fig. 3c. From the color scale in the inset, the D-peak to G-peak intensity shows only a small fluctuation, from 1132 to 1201, indicating that the GO has a relatively uniform structure over a large area. The TEM image of the GO is shown in Fig. 4d; an edge can be seen clearly at the black arrow, and the color turns darker as the number of GO layers increases. The TEM results indicate that the GO possesses a relatively uniform thickness. As shown in Fig. 4e, we detected R6G at a concentration of 10 −12 M on the GO/AgNPs/PCu@Si substrates under laser wavelengths of 473, 532 and 633 nm, respectively, and found that the intensity of the Raman spectrum is highest at the 532 nm laser wavelength. To compare the intensities under the three laser wavelengths more visually, we plotted the intensity of the R6G peak at 613 cm −1 as a function of the laser wavelength, as shown in Fig. 4f. We conclude that the main reason for this behaviour is that the absorption band of R6G on the AgNPs lies near 532 nm, so that the R6G molecular resonances play the main role 23 .
To further investigate the SERS activity of our proposed GO/AgNPs/PCu@Si substrate, we compared its SERS performance with that of the AgNPs/PCu@Si substrate. R6G aqueous solutions with varied concentrations were used as the probe molecule. Figure 5a and b show the Raman spectra of R6G on the AgNPs/PCu@Si and GO/AgNPs/PCu@Si substrates, respectively, at various concentrations from 10 −10 to 10 −15 M. After the coating of the GO film, it is clearly evident that the intensities of the SERS spectra from the GO/AgNPs/PCu@Si are much stronger than those of the AgNPs/PCu@Si. The Raman spectrum of R6G at a concentration of 10 −15 M can be easily detected on the GO/AgNPs/PCu@Si substrate, where the intensity of the peak at 613 cm −1 is about 2 times stronger than that on the AgNPs/PCu@Si substrate. This phenomenon can be attributed to the excellent bio-compatibility of GO, which can serve as a superior molecule-enricher and enhance the Raman signal [24][25][26][27][28] . Because of the higher intensity of the Raman spectra of R6G compared with those of the GO, the Raman bands of the GO become less obvious, but they can still be seen in Fig. 5b at 1360 and 1595 cm −1 , where two bulges are apparent compared with Fig. 5a. Figure 5c and d show, respectively, the Raman intensity of the R6G peak at 613 cm −1 as a function of the molecular concentration on the AgNPs/PCu@Si and GO/AgNPs/PCu@Si, in log scale, where the R 2 values reach 0.986 and 0.996, indicating that the linearity of the latter is superior to that of the former. In the latter case, the GO acts as an excellent adsorbent towards organic molecules and leads the R6G molecules to distribute uniformly. As shown in Fig. 6a and 6b, we randomly collected SERS spectra of R6G across the substrates: the peak intensities on the GO/AgNPs/PCu@Si substrates reached up to 2341, compared with 322 to 1091 on the AgNPs/PCu@Si substrates; we therefore conclude that the GO/AgNPs/PCu@Si substrates possess good uniformity of the SERS signal over a large area. The relatively good uniformity of the GO/AgNPs/PCu@Si substrates can be ascribed to the well-distributed AgNPs and the presence of the GO films. The well-distributed AgNPs on the Cu film surface produce well-distributed hot spots, and the GO films covering both the AgNPs and the spaces between them allow the probe molecules to be effectively adsorbed around the hot spots 29 . In the former case, in the absence of the GO film, the molecules distribute unevenly on the AgNPs/PCu@Si substrate, which leads to weak homogeneity of the SERS signal. The enhancement factor (EF) for the GO/AgNPs/PCu@Si substrates was calculated according to the following equation
EF = (I SERS /I RS ) × (N RS /N SERS )
where I SERS and I RS , respectively, represent the peak intensities of the SERS spectra and the normal Raman spectra, and N SERS and N RS are, respectively, the numbers of molecules on the substrates within the laser spot. According to the above equation, the EF of R6G at a concentration of 10 −15 M is calculated to be 6.7 × 10 11 for the AgNPs/PCu@Si substrate and 2 × 10 12 for the GO/AgNPs/PCu@Si substrate. Compared with the AgNPs/PCu@Si substrate, the GO/AgNPs/PCu@Si substrates exhibit an enhancement of about 3 times in the EF values, resulting from the chemical mechanism (CM) contribution of the GO. The EF of the GO/AgNPs/PCu@Si substrate is 1.7 × 10 3 times larger than that of the Au@Ag/3D-Si substrate 30 and 6.06 × 10 3 times larger than that of the G/AgNP array substrates 31 . The excellent SERS sensitivity of the GO/AgNPs/PCu@Si substrate can be attributed to the combined action of the pyramids of the PCu@Si, the coupling of GO and plasmonic AgNPs, and the local electric field enhancement between the AgNPs and the Cu film. Furthermore, we measured the stability of the GO/AgNPs/PCu@Si and AgNPs/PCu@Si SERS substrates by subjecting them to aerobic exposure for 15 days. As shown in Fig. 7a and 7b, it should be noticed that, after the oxidation treatment, the intensity of R6G at a concentration of 10 −15 M on the GO/AgNPs/PCu@Si substrate is almost invariant, indicating that the GO/AgNPs/PCu@Si substrate possesses excellent antioxidant stability. On the contrary, the intensity of R6G at a concentration of 10 −15 M on the AgNPs/PCu@Si substrate decreases markedly after the oxidation treatment. The decrease in intensity may be due to the oxidation of the AgNPs, as they are highly susceptible to oxidation, which can be confirmed by the EDS result in Fig. 7c.
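For completeness, the EF relation given above can be evaluated numerically once the peak intensities and the estimated numbers of probed molecules are known. The short Python sketch below uses placeholder values, not the measured quantities from this work.

```python
def enhancement_factor(i_sers, i_rs, n_sers, n_rs):
    """EF = (I_SERS / I_RS) * (N_RS / N_SERS)."""
    return (i_sers / i_rs) * (n_rs / n_sers)

# Placeholder example: a strong SERS peak from very few molecules versus a weak
# normal-Raman peak from many molecules within the same laser spot.
ef = enhancement_factor(i_sers=2.0e3, i_rs=1.0e3, n_sers=1.0e3, n_rs=1.0e12)
print(f"EF = {ef:.2e}")  # ~2e9 for these placeholder numbers
```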
In order to further identify the effect of the PCu film on the enhanced electric field and to better understand the SERS enhancement mechanism of the GO/AgNPs/PCu@Si substrate, we calculated and analyzed the local electric field properties of the PSi, AgNPs/Si, AgNPs/PSi and AgNPs/PCu@Si structures using commercial COMSOL software. Fig. 8a shows the x-z view of the electric field distribution on the PSi sample with an incident light wavelength of 532 nm; as shown in the inset, the definitions of the geometrical parameters are provided: E(x), H(y) and K(z) are the electric field (the polarization direction of the laser), the magnetic field and the direction of light propagation, respectively. It is obvious that the electric field is weak; as discussed above, the electromagnetic enhancement comes from metal nanoparticles, and once the AgNPs are attached on the surface they can effectively amplify and increase the electric field intensity of the plasmonic resonance. In order to demonstrate the superiority of the PSi compared with flat Si, we modelled the AgNPs/Si and AgNPs/PSi structures, respectively. As shown in Fig. 8b and c, we set the diameter of the AgNPs to 55 nm and the gaps to 10 nm. It can be seen clearly that the magnitude of the electric field of the AgNPs/PSi is much larger than that of the AgNPs/Si substrate. We then built the theoretical model of the AgNPs/PCu@Si, with the diameter of the AgNPs set to 55 nm, the gaps to 10 nm and the thickness of the Cu film to 70 nm. As shown in Fig. 8d, the electric field is stronger than that of the AgNPs/PSi; the reason for this phenomenon is that a local surface plasmon is formed between the AgNPs and the Cu film, whereas for the AgNPs/PSi case no local surface plasmon can be formed between the nanoparticles and the PSi substrate. Therefore, the excellent SERS behaviour of the AgNPs/PCu@Si substrate can be attributed to the following points: (1) the pyramidal structure of the PSi substrate can act as an amplifier for the incident light and introduce a large electric field intensity of the plasmonic resonance; (2) the PCu film can provide an extra electric field enhancement due to its interaction with the AgNPs. Based on these theoretical results, we conclude that a GO/AgNPs/PCu@Si SERS substrate with even higher sensitivity could be realized by further optimizing its structure.
To investigate the feasibility of the GO/AgNPs/PCu@Si substrates in practical applications, MB in deionized water at concentrations from 10 −5 to 10 −9 M was tested on the GO/AgNPs/PCu@Si substrates. MB dye can cause eye burns, which may be responsible for permanent injury to the eyes of humans and animals. On inhalation, it can give rise to short periods of rapid and difficult breathing, while ingestion through the mouth produces a burning sensation and may cause nausea, vomiting, profuse sweating, mental confusion, painful micturition, and methemoglobinemia 32 . Therefore, achieving the effective detection of MB is very significant for human health. In Fig. 9a, the characteristic Raman peaks of MB at 449, 670, 860, 1030, 1150, 1390, 1442 and 1617 cm −1 are observed. Among these peaks, the peak at 449 cm −1 can be attributed to the C-N-C skeletal deformation mode G(CNC), the peaks at 1390 and 1442 cm −1 can be attributed to the symmetric and asymmetric C-N stretches (v sym (CN) and v asym (CN)), respectively, and the peak at 1617 cm −1 can be attributed to the ring stretch (v(CC)). To demonstrate the capability of quantitative detection of MB, the linear fit calibration curve (R 2 = 0.982) is illustrated in Fig. 9b. A good linear response of the SERS signal is obtained from 10 −3 to 10 −9 M. In addition, we also detected CV on the GO/AgNPs/PCu@Si substrates at concentrations from 10 −5 to 10 −9 M. CV is also a dye, which is used to control fungi and intestinal parasites in humans, as an antimicrobial agent on burn victims, to treat the umbilical cords of infants, for the treatment of long-term vaginal candidosis, and for various purposes in veterinary medicine 33,34 . Fig. 9c shows the characteristic Raman peaks of CV at 223, 422, 523, 730, 915, 1178, 1372, 1533, 1588 and 1621 cm −1 ; among these peaks, the peaks at 223, 422, 523 (915), 730, 1178 and 1372 cm −1 can be assigned to the breathing of the central bonds, the Ph-C + -Ph bend, the ring skeletal vibration of radical orientation, the ring C-H bend, the ring skeletal vibration of radical orientation, and the N-phenyl stretching, respectively. The peaks at 1533, 1588 and 1621 cm −1 can be assigned to the ring C-C stretching 35 . The linearity is also shown in Fig. 9d, where the linear fit calibration curve (R 2 = 0.995) shows a good linear response. The successful detection of MB and CV indicates that the GO/AgNPs/PCu@Si substrates have great potential for the detection of other analytes in real biological systems. | 2018-04-03T03:22:18.110Z | 2016-12-07T00:00:00.000 | {
"year": 2016,
"sha1": "8666cf5206ce6e4e2199afc919648499d93a28c8",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep38539.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8666cf5206ce6e4e2199afc919648499d93a28c8",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
245609122 | pes2o/s2orc | v3-fos-license | App-based quantification of crystal phases and amorphous content in ZIF biocomposites
The performance of zeolitic imidazolate frameworks (ZIFs) as protective hosts for proteins in drug delivery or biocatalysis strongly depends on the type of crystalline phase used for the encapsulation of the biomacromolecule (biomacromolecule@ZIF). Therefore, quantifying the different crystal phases and the amount of amorphous content of ZIFs is becoming increasingly important for a better understanding of the structure–property relationship. Typically, crystalline ZIF phases are qualitatively identified from diffraction patterns. However, accurate phase examinations are time-consuming and require specialized expertise. Here, we propose a calibration procedure (internal standard ZrO2) for the rapid and quantitative analysis of crystalline and amorphous ZIF phases from diffraction patterns. We integrated the procedure into a user-friendly web application, named ZIF Phase Analysis, which facilitates ZIF-based data analysis. As a result, it is now possible to quantify i) the relative amount of various common crystal phases (sodalite, diamondoid, ZIF-CO3-1, ZIF-EC-1, U12 and ZIF-L) in biomacromolecule@ZIF biocomposites based on Zn2+ and 2-methylimidazole (HmIM) and ii) the crystalline-to-amorphous ratio. This new analysis tool will advance the research on ZIF biocomposites for drug delivery and biocatalysis.
ZIF-L 1 : 20 ml of an aqueous 0.05 mM Zn(NO 3 ) 2 *6(H 2 O) solution was added to 20 ml of an aqueous 0.4 mM 2mIm solution. The reaction was stirred for 4 h. The solid product was separated via centrifugation (13000 rpm for 5 min; centrifuge used: Eppendorf 5425) and the supernatant was discarded. The powder was washed 3 times with 20 ml of DI water each time and then air-dried for 48 h at room temperature (23 °C). For the calibration of each phase, mixtures with varying content of the pure phase and a constant amount of an internal standard (ZrO 2 ) were prepared. 5 mg of ZrO 2 were suspended in 150 µl DI water, sonicated for 15 minutes and added to a varying amount of the pure ZIF phase (0.1 mg; 0.5 mg; 1 mg; 3 mg; 5 mg). The suspensions were vortexed for 1 minute to ensure a homogeneously mixed suspension. From each suspension mixture 50 µl were drop-cast on a 1.5 cm x 1.5 cm piece of Si and air-dried for 48 h at room temperature (23 °C). Triplicates were made from each mixture.
S3.2: Quantification Peaks and Reference Intensity Ratios (RIRs)
4 selected ZIF biocomposite samples (S1-S4) with different phases (U12, Dia, ZIF-C and Sod, respectively) were made according to the literature. For this, stock solutions of Zn(OAc) 2 *2(H 2 O) (160 mM), 2mIm (2560 mM) and BSA (70 mg/ml) were diluted and mixed in varying ratios to yield a final reaction volume of 2 ml. The precise recipe can be found in Table S2. The reaction mixtures were left under static conditions at room temperature (23 °C) for 24 h. The solid product was separated via centrifugation (13000 rpm for 5 min; centrifuge used: Eppendorf 5425) and the supernatant was discarded. The powder was washed either 3 times with 1 ml DI water each time (water washing) or 2 times with 1 ml DI water and subsequently 2 times with 1 ml ethanol (ethanol washing). Finally, the powder was air-dried for 48 h at room temperature (23 °C).
Figure S4: FT-IR spectra of the synthesised ZIF biocomposite samples (S1-S4) with BSA in the 4 different phases (U12, Diamondoid, ZIF-C and Sodalite).
The bands corresponding to the amide bonds of BSA, located at 1700-1610 cm -1 and 1595-1480 cm -1 , are very prominent in the composites. 8,9 Moreover, the peaks at 420 cm -1 and 427 cm -1 , assigned to the Zn-N stretching in sod (as well as dia, U12, ZIF-EC-1 and ZIF-L) and ZIF-C respectively, are distinct in the spectra. 2,4,10 Depending on the reaction conditions and washing, ZIF-C can be obtained. This can be seen if additional modes are present in the regions of 700 to 850 cm -1 and 1300 to 1400 cm -1 , which are due to stretching modes of the incorporated CO 3 2-. 4 Through FT-IR spectroscopy one can therefore get a first impression of whether the protein is encapsulated and whether ZIF-C is formed.
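As a small worked example, the nominal ZIF weight fraction of each calibration mixture described at the start of this section follows directly from the weighed masses of ZIF and ZrO 2 ; the Python sketch below only reproduces that arithmetic.

```python
zro2_mass_mg = 5.0
zif_masses_mg = [0.1, 0.5, 1.0, 3.0, 5.0]

for m_zif in zif_masses_mg:
    wt_pct = 100 * m_zif / (m_zif + zro2_mass_mg)
    print(f"{m_zif:4.1f} mg ZIF + {zro2_mass_mg:.1f} mg ZrO2 -> {wt_pct:.1f} wt% ZIF")
```

Note that the smallest mixture (0.1 mg ZIF with 5 mg ZrO 2 ) corresponds to roughly 2 wt% ZIF, consistent with the lowest weight fraction reported for the calibration curves in S7.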
S5: Phase and Amorphous Content Quantification
If the reference intensity ratio (RIR) of a material or phase is known, the weight fraction (wt%) of this material or phase (W i ) in a mixture can be determined. This is achieved by comparing the intensity of the most intense peak of the diffraction pattern of the sample (I i ) to the most intense peak of the diffraction pattern of a standard material (I c ), which is usually Al 2 O 3 (corundum). The weight fraction of the standard material (W c ) should be known. This is summarised in the following equation 11 : W i = (I i /I c ) × (1/RIR i ) × W c . If Al 2 O 3 (corundum) is not present in the mixture or not available, another reference material (like ZrO 2 ) can be added as an internal standard (IS). However, the change in the RIR has to be accounted for. A new adapted RIR (RIR i,IS ) arises, which is a combination of the RIR of the sample and the RIR of the internal standard. One can easily determine the RIR i,IS by mixing a known amount of the IS with a known amount of the sample; the same relation is then applied with the IS in place of corundum, i.e. RIR i,IS = (I i /I IS ) × (W IS /W i ). Summing the weight fractions obtained for all crystalline phases gives the crystallinity of the sample, and the expected crystallinity should be 100 %. However, if this is not met, it means that a certain amount of the material does not contribute to the intensity and is therefore amorphous. 12
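A minimal Python sketch of this internal-standard quantification is given below. It assumes that the adapted RIR values have already been determined from calibration mixtures and that a known weight fraction of ZrO 2 was added to the sample; all numerical values are purely illustrative.

```python
def weight_fraction(i_phase, i_is, w_is, rir_phase_is):
    """W_i = (I_i / I_IS) * (1 / RIR_i,IS) * W_IS (RIR method with an internal standard)."""
    return (i_phase / i_is) / rir_phase_is * w_is

# Illustrative inputs: quantification-peak intensities and calibrated RIR values
w_zro2 = 0.20                                            # known weight fraction of added ZrO2
i_zro2 = 1000.0                                          # intensity of the ZrO2 reference peak
phases = {"sod": (2400.0, 3.5), "ZIF-C": (600.0, 1.8)}   # phase: (peak intensity, RIR vs ZrO2)

w = {p: weight_fraction(i, i_zro2, w_zro2, rir) for p, (i, rir) in phases.items()}
crystallinity = sum(w.values()) / (1.0 - w_zro2)  # share of the non-standard mass that diffracts
amorphous = 1.0 - crystallinity

for phase, frac in w.items():
    print(f"{phase}: {100 * frac:.1f} wt% of the sample")
print(f"crystallinity: {100 * crystallinity:.1f} %, amorphous: {100 * amorphous:.1f} %")
```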
S6: Phases Selection
For each crystal phase we have identified 3 to 4 reference peaks (Table S3) that are used to uniquely identify the presence of a phase in a diffractogram. Each phase has a quantification peak, which is highlighted in bold in Table S3. It is the most intense peak and typically the first to appear when the phase concentration in the sample is low. The remaining peaks tend to appear as the phase concentration in the sample increases. Additionally, the quantification peak should not overlap with peaks from other phases. The ZIF Phase Analysis app identifies all the peaks in a diffractogram using the diffractometry package 13 and then retains only those peaks that are in the vicinity of the reference peaks. In particular, each identified peak is compared to each reference peak by looking at the difference between the position of the identified peak and the angle of the reference peak. If the absolute difference is 0.10° or smaller, the identified peak is associated with the reference peak. The result is a table similar to Table S3 containing information about the angles of the retained peaks. Based on the retained peaks, the crystal phases present in a sample are determined using the following selection criteria:
1. If only one of the quantification peaks (e.g. the Dia peak at 15.57°) is identified, then it is assumed that only one crystal phase (e.g. Dia) is present in the sample and that quantification peak is used to quantify the crystal phase;
2. If more than one quantification peak is identified, it is assumed that multiple crystal phases exist in the sample. To select the multiple phases present in a sample we developed an optimized procedure that minimizes the discrepancy between app-based and expert-based phase selection. The optimized procedure selects the Sod, Dia, and/or ZIF-C phase if the corresponding quantification peaks are identified. For the remaining phases, the presence of both the quantification peak and the majority (i.e. 3 out of 4, or 2 out of 3) of the reference peaks is required. Finally, the selected phases are quantified using the corresponding quantification peaks.
It is noted that there exists an overlap between two reference peaks of the Sod phase (7.36° and 18.07°) and the ZIF-L phase (7.33° and 18.00°). To account for this overlap, the ZIF-L phase is considered to be present in the sample only if the peak at 7.77° is identified as well.
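A short Python sketch of the matching tolerance and selection rules just described follows; it is not the ZIF Phase Analysis app, and the reference-angle lists (apart from the peaks quoted in the text) are placeholders standing in for Table S3.

```python
# Sketch of the peak-matching and phase-selection logic described above.
# Reference angles other than those quoted in the text are illustrative placeholders.
TOL = 0.10  # maximum |2-theta difference| in degrees for a match

REFERENCE = {  # phase -> reference peaks; the first entry is the quantification peak
    "Sod":   [7.36, 12.76, 18.07],
    "Dia":   [15.57, 11.00, 17.50],
    "ZIF-C": [10.40, 13.00, 15.90],
    "ZIF-L": [7.33, 7.77, 18.00, 24.50],
}
BY_QPEAK_ALONE = {"Sod", "Dia", "ZIF-C"}  # selected from the quantification peak only

def matched_refs(detected, refs, tol=TOL):
    """Reference peaks that have at least one detected peak within +-tol degrees."""
    return [r for r in refs if any(abs(d - r) <= tol for d in detected)]

def select_phases(detected):
    selected = []
    for phase, refs in REFERENCE.items():
        hits = matched_refs(detected, refs)
        if refs[0] not in hits:          # quantification peak is always required
            continue
        if phase in BY_QPEAK_ALONE:
            selected.append(phase)
            continue
        majority = len(hits) > len(refs) / 2
        if phase == "ZIF-L":             # Sod/ZIF-L overlap: also require the 7.77 deg peak
            majority = majority and any(abs(d - 7.77) <= TOL for d in detected)
        if majority:
            selected.append(phase)
    return selected

print(select_phases([7.35, 12.74, 18.05]))   # -> ['Sod'] (7.35 alone does not flag ZIF-L)
print(select_phases([15.55, 10.41, 13.02]))  # -> ['Dia', 'ZIF-C']
```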
S7: Limit of detection
The lowest ZIF wt% experimentally investigated to build the calibration curves was 2% (Fig. S3). Using a 3% wt/wt ZIF-L/ZrO 2 , the ZIF-L (the sample with the lowest RIR and therefore the lowest diffraction intensity compared to ZrO 2 ) diffraction peaks were barely detectable (Fig. S7c,d). Conversely, using the same acquisition parameters (i.e. Cu 9kW source, 4°/min), the quantification peak(s) of the other ZIF phases were clearly identified. We investigated additional wt% (i.e. 1, 0.5, 0.1) of the ZIF phase with the highest RIR (sodalite) to determine its limit of detection in a mixture with ZrO 2 . We found that 0.5% wt/wt sod/ZrO 2 is the lowest sod wt% detectable using the same acquisition parameters of the other samples (Fig. S7a,b). However, to ensure the reproducibility of the results for all the phases, we suggest not to use a ZIF wt% lower than 2% when preparing the samples for the calibration curve. | 2022-01-01T16:16:11.142Z | 2021-12-30T00:00:00.000 | {
"year": 2022,
"sha1": "5af01504b1ecd86e527bee584f549679b2d9f1ad",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/ce/d2ce00073c",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "68ba0b9bdd406f0d1cabf7cb6a083ff26c33613a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
123942838 | pes2o/s2orc | v3-fos-license | Neutrino-induced upward stopping muons in Super-Kamiokande
A total of 137 upward stopping muons of minimum energy 1.6 GeV are observed by Super-Kamiokande during 516 detector live days. The measured muon flux is 0.39+/-0.04(stat.)+/-0.02(syst.)x10^{-13}cm^{-2}s^{-1}sr^{-1} compared to an expected flux of 0.73+/-0.16(theo.)x10^{-13}cm^{-2}s^{-1}sr^{-1}. Using our previously-published measurement of the upward through-going muon flux, we calculate the stopping/through-going flux ratio R, which has less theoretical uncertainty. The measured value of R=0.22+/-0.02(stat.)+/-0.01(syst.) is significantly smaller than the value 0.37^{+0.05}_{-0.04}(theo.) expected using the best theoretical information (the probability that the measured R is a statistical fluctuation below the expected value is 0.39%). A simultaneous fit to the zenith angle distributions of upward stopping and through-going muons gives a result which is consistent with the hypothesis of neutrino oscillations with the parameters sin^2 2\theta>0.7 and 1.5x10^{-3}<\Delta m^2<1.5x10^{-2} eV^2 at 90% confidence level, providing a confirmation of the observation of neutrino oscillations by Super-Kamiokande using the contained atmospheric neutrino events.
Energetic atmospheric ν µ or νµ interact with the rock surrounding the Super-Kamiokande ("Super-K") detector and produce muons via weak interactions.For downward-going particles, the rock overburden is insufficient to prevent cosmic-ray muons from overwhelming any neutrino induced muons, but upwardgoing muons are ν µ or νµ induced because the entire thickness of the earth shields the detector.Muons energetic enough to cross the entire detector are defined as "upward through-going muons", and have been discussed in a previous article [1].The typical energy of the parent neutrinos is approximately 100 GeV.Upward-going muons that stop in the detector are defined as "upward stopping muons", and come from parent atmospheric neutrinos with a typical energy of about 10 GeV.These energy spectra are shown in Fig. 1.Neutrinos arriving vertically from beneath the detector travel roughly 13,000 km from their point of production, while those coming from near the horizon originate only ∼500 km away.Thus, observation of upward-going muons provides a relatively pure sample of muon neutrinos, with a wide range of path lengths, allowing tests of possible ν µ disappearance due to flavor neutrino oscillations [2].
For atmospheric neutrinos with interaction vertices inside the detector's fiducial volume (referred to as "contained" events), Super-K measures a low ν µ /ν e ratio [3] and clearly observes a strong zenith angle dependence of the ν µ events [4].This muon neutrino disappearance [5] is consistent with ν µ ↔ ν τ oscillations, while the up-down symmetry seen in the ν e events rules out significant ν e appearance, in agreement with recent results from the CHOOZ experiment [6].The ν µ ↔ ν τ oscillation hypothesis suggested to explain the Super-K contained event results is also consistent with anomalous upward through-going muon zenith angle distributions observed by Kamiokande [7], MACRO [8] and Super-Kamiokande [1].The upward stopping muons are the remaining class of atmospheric neutrino events with which Super-K can test the oscillation hypothesis.It should be noted here that the parent neutrino energies of upward stopping muons are similar to those of multi-GeV fully-contained (FC) and partiallycontained (PC) events discussed in [4].Thus, a similar deficit of upward stopping muons to that of upward-going multi-GeV FC and PC events is expected if the oscillation hypothesis is the cause of the multi-GeV FC and PC event deficit.The main difference between the multi-GeV FC and PC event sample and the upward stopping muon sample is that roughly 80% of the upward stopping muons are generated in the surrounding rock, which has different neutrino interaction cross-sections and systematic uncertainties than does water.
The Super-K detector is a 50 kton cylindrical water Cherenkov detector.To reduce the cosmic-ray muon background, the detector was constructed ∼1000 m (2700 m.w.e.) underground at the Kamioka Observatory, Institute for Cosmic Ray Research, the University of Tokyo, in the Kamioka-Mozumi mine, Japan.The detector is divided by an optical barrier instrumented with photomultiplier tubes ("PMT"s) into a cylindrical primary detector region (the Inner Detector, or "ID") and a surrounding shell of water (the Outer Detector, or "OD") serving as a cosmic-ray veto counter.Details of the detector and general data reduction procedures can be found in reference [3].The data used in this analysis were taken from Apr. 1996 to Jan. 1998, corresponding to 516 days of detector livetime.
An upward-going muon is defined as a track that appears to enter the detector from the rock, and reconstructs as traveling in the upward direction.Thus, PMT activity in the OD at the muon's entrance point is required.The total cosmic-ray muon rate at Super-K is 2.2 Hz, of which a few percent are stopping muons, and the great majority are downward-going.The trigger efficiency for a muon entering the ID with momentum more than 200 MeV/c is ∼100% for all zenith angles.
Muons which leave entrance signal clusters in the OD with no corresponding exit cluster are regarded as stopping.A neutrino interaction inside the water of the OD also produces such a signal, so some fraction (∼ 20%) of the upward-going stopping muon sample is estimated to originate in the OD rather than the rock according to simulations.This effect has been accounted for in the expected flux calculations.
Stopping muons with track length > 7 m (∼1.6 GeV) in the ID are selected for further analysis. The stopping muon track length is determined by calculating the muon momentum from a photoelectron count in the same way as in the contained event analysis [3]. This cut eliminates short tracks that are very close to the PMTs and thus are hard to reconstruct, provides an energy threshold, and also eliminates nearly all contamination of the upward stopping muon signal by pions. These pions are photoproduced by cosmic-ray muons outside the detector and are observed to have a soft spectrum [9], such that the residual contamination from upward-going pions meeting the 7 m track length requirement is estimated to be < 1%. The nominal detector effective area for upward-going muons with a track length > 7 m in the ID is ∼1200 m². 137 upward stopping muons that satisfy a cos Θ < 0 cut are found, where Θ is the zenith angle of the muon track, with cos Θ = −1 corresponding to vertically upward-going events.
Details of the muon track reconstruction method and data reduction algorithm are similar to those of through-going muons described elsewhere [1].The total detection efficiency for the complete data reduction process for upward stopping muons is estimated by a Monte Carlo simulation to be >98% over −1 < cos Θ < 0. The validity of this Monte Carlo estimate is in turn checked with real cosmic-ray downward stopping muons, taking advantage of the up/down symmetry of the detector geometry.
Because of multiple Coulomb scattering and finite muon fitter angular resolution (∼1°), some of the abundant downward-going cosmic-ray stopping muon background may be reconstructed with cos Θ < 0 and contaminate the neutrino-induced upward stopping muon sample. Figure 2 illustrates the estimation of this contamination by extrapolation of the downward-going zenith angle distribution to the flat neutrino-induced signal near the horizon. This background falls exponentially with decreasing cos Θ; its contribution to apparent upward stopping muons is estimated to be 10.8±4.2 events in the zenith angle bin with cos Θ between -0.2 and 0. This is a larger contamination than was present in the upward through-going muon sample due to the lower muon energies (and correspondingly larger multiple scattering) of the stopping muon sample, and is more uncertain due to lower statistics. As an independent cross-check of the background contamination, the energy spectrum of the observed stopping muons is shown in Fig. 3. That shape is consistent with the expectation, and no significant contamination is observed at the 1.6 GeV muon energy analysis threshold.
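A minimal sketch of this estimate, assuming (as stated) that the background is exponential in cos Θ: fit ln N against cos Θ in the near-horizon downward-going bins and sum the extrapolated rate over the first upward bin. The bin contents below are invented placeholders, not Super-K data.

```python
# Sketch: exponential extrapolation of the downward-going stopping-muon background
# into the -0.2 < cos(theta) < 0 bin. Bin contents below are placeholders.
import numpy as np

bin_width = 0.02
cos_centers = np.array([0.01, 0.03, 0.05, 0.07, 0.09])       # near-horizon downward bins
counts = np.array([400.0, 900.0, 2100.0, 4800.0, 11000.0])   # placeholder yields per bin

slope, intercept = np.polyfit(cos_centers, np.log(counts), 1)

# extrapolate the fitted exponential into the upward-going region and sum the bins
upward_centers = np.arange(-0.19, 0.0, bin_width)
leakage = np.exp(intercept + slope * upward_centers).sum()
print(f"estimated cosmic-ray leakage below the horizon: {leakage:.1f} events")
```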
To analytically calculate the expected upward stopping (and through-going) muon flux, we employ the combination of the Bartol atmospheric neutrino flux model [10], a neutrino interaction model composed of quasi-elastic (QE) scattering [11] + single-pion (1π) production [12] + deep inelastic (DIS) scattering multi-pion production based on the parton distribution functions (PDF) of GRV94 [13] with the additional kinematic constraint of W > 1.4 GeV/c² (where W is the invariant mass of the hadronic recoil system), and Lohmann's muon energy loss formula in standard rock [14].
This expected flux is compared to three other analytic calculations to estimate the model-dependent uncertainties of the expected muon flux. The other flux calculations use various pairwise permutations of the Honda flux [15] or the atmospheric neutrino flux model calculated by the Bartol group [10], and the GRV94 DIS PDF or the CTEQ3M [16] PDF. This comparison yields a ±10% difference in the overall flux and -2% to +1% for the bin-by-bin shape difference in the zenith-angle distribution. The shape difference is due mostly to differences in the input flux models.
The expected muon flux Φ^stop_theo resulting from the above calculation is (0.73 ± 0.16) × 10^-13 cm^-2 s^-1 sr^-1 (cos Θ < 0), where the estimated theoretical uncertainties are described in Table I. The dominant error comes from the overall normalization uncertainty in the neutrino flux, which is estimated to be approximately ±20% [10,15,17] above several GeV.
Given the detector live time T, the effective area for upward stopping muons S(Θ), and the detection efficiency ε(Θ), the upward stopping muon flux is calculated using:

Φ^stop = (1/2πT) Σ_{j=1}^{N} 1/[S(Θ_j) ε(Θ_j)]

where the index j runs over observed events, 2π is the total solid angle covered by the detector for upward stopping muons, and N is the total number of observed muon events (137). Subsequently, we subtract the cosmic-ray muon contamination (10.8 events) from the most horizontal bin (-0.2 < cosΘ < 0). The resulting observed upward stopping muon flux is: Φ^stop = 0.39 ± 0.04(stat.) ± 0.02(sys.) × 10^-13 cm^-2 s^-1 sr^-1. If instead of subtracting this background, we simply omit upward stopping muons coming from the thin-rock region [1] in the most horizontal cosΘ bin (-0.1 < cosΘ < 0 and 60° < φ < 310°), the calculated flux differs from the background-subtracted flux by -1.1%. Systematic errors in the experimental measurement are summarized in Table II.
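The per-event weighting can be sketched as below; the event list, effective area, and efficiency are invented stand-ins (in the real analysis they come from the reconstruction and detector simulation), so only the structure of the sum is meaningful.

```python
# Sketch of the flux estimator: sum 1/(S(Theta)*eps(Theta)) over observed events,
# normalised by live time and the 2*pi solid angle. All inputs are placeholders.
import numpy as np

T = 516 * 86400.0                                     # 516 live days in seconds
cos_theta = np.random.uniform(-1.0, 0.0, size=137)    # stand-in for the 137 events

def effective_area_cm2(c):                            # ~1200 m^2, mildly zenith dependent
    return 1200.0e4 * (0.9 + 0.1 * np.abs(c))

def efficiency(c):                                    # > 98% over the full zenith range
    return 0.98 + 0.01 * np.abs(c)

weights = 1.0 / (effective_area_cm2(cos_theta) * efficiency(cos_theta))
flux = weights.sum() / (2.0 * np.pi * T)              # cm^-2 s^-1 sr^-1
print(f"flux ~ {flux:.2e} cm^-2 s^-1 sr^-1 (before background subtraction)")
```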
The flux as a function of zenith angle, (dΦ stop /dΩ), is shown in Fig. 4. Due to limited statistics for stopping muons, 5 angular bins are used instead of the 10 bins used for Super-K through-going muons.With the present statistics, the shape of the normalized distribution is consistent with the no-oscillations hypothesis (χ 2 /d.o.f.= 4.1/4 corresponding to 39% probability).However, the overall flux of upward stopping muons observed is substantially depressed from that expected, as this shape comparison is made after multiplying the expected flux by a free-running overall normalization factor (1+α) whose best fit value is α = −51% calculated by a χ 2 shape fit .This can be compared with an observed no-oscillations α = −14% in the through-going muon sample [1].
A more useful physical quantity than the absolute flux for probing neutrino oscillations is the stopping/through-going flux ratio R = Φ^stop/Φ^thru, which cancels much of the large (∼20%) uncertainty in the neutrino flux normalization and the neutrino interaction cross sections [18]. The systematic theoretical and experimental uncertainties in R are summarized in Table III and Table IV, respectively. The measured R is 0.22 ± 0.02(stat.) ± 0.01(sys.), while the expected R_theo is 0.37^{+0.05}_{-0.04}(theo.). The probability that the observed ratio could fluctuate this far below the expectation is 0.39%. The zenith angle distribution of the ratio R is shown in Fig. 6.
Using these data, we derive probability contours on the neutrino oscillation parameter (sin² 2θ, ∆m²) plane for the ν_µ ↔ ν_τ oscillation hypothesis, as shown in Fig. 7, based on a χ² defined by:

χ² = Σ_i [R_i − (1 + β) R_i^theo]² / [(σ^stat_{R_i})² + (σ^sys_{R_i})²] + (β/σ_β)²

where R_i is the observed ratio in the i-th cosΘ bin, σ^stat_{R_i} the experimental statistical error, σ^sys_{R_i} the bin-by-bin uncorrelated systematic error (≃2%) estimated by adding uncorrelated theoretical and experimental systematic errors in Table III and Table IV in quadrature, and R_i^theo the expected ratio. The β for R_i^theo represents a stopping-to-through-going relative muon flux normalization factor with error σ_β = 14% estimated by the correlated theoretical and experimental systematic errors in Table III and Table IV added in quadrature.
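The scan itself can be organised as in the sketch below. The observed ratios, errors and, above all, the expected_ratio() stub are placeholders (in the real analysis R_i^theo comes from the full flux and cross-section calculation with oscillations applied); the pull β is minimised analytically at each grid point.

```python
# Sketch of a chi^2 grid scan over (sin^2 2theta, Delta m^2) with one correlated pull.
# expected_ratio() is a crude stub, not the physics model used in the paper.
import numpy as np

R_obs = np.array([0.20, 0.21, 0.23, 0.22, 0.24])   # placeholder per-bin ratios
sigma = np.full(R_obs.size, 0.03)                  # stat (+) uncorrelated sys, in quadrature
sigma_beta = 0.14                                  # correlated normalisation error on R

def expected_ratio(sin2_2theta, dm2):
    """Stub: per-bin R_theo for given oscillation parameters (toy suppression)."""
    depth = np.linspace(0.2, 0.8, R_obs.size)      # fake path-length dependence per bin
    return 0.37 * (1.0 - 0.5 * sin2_2theta * depth * min(dm2 / 3e-3, 1.0))

def chi2(sin2_2theta, dm2):
    T = expected_ratio(sin2_2theta, dm2)
    # analytic minimisation over the pull beta: d(chi2)/d(beta) = 0
    beta = np.sum(T * (R_obs - T) / sigma**2) / (np.sum(T**2 / sigma**2) + 1.0 / sigma_beta**2)
    return np.sum((R_obs - (1.0 + beta) * T) ** 2 / sigma**2) + (beta / sigma_beta) ** 2

grid = [(s, d) for s in np.linspace(0.0, 1.0, 21) for d in np.logspace(-4.0, -1.0, 31)]
best = min(grid, key=lambda p: chi2(*p))
print("best-fit (sin^2 2theta, Delta m^2):", best)
```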
The allowed region thus obtained in Fig. 7 is in good agreement with that found in the Super-K contained event analysis [5].The minimum χ 2 locations on the ∆m 2 − sin 2 2θ plane and corresponding parameter values for various conditions are listed in Table V.As the minimum χ 2 lies in the unphysical region, the contour is drawn according to the prescription for bounded physical regions given in Ref. [19].Because the χ 2 surface has a rather broad minimum, the specific best-fit oscillation parameter values cited are of less importance than the contours shown in Fig. 7.If we replace the Bartol neutrino flux [10] by the Honda's flux [15] and/or the GRV94DIS parton distribution functions [13] by CTEQ3M [16], the allowed region contours do not change significantly.
Finally, we performed an oscillation analysis which uses all the upward-going muon information available by simultaneously fitting the upward through-going and stopping muon zenith angle distributions. In this analysis, χ² is defined by the sum over 10 upward through-going and 5 upward-stopping muon zenith angle bins, where (dΦ/dΩ)_i is the observed muon flux in the i-th cosΘ bin, σ_i^stat the experimental statistical error, σ_i^sys (∼2 to ∼4%) the bin-by-bin uncorrelated theoretical and experimental systematic errors in Table I and Table II added in quadrature for the stopping muons, or from Ref. [1] for the through-going muons, (dΦ/dΩ)_i^theo the expected muon flux, α the running overall flux normalization factor with error σ_α of 22% estimated by adding in quadrature the correlated theoretical systematic errors in Table I, and β the running stopping-to-through-going relative muon flux normalization factor with error σ_β = 14% estimated by the correlated theoretical and experimental systematic errors in Table III and Table IV added in quadrature. The results are shown in Fig. 7 and are consistent with those from the Super-K contained event analysis. As is shown in Table V, the minimum χ² falls in the unphysical region so the contours are drawn according to the prescription for bounded physical regions given in Ref. [19].
The contamination due to ν e charged current interactions and neutral current interactions in the upward-going muons is estimated to be < 1% by a Monte Carlo simulation and is neglected in these analyses.The contribution of possible ν τ interactions in the rock below to the upward stopping muon flux is suppressed by branching ratios and kinematics to less than a few percent, and is also neglected.
The observed overall upward stopping muon flux alone and its zenith angle distribution do not conflict with the expected no-oscillations values significantly within the present statistical and systematic errors.However, in order to simultaneously explain both the stopping and through-going upward muon fluxes using both an analysis based on the stop/through ratio R and a combined fit of upward stopping and through-going muon zenith angle data, the no-oscillations theoretical expectations reproduce the observed fluxes poorly at the 0.39% C.L. (stop/thru ratio averaged over zenith angle) and 0.87% C.L, respectively, while the ν µ ↔ ν τ oscillation assumption is consistent with observations.This result supports the evidence for neutrino oscillations given by the analysis of the contained and through-going muon atmospheric neutrino events by Super-K.
We gratefully acknowledge the cooperation of the Kamioka Mining and Smelting Company. The Super-Kamiokande experiment has been built and operated with funding from the Japanese Ministry of Education, Science, Sports and Culture, and the United States Department of Energy.

FIG. 3. The normalization is made so that the integrated number of expected events with muon energy > 1.6 GeV (the vertical line) corresponds to the observed number of events (=137). Preselection is applied at muon energy ∼1.4 GeV.
FIG.1.The energy spectra of the parent neutrinos of upward stopping muons (left) and upward through-going muons (right).The dashed lines are the result of using the Bartol input fluxes, and the solid lines are from Honda's calculations.
FIG.2.Zenith angle distribution of stopping muons near the horizon observed by Super-K.Filled triangles (open circles) indicate events coming from direction where the rock overburden is thick > 15000 m.w.e.(shallow > 5000 m.w.e.).The two distributions are normalized to a common azimuth angle range (0 -360 degrees).The solid line is the zenith fit to the shallow rock events used to estimate the cosmic-ray muon background contamination in the −0.2 < cos Θ < 0 bin.
FIG. 4. Upward stopping (filled circles) and through-going (open circles) muon fluxes observed in Super-K as a function of zenith angle.The error bars indicate uncorrelated experimental systematic plus statistical errors added in quadrature.The lower (upper) solid histograms show the expected upward stopping (through-going) muon flux based on the Bartol neutrino flux without oscillation.Also shown as lower (upper) dashed histograms are the expected stopping (through-going) muon flux assuming the best fit parameters of the combined analysis in the physical region at (sin 2 2θ, ∆m 2 ) = (1.0,3.9 × 10 −3 eV 2 ), α = +0.061and β=-0.083 for the νµ ↔ ντ oscillation case.
FIG.7.The allowed region contours at 68% (dotted contour), 90% (thick solid), and 99% (dashed) C.L. obtained by the combined analysis of Super-K upward stopping and through-going muons drawn on the (sin 2 2θ,∆m 2 ) plane for νµ ↔ ντ oscillations.The star indicates the best fit point at (sin 2 2θ, ∆m 2 ) = (1.0,3.9 × 10 −3 eV 2 ) in the physical region.The allowed region contour indicated by solid thick labelled line with "STOP/THRU" is made based on the Super-K stopping/through-going muon ratio alone at 90% C.L. Also shown is the allowed region contour (the remaining solid thin line) at 90% C.L. by the Super-K contained event analysis.The allowed regions are to the right of the contours.
TABLE I .
List of theoretical uncertainties in upward stopping muon flux calculation.
a Theoretical cosΘ bin-by-bin correlated uncertainty b Theoretical cosΘ bin-by-bin uncorrelated uncertainty
TABLE II .
List of experimental systematic errors in upward stopping muon flux measurement.Experimental cosΘ bin-by-bin correlated systematic error b Experimental cosΘ bin-by-bin uncorrelated systematic error Note that the uncertainty of the background subtraction is included in the statistical error.
TABLE III .
List of theoretical uncertainties in stopping/through-going muon flux ratio.Theoretical cosΘ bin-by-bin correlated uncertainty b Theoretical cosΘ bin-by-bin uncorrelated uncertainty c QE, 1π and DIS cross sections are changed independently by ±15%, and the effect of this on R is noted here.
TABLE IV .
List of experimental systematic errors in stopping/through-going muon flux ratio.
TABLE V .
Summary of fit results.See text for definitions of α and β. | 2019-04-14T02:28:18.645Z | 1999-08-11T00:00:00.000 | {
"year": 1999,
"sha1": "7bdb44ad9ebd81d7b50949e6ec9a58146b14a1cc",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/hep-ex/9908049",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7bdb44ad9ebd81d7b50949e6ec9a58146b14a1cc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
257255321 | pes2o/s2orc | v3-fos-license | Full analytical solution of finite-length armchair/zigzag nanoribbons
Finite-length armchair graphene nanoribbons can behave as one dimensional topological materials, that may show edge states in their zigzag-terminated edges, depending on their width and termination. We show here a full solution of Tight-Binding graphene rectangles of any length and width that can be seen as either finite-length armchair or zigzag ribbons. We find exact analytical expressions for both bulk and edge eigen-states and eigen-energies. We write down exact expressions for the Coulomb interactions among edge states and introduce a Hubbard-dimer model to analyse the emergence and features of different magnetic states at the edges, whose existence depends on the ribbon length. We find ample room for experimental testing of our predictions in N = 5 armchair ribbons. We compare the analytical results with ab initio simulations to benchmark the quality of the dimer model and to set its parameters. A further detailed analysis of the ab initio Hamiltonian allows us to identify those variations of the Tight-Binding parameters that affect the topological properties of the ribbons.
I. INTRODUCTION
The experimental identification of graphene sheets almost two decades ago 1 lead to the development of a whole new branch of condensed matter physics, that of 2D materials. Since then, several new 2D materials, such as silicene, 2 phosphorene 3 or MoS 2 4 have been fabricated, presenting different and exotic properties. However, the interest in graphene-based structures has not diminished during the years. In particular, graphene nanoribbons (GNRs) keep attracting attention due to their characteristic electronic and magnetic properties, usually related to the presence of topologically protected edge states around their zigzag terminations. Experimentally, bottom-up techniques have enabled the fabrication of long armchair GNRs of different widths and finite length from molecular precursors with atomic precision. [5][6][7][8][9][10] The existence of edge states at the zigzag ends of some of these ribbons has been confirmed by scanning tunneling microscopy, 7 while transport measurements have demonstrated their magnetic character. 11 From the theoretical point of view, the existence of edge states localized at the zigzag edges of GNRs [12][13][14][15][16][17][18] and graphene islands of different shapes 19 was predicted long time ago. But, only after the work of Cao et al in 2017, 20 the topological nature of these edge states has been unveiled. Cao et al made use of a Z 2 topological invariant that depended on the ribbon width and termination and could be computed by determining the Zak phase from the Tight-Binding (TB) wavefunctions. 21,22 Finite-length armchair ribbons could be classified into a Z 2 = 1 topological class, where ribbons host robust edge states, and a Z 2 = 0, topologically trivial class. Furthermore, GNR-based heterostructures were proposed and found, where protected edge states emerge at the boundaries between GNRs of different topology. 20,23 This work led to a renovated interest in finite-length GNRs and the topological states at their ends, with new efforts dedicated to further characterize them both computationally 24 and experimentally. 25,26 We analyse here the emergence and features of edge states in finite-length GNRs, where we map the ribbons to a waveguide of Schrieffer-Heeger-Su (SSH) 27 transverse modes. The ribbons that we discuss here can be viewed as either armchair or zigzag depending on the width/length aspect ratio, or more generally as graphene rectangles or rectangulenes. We present a full analytical solution of a graphene TB Hamiltonian with open boundary conditions in all directions to take into account the ribbons finite width and length. We uncover the bulkboundary condition 28 by relating the ribbon Hamiltonian winding number to the quantization condition for the bulk and edge states. Our analysis goes beyond a topological classification since we are able to characterize fully the edge wave-function spatial distribution, that determines the strength of electron-electron interactions and hence the magnetic properties of the ribbons. We also show how and why topological predictions for edge states fail for short enough ribbons.
The analytical solution of infinite-length ribbons with armchair and zigzag or arbitrary orientation has been known for a long time now, 29,30 where a band of edge states associated to zigzag-like terminations appears at the lateral edges of the ribbons. Akhmerov and coworkers 31 analysed the nature of edge states in finitesize graphene dots. Little effort has been done however in obtaining the analytical solution of finite-length ribbons, where a small set of edge states appears at the ribbon ends rather than along the ribbon. In addition, previous solutions usually required the definition of a one dimensional unit cell having several (more than two) basis states to generate the ribbon, while our analysis shows that two orbitals suffice just as in bulk graphene if one chooses the adequate boundary conditions. Hence the connection to bulk graphene and to the SSH model is made transparent.
We also deduce a double-site Hubbard model that accounts for the magnetic states of small-width armchair GNR (AGNR), and show how different magnetic states emerge as the ribbon length increases. We find that the length windows between transitions is large enough for N = 5 AGNR to leave ample room for experimental testing.
We complement our analytical TB approach with Density Functional Theory (DFT) simulations to deliver a complete theoretical characterization of the ribbons, with the possibility of getting in closer contact to current-day experiments. We are therefore able to characterize completely the TB parameters, where we discuss how needed second-and third-neighbour hopping elements affect the topology of a given ribbon.
The outline of this article is as follows. Section II introduces the ribbon TB Hamiltonian, our handling of open boundary conditions and explains the full analytical solution, together with a complete analysis of the exact bulk and edge states. Section III introduces an effective Hubbard dimer model that accounts for the electron-electron interactions between edge states, whose parameters are fully determined thanks to the knowledge of the exact wave-functions. The section includes a detailed analysis of the magnetic mean-field solutions of the model, where their existence is found to depend on the ribbon length. Section IV compares the analytical results to ab initio DFT simulations of the ribbons. A close inspection and handling of the DFT Hamiltonian allows us to map it to a third-nearest neighbour TB Hamiltonian. We discuss how the extra neighbour terms affect the robustness of the edge states. Section V summarizes our main conclusions. Appendix A delivers a pedagogical description of the analytical solution of finite-length one dimensional chains. Appendix B shows our DFT results for N = 7 and 9 AGNR.
II. ANALYTICAL SOLUTION OF FINITE-LENGTH GNRS
A.
Hamiltonian, eigen-states and eigen-functions of an infinite graphene sheet
We discuss here briefly the solution of an infinite graphene sheet to introduce notation that will help us to discuss the finite-length case. We consider the primitive unit cell depicted in Fig. 1 (a), with a single p_z orbital per carbon atom as usual. Lattice vectors R are spanned in terms of the primitive vectors a_1 and a_2. Distances along the X and Y axes are measured in units of the primitive vector components lengths a_x ≈ 2.13 Å and a_y ≈ 1.23 Å. This choice simplifies the algebraic expressions below, rendering our results independent of uniform distortions of the lattice from the hexagonal structure (notice, however, that lattice distortions affect the value of the hopping integrals, as we will discuss later). Then, the Hamiltonian of the system can be written as in Eq. (1), in terms of the creation and annihilation operators acting on sites A (B) of the unit cell defined by the lattice vector R. We consider only nearest-neighbor hopping integrals −t (Fig. 1 (a)) and set all on-site energies to zero. We gather the basis states centered at sites A or B of each R unit cell into a vector, Eq. (2). Then, any eigen-state wave-function can be written as the linear combination of Eq. (3), where translational symmetry dictates that the Bloch coefficients must be decomposed as in Eq. (4). The wave-vectors k label the Bloch eigen-states. They must be real to guarantee that the wave-function is normalizable, and are determined by imposing suitable (periodic) boundary conditions. The 2×2 Hamiltonian can be written as in Eq. (5), where

f(k) = t (1 + e^{i k·a_1} + e^{i k·a_2}) = t (1 + ∆_y e^{i k_x})    (6)

|f(k)| = t sqrt(1 + ∆_y² + 2 ∆_y cos(k_x))    (7)

and θ_k is the polar angle of f(k). We have dumped all the k_y dependence into the function ∆_y = ∆(k_y) = 2 cos(k_y), which depends only on the modulus of k_y. It is now straightforward to see that the eigen-values and eigen-function coefficients can be written as in Eq. (8), where τ = ± labels graphene's valence and conduction bands.
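As a quick numerical check of Eqs. (6)-(7), the sketch below evaluates the two π-bands ε(k) = ±|f(k)|; the hopping value is a common literature choice and not a parameter fitted in this work.

```python
# Sketch: pi-band dispersion of the infinite sheet from Eqs. (6)-(7),
# epsilon(k) = tau * |f(k)| with Delta_y = 2 cos(k_y). t is a typical literature value.
import numpy as np

t = 2.7  # eV

def band(kx, ky, tau=+1):
    d = 2.0 * np.cos(ky)                        # Delta_y
    return tau * t * np.sqrt(1.0 + d**2 + 2.0 * d * np.cos(kx))

kx = np.linspace(0.0, np.pi, 201)
for ky in (0.0, np.pi / 3.0, np.pi / 2.0):
    e = band(kx, ky)
    print(f"ky = {ky:.3f}: conduction band spans {e.min():.3f} .. {e.max():.3f} eV")
# ky = pi/3 closes the gap at kx = pi (Dirac point); ky = pi/2 gives a flat |f| = t
```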
B. Open boundary conditions in a finite armchair ribbon
We consider now armchair nanoribbons of finite length, defined by their width N (e.g.: the number of atomic rows) and their length M (e.g.: the number of hexagons along the length of the ribbon) as shown in Fig. 1 (b). We focus on odd values of N because those are the kind of ribbons that can be obtained experimentally. However, most of our analytical results are also valid for an even value of N , and we also comment briefly those cases in the following sections. In contrast with the infinite sheet, translational symmetry is broken now because edge atoms exist that have a coordination number of two instead of three. We can however restore translational symmetry by inserting fake atoms at the edges as drawn in Fig. 1 (b), so that edge atoms recover a coordination number of three. The cost for doing so consists of inserting extra equations that ensure that the wave-function is exactly zero at the fake-atom positions. These extra equations are the finite-length boundary conditions that replace the periodic boundary conditions of the infinite sheet.
We note now that the Bloch coefficients C_R in Eq. (4) are non-zero for all R, so that they cannot meet the boundary condition equations. We can however take advantage of the fact that any linear combination of same-energy bulk coefficients C_k is also an eigen-state of the system with the same energy. We therefore search for those linear combinations that fulfill the boundary conditions. Graphene bulk eigen-states have large degeneracies at most energies. This is illustrated in Fig. 1 (c), where isoenergy curves within graphene's bulk Brillouin zone are drawn. The set of possible linear combinations can however be restricted by noticing that it is the edges that mix waves, as we illustrate in Figs. 1 (d) and (e). Indeed, any wave with wave-vector k^i that impinges on an edge must bounce back with a momentum k^b whose components satisfy k^b_∥ = k^i_∥ and k^b_⊥ = −k^i_⊥. Fig. 1 (d) shows a wave impinging on an edge that has irregular shape, typical of a chaotic cavity. This edge gives rise to many outgoing waves, and all of them must be included in the linear combination. In contrast, Fig. 1 (e) shows equal-energy waves inside one of the ribbons that we study in this article. Then the edges' symmetries restrict the possible linear combinations to just four waves for each incident wave-vector k^i. We denote the set of four waves by k_{σ,σ'} = (σ k_x, σ' k_y), where σ, σ' = ±. These considerations imply that the wave-function coefficients consist of the summation of four Bloch coefficients, Eq. (9), with boundary conditions

C_(Rx = 2M, Ry) = 0; R_y = 2, 4, ..., N − 1    (10)

The eigen-functions of the system are characterized by a single wave-vector k that lies inside a region within the first quadrant of the Brillouin Zone. We draw in Fig. 1 (f) several possible choices for this region. We have chosen the region enclosed by the dashed red lines (k_x ∈ [0, π], k_y ∈ [0, π/2]) because the function ∆_y ≥ 0 inside it. To proceed, we notice that the Bloch coefficients C_k in Eq. (8) depend only on the modulus of k_y, e.g. C_(kx, ky) = C_(kx, −ky), and we denote these by C_kx below. This means that we can factorize the wave-function coefficients as in Eqs. (11)-(13). Notice that D_Rx(k_x, k_y) is a vector of components d^A_Rx, d^B_Rx, while E_Ry(k_y) is just a scalar. Similarly, the boundary conditions in Eq. (10) can be written in a factorized form, Eqs. (14) and (15). Eqs. (11) through (15) are the first central result of this article. We can infer from them that a finite-length armchair GNR system is a separable problem in the sense that it can be decomposed into two much simpler finite-length one-dimensional models as follows. Eqs. (13) and (15) correspond to a simple N-site mono-atomic chain that lies along the Y-direction. As also shown in Appendix A, the boundary condition of Eq. (15) quantizes the k_y wave-vectors as follows:

sin((N + 1) k_y) = 0 ⇒ k_y = k_α = π α / (N + 1)    (16)

where we have labeled the allowed wave-vectors by the integer number α, with α = 1, ..., (N + 1)/2. These k_α wave-vectors all lie inside the ribbon Brillouin zone shown in red lines in Fig. 1 (f).
The quantized k_α wave-vectors enter Eqs. (12) and (14) as a parameter through the function ∆(k_y), and we call ∆_α = ∆(k_α) = 2 cos(k_α) to simplify the notation below. Then, these two equations correspond to a set of dimerized chains lying along the X-axis that have 2M cells. Each of the chains corresponds to a different ∆_α. The dimerized TB chain is solved in detail in Appendix A. The boundary conditions of Eq. (14) fix the allowed k_{x,α} values for each k_{y,α} via Eq. (17), with the additional condition that the wave-vectors must lie within the ribbon Brillouin Zone, k_{x,α} ∈ [0, π]. For each given value of α, the integer value β univocally defines the value of k_x, so we label k_{x,α,β} = k_αβ. We define a critical ∆^c_α = 1, which corresponds to a critical wave-vector k^c_α = π/3. Then, Eq. (17) has 2M real solutions, so that β = 1, ..., 2M, if ∆_α > ∆^c_α (k_α < π/3) or if the chain length M < M_c = ∆_α / [2(1 − ∆_α)]. However, Eq. (17) has only 2M − 1 real solutions if ∆_α < ∆^c_α (k_α > π/3) and the chain length M > M_c, so that β = 1, ..., 2M − 1 in this case. The missing solution can be found by allowing k_x to become complex, where we have introduced θ_q in analogy to θ_k. The case of k_α = π/2 is special. In that case ∆_α = 0, but E_Ry = 0 for all even values of R_y. Therefore, there is no condition over k_x; instead we obtain M degenerate states of ε_k = −τt. However, we can still use condition (17) to obtain these M bulk states in the range k_x ∈ [0, π/2]. The central panels in Fig. 2 show the resulting grid of real (k_{x,α,β}, k_{y,α}) = (k_αβ, k_α) solutions within the ribbon Brillouin Zone for ribbons of two selected widths. In these central panels, the quantization condition (16) is represented by red horizontal lines, while blue lines represent the quantization condition (17). The curvature of the latter reflects the dependence of the quantized k_x values on k_y. The last blue curve hits k_x = π at k_y > k^c_α, where complex values of k_x arise. Considering only one of the quantization conditions we recover the band structure of armchair (left) or zigzag (right) ribbons. Considering both conditions we obtain a grid of points that represent the actual (k_αβ, k_α) states of the ribbon. The number of edge states of the ribbon is given by the number of allowed k_α ∈ (π/3, π/2), which gives floor((N + 1)/6) for odd values of N. Each putative edge state must also fulfill the extra condition M > M_c. The bulk eigen-state wave-functions and eigen-energies, where τ = ± and we have used the shorthand M' = 4M + 1, together with the edge eigen-state wave-functions and eigen-energies, complete the set of expressions. All these results are valid for both odd and even values of N. The only noticeable difference is that the special case of k_α = π/2 only appears for ribbons with odd N, and that in this case the number of edge states of the ribbon is given by floor((N + 4)/6). Eqs. (16) through (25) give the full solution of the TB finite length nanoribbon and are the second central result of this article.
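A small numerical sketch of these counting rules (allowed k_α from Eq. (16), the ∆_α < ∆^c_α criterion and the minimum length M_c) is given below; the ratio t'/t = 0.95 used in the demo loop is an illustrative choice, not a fitted value of this work.

```python
# Sketch: count the edge-state channels of an N x M ribbon from the rules above.
# t_ratio = t'/t; the 0.95 used below is illustrative only.
import math

def edge_state_channels(N, M, t_ratio=1.0):
    """Return (alpha, k_alpha, M_c) for every transverse mode hosting an edge state."""
    channels = []
    for alpha in range(1, (N + 1) // 2 + 1):
        k_alpha = math.pi * alpha / (N + 1)             # Eq. (16)
        delta = 2.0 * t_ratio * math.cos(k_alpha)       # Delta_alpha
        if 0.0 < delta < 1.0:                           # topological transverse mode
            M_c = delta / (2.0 * (1.0 - delta))         # minimum ribbon length
            if M > M_c:
                channels.append((alpha, k_alpha, M_c))
    return channels

for N in (5, 7, 9, 13):
    ch = edge_state_channels(N, M=30, t_ratio=0.95)
    print(f"N = {N:2d}: {len(ch)} edge-state channel(s) per end,",
          [f"k = {k:.3f}, Mc = {mc:.1f}" for _, k, mc in ch])
```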
C. Number of edge states and topology
We note the well-known fact that SSH chains can be classified according to two topological categories depending on the ratio between their hopping integrals. In the correspondence between the GNR along the X direction and the dimerized chain, this ratio is just ∆ c α . SSH chains with ∆ α > ∆ c α are topologically trivial in the sense that they host only bulk states. SSH chains with ∆ α < ∆ c α are topological, they host topologically protected edge states (beyond a certain length). Appendix A shows in detail the content of the bulk/boundary principle for SSH chains.
We can separate armchair ribbons into 3 groups, corresponding to N = 3p, 3p + 1 or 3p + 2. For long enough ribbons with an odd value of N, N = 3p contains (p − 1)/2 edge states for each edge, while N = 3p + 1 contains p/2. Within this approach N = 3p + 2 infinite ribbons are found to be metallic, as the k_α = π/3 band passes through the Dirac point K. However, DFT results show that these ribbons have a small gap. 15 A modification of our TB model that has different hopping integrals −t and −t' in the longitudinal and transverse directions of the ribbon (see Fig. 3) reproduces this behavior. We redefine ∆_α = 2 (t'/t) cos(k_α), so that the rest of the problem remains the same.
Then, if t' > t, the region of reciprocal space where ∆_α < 1 is reduced and the N = 3p + 2 ribbons can only present (p − 1)/2 edge states. If t' < t, the same region is increased and these ribbons can present p/2 edge states. This is shown in Fig. 3.
Z_2 = 1 (0) is equivalent to a topologically protected odd (even) number of edge states. This is consistent with our results if t' < t. Cao et al also reported Z_2 values for ribbons with open edges, which correspond to the opposite value of Z_2 from that of ribbons of the same width N and closed edges, as those analyzed here. The analytical solution of these new ribbons is very similar to that presented here, but in this case condition (14) must be satisfied for odd values of R_y, and that is not immediately satisfied at k_α = π/2 as it is for the ribbons with closed edges. This leads to an extra couple of edge states in the limit ∆_α = 0, fully localized on the edge atoms and with ε_ατ = 0.
For ribbons with an even value of N , N = 3p contains for these ribbons. This is again consistent with our calculations if t < t, where we obtain p+2 2 edge states for N = 3p + 2 ribbons.
III. HUBBARD DIMER MODEL FOR INTER-EDGE COULOMB INTERACTIONS
Some of the most relevant features of graphene nanoribbons such as their magnetic, electrical or optical properties originate from the strong electron-electron interactions existing among edge states, that go beyond the single-electron picture described above. We drop in this section the bulk states and set up a model of interacting edge electrons for the case where we have a single edge-state solution q α .
A. Left and Right edge states
The above |Ψ α τ edge eigen-states are delocalized over both edges and both A and B sub-lattices as we show in Fig. 4. But we can define alternatively orthogonal zero-energy states that are located at either the left/B or right/A edge/sublattice (but are not eigen-states) as follows: Alternatively, |Ψ α τ can be viewed as the bonding and antibonding states formed by the interaction between the single-edge states |Ψ α L and |Ψ α R via an effective hopping integral t α :
B. Hubbard Dimer model for inter-edge Coulomb interactions
We assume now that electrons in a graphene ribbon obey the Hubbard model to a good approximation. One can then show that two electrons in the same single-edge state q α have opposite spins. We find that their dynamics can be described to a good approximation by the following Hubbard dimer model where U is the local interaction within one atom. We show in Fig. 6 (a) and (b) the dependence of t α /t and U α /U with the ribbon length M for a N = 5 ribbon and different possible values of ∆ α . We find that t α /t decays exponentially to zero with M . In contrast, the Hubbard U α /U parameter decreases strongly for short ribbons, but then levels off and converges to a constant value U 0 α as each edge state adquires its maximum delocalization.
C. Mean field analysis at half-filling
We perform a mean field treatment of the Hamiltonian, where we denote n_iσ = ⟨n̂_iσ⟩. We also denote by m_i = n_i↑ − n_i↓ the magnetic moment in units of µ_B at either the i = L or the i = R ribbon edge. We shall restrict the analysis to the half-filled case so that n_L↑ + n_R↑ + n_L↓ + n_R↓ = 2.
We always find a non-magnetic (NM) solution to the mean-field equations. In addition, an antiferromagnetic (AFM) solution exists if U_α/t_α > 2, which is always more stable than the NM solution whenever it exists. A ferromagnetic (FM) solution also exists if U_α/t_α > 4. The FM solution is less stable than the AFM one, but more stable than the NM solution. Fig. 5 is a graphical summary of these three solutions, where we draw the one-electron eigen-states, and write down the total energies and local magnetic moments. We analyse now whether the three magnetic states can be realized in short-width ribbons that host a single edge state. Although at this point we do not know the exact values of the parameters that define the ribbon, we can make an educated guess that may shed some light on the expected behavior of the ribbons. We consider ribbons of N = 5, 7, and 9, and we estimate t = U. Then, for each ribbon we can calculate M_c, M_AFM and M_FM as a function only of ∆_α (which, for each value of N, only needs t'/t to be defined). We show our results in Fig. 7, where we focus especially on the ∆_α region where t'/t ∈ [0.9, 1.1]. In all cases we find the 4 types of behavior, but both N = 7 and N = 9 ribbons reach M_FM already for ribbons with a few unit cells. More interesting is what happens with N = 5 ribbons. In this case, M_c, M_AFM and M_FM all become much larger, and we can expect to be able to distinguish a quite wide range of integer M values within each regime.
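The mean-field dimer itself is easy to solve numerically. The sketch below iterates the self-consistent occupations for given (t_α, U_α) and reproduces the stability ordering quoted above (for U_α/t_α > 4 all three solutions exist); the parameter values in it are placeholders, not the fitted ones of Table I.

```python
# Hedged sketch (not the authors' code): mean-field Hubbard dimer for the two edge sites.
# Occupations n_{i,sigma} are iterated to self-consistency; the total energy uses the
# standard double-counting correction E = sum(eps_occ) - U * sum_i n_up(i)*n_dn(i).
import numpy as np

def solve(t, U, n_up, n_dn, n_elec_up, n_elec_dn, iters=500, mix=0.5):
    n_up, n_dn = np.array(n_up, float), np.array(n_dn, float)
    for _ in range(iters):
        new = []
        for n_other, n_elec in ((n_dn, n_elec_up), (n_up, n_elec_dn)):
            h = np.array([[U * n_other[0], -t], [-t, U * n_other[1]]])
            _, v = np.linalg.eigh(h)
            new.append((np.abs(v[:, :n_elec]) ** 2).sum(axis=1))  # fill lowest levels
        n_up = mix * new[0] + (1.0 - mix) * n_up
        n_dn = mix * new[1] + (1.0 - mix) * n_dn
    e_band = sum(np.linalg.eigvalsh(np.array([[U * n_o[0], -t], [-t, U * n_o[1]]]))[:ne].sum()
                 for n_o, ne in ((n_dn, n_elec_up), (n_up, n_elec_dn)))
    return e_band - U * (n_up * n_dn).sum(), n_up - n_dn

t_a, U_a = 0.05, 0.30   # placeholder effective parameters in eV (U_a/t_a = 6 > 4)
for label, up0, dn0, ne in (("NM", [0.5, 0.5], [0.5, 0.5], (1, 1)),
                            ("AFM", [0.9, 0.1], [0.1, 0.9], (1, 1)),
                            ("FM", [0.5, 0.5], [0.5, 0.5], (2, 0))):
    E, m = solve(t_a, U_a, up0, dn0, *ne)
    print(f"{label}: E = {E:+.4f} eV, local moments m_L, m_R = {np.round(m, 3)}")
```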
IV. DFT SIMULATIONS OF FINITE-LENGTH GNRS
The goal of this section is two-fold. We want to check in the first place whether our results and predictions above using a simple TB model agree with more realistic DFT simulation. Second, we wish to determine the U , t and t parameters of our model that reproduce the DFT simulations.
We have performed DFT simulations of finite graphene nanoribbons of widths N = 5, 7, and 9 and different lengths from M = 2 to M = 10 or 30, depending on the width. We have used for this task the code SIESTA. 32,33 The choice is based on the fact that the SIESTA code expands wave-functions into a variational basis of atomiclike functions. Therefore the SIESTA Hamiltonian is already written in the TB language. Difficulties arise however because (a) SIESTA's atomic-like functions are not orthogonal to each other; (b) SIESTA's basis includes usually multiple-ζ atomic functions at each atom, that have the same angular symmetry (e.g.: two or three swave-functions, etc.); (c) atomic-like functions have a radius larger than several times the inter-atomic distance, so that hopping integrals exist to several neighbor shells. We shall explain below our procedure to handle these difficulties and achieve an accurate mapping.
A. Simulation details
We have chosen the generalized gradient approximation (GGA) parametrized by Perdew, Burke and Ernzerhof (PBE) 34 for the exchange and correlation potential. The code SIESTA uses the pseudopotential method as implemented by Troullier and Martins, 35 where core electrons are integrated out and valence electrons feels semi-local potentials. We have employed standard pseudopotential parameters for both carbon and hydrogen atoms. We have employed a double ζ polarized (DZP) basis set for the carbon atoms, that includes 2 pseudoatomic orbitals for each 2s and 2p atomic state, and a p-polarized (e.g.: a d) function; we have used a simpler double ζ basis set for H with 2 orbitals for its 1s states. We have used a real-space grid defined by a mesh cut-off of 250 Ry. We have also relaxed all atom positions in the nanoribbons simulated until all forces were smaller than 0.001 eV/Å. We have employed our own MATLAB scripts to post-process the SIESTA Hamiltonian.
B. Tight-Binding model accuracy and parameters
We have searched for NM, AFM and FM DFT selfconsistent solutions for each of the ribbons that we have simulated. We have found that all those ribbons have a NM solution while AFM and FM solutions only exist for ribbons larger than given critical lengths. These facts fully agree with the TB and Hubbard dimer model predictions. We have taken advantage of the fact that DFT is in effect a mean-field method. This means that we can use the Kohn-Sham (KS) eigen-energies to perform estimates and make comparisons with the eigen-energies of both the TB and the Hubbard dimer models, by using the equations in Fig. 5.
First, we note that the eigen-energy of any bulk/edge state must lie inside the band/gap of the corresponding infinite-length ribbon. We can therefore simply look into the NM DFT solutions to establish the critical length M DF T c as the length in which in-gap states nucleate for the first time. Second, we can extract the effective hopping between DFT edge states t DF T α from the NM edge states KS eigen-energies (see the top panel in Fig. 5): Third, we can extract the Hubbard-U interaction between DFT edge states U DF T α from the AFM edge states KS eigen-energies: We can then extract the TB parameters t, t and U by fitting t α in Eq. (27) to t DF T α and U α in Eq. (30) to U DF T α . We show the results of this fitting procedure for t α and U α , for N = 5 ribbons, in the top two panels of Fig. 8. We then write down in Table I the fitted values of t, t and U . We estimate now ∆ α , M c , M AF M and M F M from these fitted parameters, and compare them with the DFT values, that are also shown in Table I. We stress that the two panels and the values of the critical lengths show that both model and DFT simulations agree truly well. The high quality of the mapping can be further tested by looking into more complex magnitudes. We have chosen here the energy differences between different magnetic solutions E N M − E AF M and E F M − E AF M , as well as the magnetic moment of the AFM solution. The bottom panels in Fig. 8 shed more weight on the quality of the mapping. We have chosen N = 5 ribbons for the present discussion because they have the highest potential for experimental testing of our predictions. The results for N = 7 and 9 ribbons is qualitatively similar and therefore relegated to Appendix B. Table I indicates a possible significant trouble for the validity of our results, since the fitted t value of about 4 to 5 eV is much larger than the universally accepted value for bulk graphene of about 2.7 eV. 36 This discrepancy has prompted us to perform a deeper analysis of the DFT Hamiltonian.
C. DFT Hamiltonian downsizing
We devote this section to trim the SIESTA DFT Hamiltonian gradually from the initial full-basis form H f ull down to the simple TB expression given in Eq. (1).
Our first step is to reduce the basis set and leave only the 2p z carbon orbitals. This is equivalent to picking the Hamiltonian box containing only matrix elements among 2p z orbitals. We call the resulting Hamiltonian H DZ because each atom contains two p z orbitals. The drastic reduction of the Hamiltonian is justified by the fact that the lowest-lying valence and conduction bands of graphene have 2p z flavor to a very large extent.
The second step consists of reducing the basis from two 2p_z orbitals per carbon atom to a single one. This is accomplished by making use of the variational principle and integrating out the unwanted high-energy degrees of freedom. The single remaining p_z orbital is defined by the linear combination of the 2 original p_z orbitals that minimizes the energy of the HOMO and LUMO states. We denote the resulting Hamiltonian H_SZ. SIESTA orbitals are non-orthogonal to each other, and so are the orbitals of the single-ζ basis defined in the previous paragraph. We therefore compute the overlap matrix S_SZ and orthogonalize the basis. The resulting Hamiltonian H_SZ,orth is already rather similar to the Hamiltonian in Eq. (1). There remain however three differences: first, H_SZ,orth has non-zero hopping integrals to first, second and third nearest neighbors, which we denote by −t_1, t_2 and −t_3, respectively; second, non-zero on-site energies ε_0 appear; third, both on-site energies and hopping integrals are non-uniform across the ribbon. We define t_1 and t_3 with a negative sign in front of them so that all numbers are real positive. We show in Fig. 9 the spatial distribution of on-site energies and hopping integrals for a N = 5, M = 10 ribbon to gain further insight into their non-uniformities. The figure shows that all values of t_1 fall in the range (2.6, 2.9) eV, in agreement with the accepted values of nearest neighbor hopping integrals in graphene. 36 We find that ε_0 ∼ t_2 ∼ t_3, and that the three are one order of magnitude smaller than t_1. This latter fact has prompted us
to undertake two further trimmings on the Hamiltonian. The first consists of setting all on-site energies to zero, the resulting Hamiltonian being called H 3N . A second trimming consists of picking H 3N and chopping off all t 2 and t 3 hopping integral, whereby the resulting Hamiltonian H 1N indeed conforms to Eq. (1).
We now assess the impact of each of the above Hamiltonian reductions for a N = 5 ribbon. We show first t_α computed from the different Hamiltonians as a function of the ribbon length in Fig. 10. We find that all of them deliver estimates for t_α in close agreement with the full DFT Hamiltonian. The single exception is H_1N, the one Hamiltonian that looks like Eq. (1). We then reach the conclusion that the simplest DFT-based Hamiltonian that reproduces the simulations is H_3N.
D. Parameter mapping
Fig. 9 shows that the hopping integrals t_i are mainly affected by their proximity to the edges, so we should assess whether those changes modify the topological protection and existence of edge states defined by the full Hamiltonian. To do so, we define a new TB Hamiltonian for infinite-length N = 5 ribbons, H_TB, whose hopping integrals are defined graphically in Fig. 11 and written down in Table II. The hopping integrals t_1a, t_1c and t_1e correspond to the TB model t, while t_1b and t_1d correspond to t'. The table displays some apparent paradoxes, because t_1a, t_1c and t_1e are not equal, and furthermore they are not really larger than t_1b and t_1d, which is a requisite for the appearance of an edge state for the N = 5 ribbon within the TB model.
E. Z2 invariant
We have computed the Z 2 invariant using H T B , and have found that Z 2 = 1 as expected, hence confirming the presence of topologically protected edge states. We modify now each of the different hopping integrals in the model at a time to identify which of them affect most the Z 2 value. Our results, shown in Fig. 11 (b), demonstrate that changes in any t 2 , t 3 , or t 1a , t 1b do not modify Z 2 , while small variations in t 1c , t 1d or t 1e do, and kill the edge states.
For a moment, let us focus on a first-neighbor Hamiltonian only. In this case, our model indicates that for N = 5 the relevant wave-vector in reciprocal space for finding edge states is k_α = π/3, and the coefficients E_Ry(k_y) defined in equation (13) vanish in the central row of the ribbon, R_y = 3. This condition, fixed by the boundary conditions in the Y direction, is maintained even if we change the different values of t_1N, as long as the axial symmetry around the axis defined by the central row of C atoms is conserved. Then, any interaction with the central C atoms of the ribbon, that is, t^a_1N and t^b_1N, becomes irrelevant for the properties of the edge states; and the edge states of the ribbon are exactly those of an SSH-like chain with t and t' hopping integrals, formed by the 2 upper or 2 lower C chains of the ribbon structure (as is clear from the value of f(k) = t + t' e^{ik_x}).
We show in Fig. 11 (c) the value of Z_2 of the ribbon as we modify t_1N^c, t_1N^d and t_1N^e in pairs, considering only the t_1N interactions (upper panels) or all the interactions shown in (a) (lower panels). With only the t_1N interactions, we can make a correspondence between our simplified t, t′ parameters and the t_1N values of the real ribbon; similar relations are expected for ribbons of other widths. Then, we obtain that the transition between Z_2 = 0 and Z_2 = 1 occurs exactly at t = t′, as expected in our model. Cao et al. 20 indicate that a distortion at the edges leading to a stronger hopping between the edge atoms (t_1N^e in our calculations) is enough to open a gap in the band structure of these ribbons and obtain Z_2 = 1 for N = 5. This agrees with our results, where the strongest value of t_1N is indeed t_1N^e, which is crucial to fulfill the t < t′ condition. However, we go beyond this edge-distorted model, as we consider the effect of changing any of the t_1N parameters.
With the values of Table II, Z_2 = 1 but ∆_y = 0.997 and M_c ≈ 187, which explains why no edge states are shown in Fig. 10 for H_1N. Including t_2N and t_3N interactions changes the results, increasing the region where Z_2 = 1. Although several factors affect this change, the most important is the inclusion of t_3N^b and t_3N^d, which modify the SSH-like chain formed by the 2 C chains. If we include these hoppings, f(k) in the SSH-like chain becomes:

f(k) = t + (t′ − t_3N) e^{ik_x}

In this case, when k_x → π, θ_k → 0 if t′ − t_3N < t, and the condition to obtain edge states becomes less restrictive, in agreement with what is shown in Fig. 11 (c). We can make a rough estimation of the equivalent ∆_α = (t′ − t_3N)/t = 0.927, in good agreement with the result of ∆_α = 0.933 obtained from our fitting of t and t′. Therefore, we can assume that the obtained value of ∆_y in our fitted TB model, which is mainly responsible for the behavior of the edge states in the ribbon, is correct, but it is obtained at the cost of getting unrealistic values of t and t′ that absorb the effects of interactions between other neighbors and of the differences in the hopping integrals as we move closer to the edges.
V. CONCLUSIONS
We have presented a full analytical solution of the TB model of finite-length AGNRs, which we have also called rectangulenes. We have indeed shown that the above problem can be separated as the product of a one-dimensional finite-length mono-atomic chain times a one-dimensional finite-length dimerized chain. We have written down the explicit expressions for the quantum numbers, the eigen-functions and the eigen-energies. We have found that finite-length armchair ribbons witness a cascade of magnetic transitions as a function of the ribbon length. We have found ample room for experimental testing of the prediction in N = 5 AGNRs.
We have also performed DFT simulations of N = 5, 7 and 9 ribbons where the above TB-based estimates are confirmed. We have then performed a mapping between the TB and the DFT Hamiltonian to check the robustness of the predictions and determine the model parameters.
VI. ACKNOWLEDGMENTS
The research carried out in this article was funded by project PGC2018-094783 (MCIU/AEI/FEDER, EU) and by Asturias FICYT under grant AYUD/2021/51185 with the support of FEDER funds. G. R. received a GEFES scholarship.
Appendix A: Open boundary conditions in TB chains
In this appendix we show the analytical solution of the TB Hamiltonian of a monoatomic chain (Fig. 12 (a)) and of a dimerized chain (Fig. 12 (b)), also known as the SSH model, 27 with open boundary conditions.
The solution of the monoatomic chain is quite straightforward. We consider a chain of n sites (where we use lower case letters to avoid confusion with the definition of the graphene ribbon structure in the main text), with all on-site energies shifted to zero and a first-neighbor interaction of value −t. In the basis of the orbitals located on each cell l, labeled |l⟩, any wave-function can be described from a set of coefficients C_l as:

|Ψ⟩ = Σ_l C_l |l⟩ (A1)

In particular, a Bloch wave-function of the system, |u_k⟩, can be written as:

|u_k⟩ = Σ_l e^{ikl} |l⟩ (A2)

whose dispersion relation, ε_k = −2t cos(k), leads to a degeneracy ε_k = ε_{−k}. Therefore, we write the following trial wave-function, of energy ε_k,

|Ψ_k⟩ ∝ |u_k⟩ − |u_{−k}⟩, i.e., C_l = A sin(kl) (A3)

to try to fulfill the open boundary conditions, consisting in:

C_0 = C_{n+1} = 0 (A4)

We obtain the following solution:

k = jπ/(n + 1), j = 1, ..., n; C_l = A sin(kl) (A5)

where A is just a normalization constant. We define the dimerized chain (Fig. 12 (b)) as follows. Each unit cell l contains 2 orbitals a and b, so we write |l, a⟩, |l, b⟩ to identify our basis. Those can be gathered in a single vector for each cell as:

|l⟩ = (|l, a⟩, |l, b⟩) (A6)

All on-site energies are shifted to zero, and each orbital of type a (b) interacts only with its neighbors of type b (a), with an interaction labeled −t_i or −t_o depending on whether it occurs within the same unit cell or between neighboring cells. In this basis, any wave-function can be written as:

|Ψ⟩ = Σ_l (C_l^a |l, a⟩ + C_l^b |l, b⟩) (A7)

while Bloch wave-functions |u_k⟩ verify:

|u_k⟩ = Σ_l e^{ikl} (c_k^a |l, a⟩ + c_k^b |l, b⟩) (A8)

where the coefficients c_k^a and c_k^b have to be obtained from the diagonalization of a 2 × 2 effective Hamiltonian:

H(k) = −[[0, f*(k)], [f(k), 0]] (A9)

where we defined ∆ = t_o/t_i, f(k) = t_i + t_o e^{ik} = |f(k)| e^{iθ_k}, and θ_k as the polar angle of the complex number f(k). The Bloch wave-functions are then described by:

c_k^a = 1; c_k^b = −τ e^{iθ_k}, (τ = ±) (A10)

with energy:

ε_k = τ |f(k)| = τ t_i √(1 + ∆² + 2∆ cos(k)) (A11)

We now focus on the open boundary conditions for an SSH chain of 2m atoms. As for the monoatomic chain, ε_k = ε_{−k}, and therefore we use the same linear combination of Bloch wave-functions of equation (A3) as trial wave-functions. Two different cases can be considered (Fig. 12 (b)). If the chain contains only complete unit cells, we call it a closed-cell SSH chain. If the cells at the edges contain only one atom belonging to the chain, we call it an open-cell SSH chain. It is clear that we can transform one system into the other by exchanging the labels t_i and t_o. Therefore, we solve explicitly the closed-cell case, and at the end we perform the needed transformations to obtain the solution of the open-cell case, which is the one relevant in the context of graphene ribbons.
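The open-chain solution above can be verified numerically in a few lines; the sketch below (with an arbitrary chain length and t = 1) checks that the spectrum of the open monoatomic chain matches ε_k = −2t cos(k) with k = jπ/(n + 1):

```python
# Numerical check of equation (A5) for an open monoatomic chain.
import numpy as np

t, n = 1.0, 8
H = -t * (np.eye(n, k=1) + np.eye(n, k=-1))     # open boundary conditions
eps = np.sort(np.linalg.eigvalsh(H))
k = np.arange(1, n + 1) * np.pi / (n + 1)
assert np.allclose(eps, np.sort(-2 * t * np.cos(k)))
```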
The open boundary conditions at one edge define the general shape of the wave-function:

c_l^b = A sin(kl) (A12)

Imposing the boundary condition at the opposite edge then leads to the quantization condition:

g(k) ≡ k(m + 1) − θ_k = pπ, with p an integer (A14)

This relation allows us to rewrite the coefficients c_l^a as:

c_l^a = A(−1)^{p+1} sin(k(m + 1 − l)) (A15)

Equation (A14) must be solved numerically, under the restriction that k ∈ (0, π), as both k = 0 and k = π lead to c_l^a = c_l^b = 0 for any l. All these real values of k lead to states delocalized over the whole chain, that is, bulk states. However, unlike what happens for an infinite chain or for a chain with periodic boundary conditions, in the finite chain the loss of translational symmetry opens the door to the existence of states located close to the limits of the chain, that is, edge states. These states can also be described with a wave-vector k, but with an imaginary part. Our objective now is to determine whether these states exist in the chain or not.
The problem can be faced from the perspective of topology. The bulk-boundary correspondence establishes that we can define a topological invariant from the bulk wave-functions, whose value determines the existence or not of edge states at the boundaries. 28 This correspondence assumes a closed-cell structure at the edges. In the case of a one-dimensional system that can be described with a 2 × 2 Hamiltonian H(k) in terms of the Pauli matrices σ_x and σ_y from a two-dimensional vector d(k) = (d_x(k), d_y(k)) as:

H(k) = d_x(k) σ_x + d_y(k) σ_y (A16)

the relevant topological invariant is the winding number ν. ν is just the number of loops that d performs around the origin when k goes through the first Brillouin zone. Topology states that if ν = 0, all k values are real and no edge states appear, while if ν = 1 there is a k with an imaginary part that leads to a couple of edge states. Notice that, besides a global sign, d is just f(k) represented in the XY plane instead of the complex plane. Therefore, we can analyze ν by analyzing the evolution of θ_k as k goes from −π to π. Fig. 13 (a) shows the evolution of θ_k through the first Brillouin zone, as well as the evolution of f(k) in the polar plane. It is clear that ν = 1 (ν = 0) if ∆ > 1 (∆ < 1). Our chain of m cells must contain m values of k, whether real or complex. Looking into equation (A14), if we had θ_k = 0 for all values of k, g(k) would be a straight line and the valid values of k would be just those of a monoatomic chain of n = m atoms, as shown in equation (A5). As θ_k is a continuous function of k, the values of k deviate from those of the monoatomic chain, but we know that each time g(k) crosses an integer multiple of π in the range k ∈ (0, π), a new real solution of k arises. If ∆ < 1, θ_k(0) = θ_k(π) = 0, the values of g(0) and g(π) do not change from those of the monoatomic chain, and therefore the existence of m real values of k is guaranteed by the continuity of g(k). If ∆ > 1, however, θ_k(π) = π, and g(π) decreases by a π-step from the monoatomic case. Therefore, the continuity of g(k) only guarantees the existence of m − 1 real values of k. This is exactly the result obtained from ν. In other words, the winding number is just a measurement of the change of θ_k through the first Brillouin zone that reduces the number of bulk states that can be guaranteed by continuity. However, this is not the whole story, as the continuity of g(k) only fixes a lower bound on the number of bulk states, but it cannot guarantee the existence of edge states. Looking at the behavior of θ_k as a function of k for ∆ > 1 (Fig. 13 (a)), θ_k is a monotonic function of k that increases slowly at first, but rapidly as k approaches π. Then, g(k) can become a decreasing function around k = π. In this case, an extra real value of k appears and the system has no edge states, even though ν = 1. This condition translates to:

m + 1 < dθ_k/dk |_{k=π} = ∆/(∆ − 1) (A17)

If the length of the chain m is below the corresponding threshold m_c, we still have m bulk values of k. Alternatively, for a fixed value of m, if ∆ is below a critical value ∆_c = (m + 1)/m, we also have m bulk states. If this is not the case, we must find a complex value of k. We show an example of the different possible behaviors of g(k) in Fig. 13 (b).
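As a minimal numerical companion to the winding-number analysis above, assuming nothing beyond f(k) = t_i + t_o e^{ik}, ν can be evaluated as the accumulated change of θ_k across the first Brillouin zone:

```python
# nu = (1/2pi) * total change of theta_k over the first Brillouin zone.
import numpy as np

def winding_number(t_i, t_o, nk=4001):
    k = np.linspace(-np.pi, np.pi, nk)
    theta = np.unwrap(np.angle(t_i + t_o * np.exp(1j * k)))
    return int(round((theta[-1] - theta[0]) / (2 * np.pi)))

assert winding_number(1.0, 0.8) == 0   # Delta < 1: trivial
assert winding_number(1.0, 1.2) == 1   # Delta > 1: edge states expected
```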
We search for complex values of k by analytical continuation of k at the limits of its validity range, k = 0 − iq or k = π − iq. It can be demonstrated that only the second case leads to a valid solution. The Hamiltonian of equation (A9) then becomes:

H(q) = −t_i [[0, 1 − ∆e^{−q}], [1 − ∆e^{q}, 0]] (A18)

whose off-diagonal terms have the geometric mean f̃(q) = t_i √((∆e^q − 1)(∆e^{−q} − 1)) (which is positive as ∆ > 1), and θ_q = (1/2) log[(1 − ∆e^q)/(1 − ∆e^{−q})] is introduced to mimic θ_k in equation (A9). We require q ∈ (−q_lim, q_lim) to guarantee that θ_q is real, with q_lim = |log(∆)|. The solutions of the Hamiltonian are then:

c_q^a = 1; c_q^b = τ e^{θ_q}, (τ = ±) (A19)

with energy:

ε_q = τ f̃(q) = τ t_i √(1 + ∆² − 2∆ cosh(q)) (A20)

Once again, we have to apply the open boundary conditions, with the first one defining the general shape of the wave-function:

c_l^b = A(−1)^{l+1} sinh(ql) (A21)

which follows from equation (A12) under the continuation k = π − iq. The second boundary condition yields the quantization condition:

g̃(q) ≡ q(m + 1) − θ_q = 0 (A23)

and, in analogy with equation (A15), the coefficients c_l^a read:

c_l^a = τ A(−1)^l sinh(q(m + 1 − l)) (A24)

Condition (A23) is always satisfied for q = 0, but this leads to the invalid, real solution k = π. Other possible values of q must be obtained numerically, but we can determine whether these solutions exist by analyzing the behavior of the function g̃(q) (see Fig. 13 (c)). For ∆ < 1 we only find g̃(0) = 0. For ∆ > 1 this function is continuous inside the defined range of q, odd, and g̃(q → ±q_lim) = ∓∞. Then, there are two other solutions of g̃(q) = 0, of value ±q̄, if:

m + 1 > ∆/(∆ − 1) (A25)

Notice that the solutions of value ±q̄ lead to the same coefficients of the wave-function in equations (A21) and (A24), up to a sign. Therefore, it is enough to consider the solution with q > 0. The results of equations (A17) and (A25) are consistent. For a given chain defined by ∆ and m, if ∆ < 1, or ∆ > 1 but m < m_c (equivalent to ∆ < ∆_c), the chain presents m real values of k leading to 2m bulk solutions. If ∆ > 1 and m > m_c (equivalent to ∆ > ∆_c), the chain contains m − 1 real values of k defining 2m − 2 bulk states, but also a complex value of k = π − iq, leading to 2 localized edge states.
The value of q indicates the degree of localization of the edge states, as q^{−1} is a measure of the penetration depth of the state in units of d. The exact value of q for given values of ∆ and m must be obtained by numerically solving equation (A23), or either of the following equivalent equations:

tanh(qm) = sinh(q)/(∆ − cosh(q)) (A26)

∆ sinh(qm) = sinh(q(m + 1)) (A27)

An approximate closed-form value of q can be obtained when it is close to q_lim; alternatively, an iterative solution starting at q_0 = q_lim converges quickly to the exact value of q. Fig. 14 shows the evolution of q/q_lim with m for several values of ∆. Notice that in all cases q evolves asymptotically to q_lim, reaching q_lim faster the larger the value of ∆. The value of q_lim decreases as ∆ decreases, leading to more delocalized edge states for ∆ closer to one.
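As a hedged numerical companion, equation (A27) can be solved for q with a standard bracketing root finder. The parameters below are arbitrary, and the bracket assumes ∆ > ∆_c = (m + 1)/m, so that a non-trivial root exists in (0, q_lim):

```python
# Solve Delta*sinh(q*m) = sinh(q*(m+1)) for the edge-state decay rate q.
import numpy as np
from scipy.optimize import brentq

def edge_q(delta, m):
    g = lambda q: delta * np.sinh(q * m) - np.sinh(q * (m + 1))
    q_lim = np.log(delta)
    # g > 0 just above q = 0 when Delta > (m+1)/m, and g < 0 near q_lim
    return brentq(g, 1e-9, q_lim * (1.0 - 1e-12))

for m in (10, 20, 40):
    print(m, edge_q(1.2, m) / np.log(1.2))   # ratio q/q_lim -> 1 as m grows
```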
The edge states given by equations (A21) and (A24), which we can label |Ψ_τ^e⟩, are non-zero-energy eigen-states distributed over both edges and both sublattices. We can define zero-energy states, which are not eigen-states but are located only over the left (|Ψ_L^e⟩) or right (|Ψ_R^e⟩) edge, by:

|Ψ_L^e⟩ = (|Ψ_+^e⟩ − |Ψ_−^e⟩)/√2;  |Ψ_R^e⟩ = (|Ψ_+^e⟩ + |Ψ_−^e⟩)/√2

These states are not only localized over different edges, but also over different sublattices of the chain. We can then see the eigen-states |Ψ_τ^e⟩ as the result of the interaction of two zero-energy states, located at different edges, interacting via an effective hopping integral of value f̃(q).
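The construction can be illustrated by exact diagonalization of a short closed-cell SSH chain (parameter values here are arbitrary): forming the symmetric and antisymmetric combinations of the two mid-gap eigen-states yields states each living on a single edge and a single sublattice. Which combination lands on which edge depends on the arbitrary signs returned by the diagonalizer.

```python
# Closed-cell SSH chain: intracell hopping -t_i, intercell hopping -t_o.
import numpy as np

t_i, t_o, m = 1.0, 1.5, 12          # Delta = 1.5 > Delta_c, so edge states exist
n = 2 * m
H = np.zeros((n, n))
for s in range(n - 1):
    H[s, s + 1] = H[s + 1, s] = -t_i if s % 2 == 0 else -t_o
eps, V = np.linalg.eigh(H)
plus, minus = V[:, m], V[:, m - 1]  # the two eigen-states closest to E = 0
combo1 = (plus + minus) / np.sqrt(2)
combo2 = (plus - minus) / np.sqrt(2)
print(np.round(np.abs(combo1[:4]), 3), np.round(np.abs(combo1[-4:]), 3))
print(np.round(np.abs(combo2[:4]), 3), np.round(np.abs(combo2[-4:]), 3))
```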
Finally, we look at the open-cell case. We can solve the SSH chain again, now with the open boundary conditions adapted to edge cells that contain a single atom. However, we can also obtain this new solution by making the following transformations to the closed-cell solution. First, we exchange the roles of t_i and t_o, which changes ∆ into ∆^{−1}. This leads, for example, to the following changes in f(k) and θ_k:

f(k) = t_o + t_i e^{ik} = t_o (1 + ∆^{−1} e^{ik}) = |f(k)| e^{iθ_k} (A33) | 2023-03-02T02:15:42.632Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "3114a2a938f235f2be6fee44ac01d5860c285a89",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3114a2a938f235f2be6fee44ac01d5860c285a89",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
232259506 | pes2o/s2orc | v3-fos-license | Properties of Banana (Cavendish spp.) Starch Film Incorporated with Banana Peel Extract and Its Application
The objective of this study was to develop an active banana starch film (BSF) incorporated with banana peel extract. We compared the film's properties with commercial wrap film (polyvinyl chloride; PVC). Moreover, a comparison of the quality of minced pork wrapped during refrigerated storage (7 days at 4 ± 1 °C) was also performed. The BSF with different concentrations of banana peel extract (0, 1, 3, and 5 (%, w/v)) showed low mechanical properties (tensile strength (TS): 4.43–31.20 MPa and elongation at break (EAB): 9.66–15.63%) and water vapor permeability (3.74–11.0 × 10−10 g mm/sm2 Pa). The BSF showed low film solubility (26–41%), but excellent barrier properties to UV light. The BSF had a thickness range of 0.030–0.047 mm, and its color attributes were: L* = 49.6–51.1, a* = 0.21–0.43, b* = 1.26–1.49. The BSF incorporated with 5 (%, w/v) banana peel extract showed the highest radical scavenging activity (97.9%) and inhibitory activity against E. coli O157: H7. The BSF showed some properties comparable to the commercial PVC wrap film. Changes in the quality of minced pork were determined for 7 days during storage at 4 ± 1 °C. It was found that the thiobarbituric acid reactive substances (TBARS) of the sample wrapped with the BSF were lower than those of the sample wrapped with the PVC. The successful inhibition of lipid oxidation in the minced pork was possible with the BSF. The BSF incorporated with banana peel extract could maintain the quality of minced pork in terms of oxidation retardation.
Introduction
Food packaging functions to reduce the rate of gas transfer between food and the environment, and the control of oxygen and water vapor permeability allows the extension of the shelf life of foods [1]. The use of packaging from synthetic plastics causes serious environmental problems, giving rise to a demand for packaging alternatives from biodegradable materials. The disposal of synthetic packaging is difficult because it is non-degradable and non-recyclable, thus taking a long time to break down. Therefore, the use of biodegradable packaging materials can solve this problem and provide environmentally friendly alternatives, which can be made from natural polymers such as proteins, lipids, and polysaccharides. Among polysaccharides, starch has received special attention; it is abundant, cheap, biodegradable, edible, and renewable [2][3][4]. Starch is an agricultural biopolymer found in a variety of plants including wheat, corn, rice, beans, potatoes, etc. However, a literature search via Scopus covering the last sixteen years (2006–2021) found that only five research papers on films made of banana starch had been published [2,[5][6][7][8]]; all of these mostly focused on characterization, and the banana starch used was not from green Cavendish spp. No reports were found on the effectiveness and applicability of banana starch film for perishable food products.
Bananas are one of the major fruit crops in Thailand. It is a commonly consumed fruit and, worldwide, banana production reached a record of 114 million tonnes in 2017, up from around 67 million tonnes in 2000 (Food and Agriculture Organization: FAO, 2021). In Chiang Rai, about 4 tons per day of harvested green banana (Cavendish spp.) is wasted, and all rejected green bananas are normally disposed of improperly. Therefore, preparing edible films from banana starch is an alternative way of putting this raw material to beneficial use. The entire banana fruit is rich in bioactive compounds, such as phenolic constituents, carotenoids, vitamins, and dietary fiber [9,10]. Unripe banana naturally consists of more than 70% starch, with the remainder being protein, lipid, and fiber [11,12]. The polysaccharides in banana powder give extra hydrogen bonding contacts between polymer chains, which is responsible for the film-forming capacity [13]. In addition, banana starch can be used as a base material for preparing biodegradable films with good gas barrier properties as well as nontoxic properties [8,14].
The incorporation of natural active agents into edible film can be used instead of chemical agents, which can improve the functional properties of the film as well as maintain the quality of food. Banana peel has stronger antioxidant activity, greater phenolic compounds, and a higher mineral content than banana pulp. Additionally, banana peels have been shown to possess powerful antimicrobial activity against bacteria, fungi, and yeast [8,[15][16][17]]. Bioactive compounds such as flavonoids, tannins, phlobatannins, alkaloids, glycosides, and terpenoids are present in banana peel [15]. Despite these reports, neither the incorporation of banana peel extract into banana starch-based film nor the application of the developed film in a food system has been described.
Meat is one of the most perishable foods because of its nutritional composition. The spoilage of minced pork or other meat products is mainly caused by chemical and microbiological deterioration. The spoilage of fresh pork by microorganisms occurs mostly due to improper handling before and after slaughter [18,19]. Lipid oxidation, protein degradation, and the loss of other valuable molecules are the consequences of the meat spoilage process. The objective of the present study was to develop and characterize unripe banana starch film incorporated with banana peel extract and to apply the developed film to minced pork.

The thickness values of the films are shown in Table 1. Significant differences in thickness were observed between the BSF and PVC film (p < 0.05). The BSF was four times thicker than the PVC film. PVC film had a thickness value of 0.010 mm, while the BSF had a thickness value range of 0.030-0.047 mm. Film thickness tended to increase with the concentration of banana peel extract, although the effect was not statistically significant (p > 0.05). According to Gutiérrez et al. [20], a greater interaction between the starch and plasticizer could result in thicker films, probably due to the formation of hydrogen bonds between the glycerol and starch. Pérez et al. [21] reported a significant increase in the thickness of cross-linked starch-based films derived from Dioscorea trifida. Cross-linking apparently strengthens internal bonds in starch, which increases molar volume [22]. This factor, together with a greater interaction between the starch and plasticizer during gelatinization, could result in thicker films. Film thickness generally affects the properties of films such as mechanical properties (tensile strength (TS) and elongation at break (EAB)), water vapor permeability, light transmission as well as film transparency.
Mechanical Properties
For packaging films, good mechanical properties such as TS and EAB are required for the films to resist external stress and maintain their integrity, as well as to act as barriers during the packaging process. The mechanical properties of the BSF incorporated with banana peel extract at different concentrations, in comparison with PVC film, are expressed in terms of TS and EAB, as shown in Table 1. The BSF showed a decreased TS (4.43-31.20 MPa) and EAB (9.66-15.63%) when the banana peel extract was added, but no significant difference was observed in EAB (p > 0.05). The addition of glycerol to the film forming solution (FFS) significantly affected the TS of the BSF. In comparison, the PVC film showed a greater TS value (44.10 MPa) than the BSF. For the EAB, the BSF also had a lower value than the PVC. According to Pelissari et al. [6], this behavior could be explained by the differences in the amylose content of the films, since it is known that films rich in amylose exhibit good mechanical strength but little flexibility. High tensile strength is generally required, but deformation values must be adjusted according to the intended application of the films.
Thermal Properties
The melting temperature (Tm) and enthalpy (∆H) of the BSF incorporated with different concentrations of banana peel extract are shown in Table 1. The Tm and ∆H of the BSF incorporated with banana peel extract were in the ranges of 89.83-98.50 °C and 192.30-273.90 J/g, respectively. This endothermic peak has been associated with the disruption of inter-chain interactions. According to Kaewprachu et al. [23], the Tm of a film indicates the temperature that causes disruption of the polymer interactions formed during film preparation. Changes in Tm and ∆H values were observed between the control film (0%) and the banana starch films incorporated with banana peel extract, regardless of the level of banana peel extract concentration used (p < 0.05). Additionally, lower enthalpy values (∆H) associated with the glass transition have been related to the weakening of the inter- and intra-molecular interactions between starch-starch chains [24]. The thermal properties of banana starch film were markedly affected by the concentration of banana peel extract used.
Film Appearance and Color
The appearances of the BSF incorporated with banana peel extract at different concentrations, in comparison with PVC film, are shown in Figure 1. The lightness (L*), redness/greenness (a*) and yellowness/blueness (b*) values were significantly different between the BSF and PVC film (p < 0.05). The results showed that the L* and b* values of the BSF slightly increased when the banana peel extract content increased from 0 to 5 (%, w/v), although the difference in b* value was not significant (p > 0.05). The a* value of the BSF slightly decreased because of the brown color of the peel extract. Given the composition of banana peel, which contains sugars and amino acids, powder preparation using oven drying may trigger browning reactions, resulting in a dark brown color. When compared with PVC film, the BSF containing banana peel extract had slightly lower lightness, while redness and yellowness showed higher values than PVC film. These results are similar to those reported by Gutiérrez et al. [20], which confirmed that films with higher amylose content are more opaque. According to these findings, it can be concluded that the addition of banana peel extract in the BSF might limit the use of the film to packaging applications such as pouch bags for oil.

The moisture content of the BSF was affected by the incorporation of banana peel extract (p < 0.05) (Table 2). The moisture content of the experimental films gradually increased with increasing concentrations of banana peel extract. According to Verbeek and Bier [25], moisture content directly affects the banana starch films' mechanical properties because of their characteristic hydrophilicity. Typically, films that contain larger amounts of water have high flexibility and low strength compared with others. Moreover, films with high moisture content can promote microbial growth on the surface of packaged food when used in applications [19,23].

According to visual observation, the BSF maintained its integrity after a 24 h dip in water. The solubility of the BSF was in the range of 26-41%, while the PVC film was not soluble in water (Table 2). Significant differences in film solubility were observed between the films incorporated with banana peel extract and the control film (p < 0.05). The film solubility of the BSF increased with increasing concentrations of banana peel extract (p < 0.05). The highest amount of banana peel extract addition showed the highest film solubility (40.8%), while the lowest film solubility was found in the control film (26.3%). Thus, the soluble substances could escape from the film during immersion in distilled water. Banana peel extract could be released into distilled water due to its hydrophilic nature. In general, high film solubility may indicate poor water resistance [4]. On the other hand, a higher film solubility opens the possibility of developing active food packaging that dissolves easily and releases the active agents contained in the film [17].
The water vapor permeability (WVP) of the BSF incorporated with banana peel extract at different concentrations, in comparison with PVC film, is shown in Table 2. The WVP of the BSF was in the range of 3.74-11.0 × 10−10 g mm/sm2 Pa, while the WVP of the PVC film was 1.15 × 10−10 g mm/sm2 Pa. The WVP of the BSF slightly increased with increasing banana peel extract (p < 0.05). The higher moisture content of the films incorporating banana peel extract also led to higher WVP values; accordingly, the BSF had a higher WVP than the PVC film. Ma et al. [26] reported related results, showing that nanoparticles, such as those in starch modified with citric acid, reduced the film WVP. The WVP of the BSF could also be related to film thickness (Table 1). The relatively high WVP of the BSF may not be suitable for all food packaging applications; however, the required characteristics of a film depend on the specific requirements for food preservation.
Antioxidant Properties of the Films
The antioxidant activity of the BSF incorporated with banana peel extract at different concentrations is expressed in terms of DPPH radical scavenging activity, and the results are shown in Table 2. The BSF incorporated with 5 (%, w/v) banana peel extract showed the highest radical scavenging activity (97.9%). The addition of banana peel extract increased the radical scavenging activity of the film. The incorporation of 1 (%, w/v) banana peel extract into the film did not show any significant difference in radical scavenging activity when compared with the control film (p > 0.05). This was mainly due to the effective phenolic compounds in banana powder and banana peel extract [8,27]. These results are similar to those reported by Ramirez-Hernandez et al. [5], who showed DPPH radical scavenging activity results for active films: the increase in the added amount of rosemary extract in the formulations led to edible films with higher polyphenol content. Moreover, the DPPH radical scavenging activities of the films increased progressively as the polyphenol content in the samples increased. Therefore, the behavior of the BSF could be useful for applications where a longer release time of the antioxidants is required.
Light Transmission and Transparency
The transmission of UV (200-280 nm) and visible light (350-800 nm) and the transparency of all the films are shown in Table 3. The light transmission in the UV range was 18.9-84.8%, while the transmission in the visible range was 31.7-91.9%. The transmission of UV and visible light decreased with increased concentrations of banana peel extract. This result suggests that banana peel extract treated film could prevent UV transmission and therefore could reduce food deterioration, especially the lipid oxidation that is induced by UV light. Lower visible light transmission was observed as the banana peel extract content increased. According to these results, the light transmission of the developed film depended on the amount of banana peel extract in the film.
The transparency value of the resulting films was in the range of 2.85-3.23, while the transparency value of PVC film was 3.95 (Table 3). No significant differences in transparency were observed among films containing 3-5 (%, w/v) banana peel extract (p > 0.05). The transparency value of the developed film decreased when increasing the banana peel extract concentration. According to Pelissari et al. [6], the opacity of films can vary depending on the content of amylose, whose molecules of linear nature tend to favor tight hydrogen bonds between the hydroxyl groups of adjacent chains. Therefore, the incorporation of banana peel extract into the BSF had an effect on the optical properties of the resulting film.
Antimicrobial Activity of the Films
The antimicrobial activity of the BSF incorporated with banana peel extract at different concentrations is shown in Figure 2. The control film and the BSF at a concentration of 1 (%, w/v) banana peel extract did not show any inhibitory activity against the Gram-negative (E. coli O157: H7) and Gram-positive (S. aureus TISTR 1466) foodborne pathogenic bacteria. On the other hand, the BSF at concentrations of 3 and 5 (%, w/v) banana peel extract presented inhibition activity only against E. coli (O157: H7), with inhibition zone areas of 0.1 and 0.2 mm, respectively. As expected, the antimicrobial activity of the BSF was mainly attributed to the phenolic compounds of banana peel extract. In the present study, the inhibitory zone presented much lower values due to the lower banana peel extract concentration that was applied to the films.
Color Changes of Minced Pork during Storage
Color is one of the most important attributes of meat quality. The L*, a*, b*, ∆E and the visual quality of minced pork wrapped with the BSF and PVC films during refrigeration (4 ± 1 °C) for 7 days are shown in Table 4. All minced pork showed an increase in L* value as storage was extended (p < 0.05). However, the samples wrapped with the BSF showed significantly lower L* than those wrapped with the PVC film during refrigeration (p < 0.05), while the a* value of the samples wrapped with both films decreased significantly during storage (p < 0.05). This indicated that the sample became less red in color. The redness of minced pork depends on the myoglobin content in the meat. The increase in b* value was probably related to an increase in the thiobarbituric acid reactive substances (TBARS). Glycation occurs in vivo through the covalent binding of aldehyde or ketone groups of reducing sugars to free amino groups of proteins, forming advanced glycation end product structures that may have a yellowish-brown color or fluorescence [28]. The minced pork showed high ∆E values with both films, whereas the PVC film retained the external color of minced pork better during refrigerated storage. Hence, myoglobin oxidation was postulated to occur, which explains the reduction in redness. The antioxidant properties of the BSF could counteract this: the sustained release of banana peel extract from the film to the flesh surface resulted in retarded oxidation. In addition, the BSF wrapped sample could be protected against the loss of redness in the minced pork.
Weight Loss, TBARS and pH Changes of Minced Pork during Storage
The weight loss of minced pork wrapped with the BSF (5% incorporation) was less than that wrapped with PVC film (Figure 3). These values increased significantly as the storage time increased (p < 0.05). The weight loss of the samples wrapped with the BSF was 0.78%, while that of the sample wrapped with PVC was 1.36% after 7 days of storage. The higher weight loss of the sample wrapped with PVC may be caused by its mechanism of preventing external moisture from entering the packed meat, while the minced pork wrapped with the BSF lost less weight during storage because, owing to its hydrophilic nature, the banana starch absorbed moisture from the atmosphere. Therefore, the BSF could hold moisture inside the minced pork. The TBARS of minced pork wrapped with the BSF incorporating 5 (%, w/v) banana peel extract and with PVC film increased significantly during refrigerated storage (p < 0.05) (Figure 3). The minced pork wrapped with the BSF showed lower TBARS than the minced pork wrapped with PVC film (p < 0.05). This suggests that lipid oxidation in minced pork could be delayed when the BSF was applied. However, the BSF could not completely inhibit the oxidation reaction, probably due to the low migration rate of the active compounds (banana peel extract) incorporated in the BSF. Additionally, the oxidation retardation is probably because UV light was inhibited from penetrating the wrapping film into the samples. Benjakul et al. [29] reported that lipid oxidation in meat can be initiated and increased by photosensitized oxidation, autoxidation, lipoxygenase, and peroxidase. In lipid oxidation, free radicals abstract hydrogen, and the resulting fatty acid radicals react with oxygen to form hydroperoxides. A continuous increase in TBARS values was observed in minced pork wrapped with the developed film during 7 days of storage (p < 0.05). Nevertheless, the BSF incorporated with banana peel extract provided enhanced antioxidant properties, resulting in a lower TBARS value (1.79 mg malonaldehyde/kg sample) than that of the sample wrapped with PVC film (3.38 mg malonaldehyde/kg sample). The delay of oxidation is probably due to low UV light passing through the film. Additionally, banana peel extract has been reported to be a good source of antioxidants [15,17].
The pH values of minced pork wrapped with the BSF incorporating 5 (%, w/v) banana peel extract and with PVC film during refrigerated storage for 7 days are shown in Figure 3. The pH values of all minced pork samples were significantly different (p < 0.05). The pH of the minced pork wrapped with the BSF slightly increased from 5.99 to 6.96 after storage for 7 days, and was higher than that of the sample wrapped with PVC film. This greater increase in pH for the samples wrapped with the BSF is related to their higher microbial counts.
Microbiological Quality of Minced Pork during Storage
The total plate count and yeast and mold counts of minced pork wrapped with the BSF incorporating 5 (%, w/v) banana peel extract and with PVC film under refrigerated storage for 7 days are shown in Table 5. The sample wrapped with the BSF showed a higher total plate count and higher yeast and mold counts than the sample wrapped with PVC film (p < 0.05). Meat and meat products provide excellent growth media for a variety of microflora (bacteria, yeasts and molds), some of which are pathogens [30]. The mentioned microbial groups are considered to be food spoilage microorganisms, and their presence in high amounts could affect the organoleptic properties of the samples. A relatively high amount of yeasts and molds may also cause the formation of slime and greening on the surface of the sample [18,19]. The total plate count of the sample wrapped with the BSF was 4.14 log CFU/g after 7 days of storage, which was higher than that of the PVC film; the yeast and mold counts of the sample wrapped with the BSF were also higher than those of the PVC film. Starch is the primary component of unripe banana, and it undergoes several changes during ripening [4,31]. Foodborne microorganisms can derive energy from carbohydrates, alcohols, and amino acids. Most microorganisms will metabolize simple sugars such as glucose. Others can metabolize more complex carbohydrates, such as the starch or cellulose found in plant foods, or the glycogen found in muscle foods.
Banana Flour and Peel Powder Preparation
Unripe banana (Cavendish spp.) was obtained from the Phaya Mengrai Kankaset in Chiang Rai, Thailand. Banana flour was prepared according to the method of Alves et al. [32] with some modification. Unripe banana fruits and peel were dipped in 50 ppm of chlorine solution for 15 min, cut into 5-mm slices and immediately dipped in a 0.1% citric acid solution for 2 h. Unripe banana fruit slices were dried at 60 °C for 16 h, while banana peel slices were dried at 55 °C for 16-18 h in a tray dryer. The dried slices were ground in a pulverizing machine (RT-08, Rong Tsong Precision Technology Co., Taichung, Taiwan) and passed through a 60-mesh sieve. Then, banana flour and peel were stored at −20 °C in a vacuum plastic bag before further use.
Banana Starch Preparation
Banana starch was prepared using a water-alkaline extraction process described by Salama [33] with some modification. Banana flour (100 g) was added to 1 L distilled water and macerated at low speed for 20 min; the homogenate was sieved through 60-mesh screens. The collected starch milk was centrifuged at 4000 g for 10 min, and 1 L of 0.2 (%, w/v) NaOH solution was added to the sediment to remove soluble fiber. The cleaned starch sediment was dispersed in distilled water and repeatedly washed until neutrality. The resulting material was then placed on trays and dried in a vacuum oven at 45 °C for 24 h. The dried banana starch was ground and sieved through a 120-mesh sieve.
Banana Peel Extraction
Extraction of banana peel was performed according to the method of Aboul-Enein et al. [33] with some modification. One hundred grams of dried banana peel was dispersed in 1 L of 80% acetone and shaken at room temperature for 24 h. The mixture was filtered through Whatman No. 1 filter paper, and the extraction step was repeated twice. The filtrate was then concentrated at 60 °C in a rotary evaporator before being freeze dried for 15-16 h. The peel extract was freshly prepared each time and was then ready to use.
Preparation of Banana Starch Film
Banana starch solution was prepared according to the method of Pelissari et al. [6]. Banana starch was dissolved at a concentration of 4 (%, w/v) in distilled water and then stirred at 81 °C for 15 min. Glycerol at a concentration of 25 (%, w/v) was added as a plasticizer under gentle stirring, and then the solution was cooled. The film forming solution was prepared by mixing different concentrations, 0, 1, 3, and 5 (%, w/v), of banana peel extract with the banana starch, and then four grams of the solution was cast on rimmed silicone resin plates (18 cm × 21 cm) and dried at 54 °C. The films were conditioned in a desiccator at 50% relative humidity and 25 °C for 48 h. The final dried films were manually peeled.
Thickness
The film thickness was measured by using a hand-held micrometer (Bial Pipe Gauge, Peacock Co., Tokyo, Japan). Nine random locations around each of the ten film samples were used for thickness determination.
Color
The color of the film was determined by using a Color Quest XE (Hunter Lab, Virginia) and expressed as the average L* value (lightness), a* value (redness/greenness) and b* value (yellowness/blueness) of the film. Measurement was performed in triplicate.
Light Transmission and Transparency
The light transmission of the films against ultraviolet (UV) and visible light was measured according to the method of Kaewprachu et al. [23] at selected wavelengths between 200 and 800 nm using a UV-Vis spectrophotometer (G105 UV-VIS, Thermo Scientific Inc., MA, USA). The transparency value of the film was calculated according to the equation (Han and Floros 1997):

Transparency value = −log T600 / x

where T600 is the fractional transmittance at 600 nm, and x is the film thickness (mm).
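As a worked example of this formula (with assumed, illustrative readings): a film transmitting 70% of light at 600 nm with a thickness of 0.047 mm gives a transparency value of about 3.3, close to the values reported above.

```python
# Transparency value = -log10(T600) / x, per Han and Floros (1997).
import math

def transparency(T600, thickness_mm):
    return -math.log10(T600) / thickness_mm

print(round(transparency(T600=0.70, thickness_mm=0.047), 2))  # -> 3.3
```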
Mechanical Properties
Prior to testing the mechanical properties, the films were conditioned for 48 h at 50 ± 5% relative humidity (RH) and 25 °C. The tensile strength (TS) and elongation at break (EAB) were determined according to the method of the American Society for Testing and Materials (ASTM) by using a Universal Testing Machine (Lloyd Instrument, Hampshire, UK). Three samples (2 × 5 cm) with an initial grip length of 3 cm were used for testing. The cross-head speed was set at 30 mm/min, using a 100 N load cell.
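The readings from such a test are typically converted to TS and EAB as sketched below; the maximum force and extension at break are made-up values, while the specimen width (2 cm) and grip length (3 cm) follow the text.

```python
# TS = F_max / cross-sectional area; EAB = extension at break / grip length.
width_mm, grip_mm, thickness_mm = 20.0, 30.0, 0.040
f_max_N, ext_break_mm = 25.0, 4.2          # assumed load-cell readings

area_mm2 = width_mm * thickness_mm
TS_MPa = f_max_N / area_mm2                # 1 N/mm^2 = 1 MPa
EAB_pct = 100.0 * ext_break_mm / grip_mm
print(f"TS = {TS_MPa:.1f} MPa, EAB = {EAB_pct:.1f}%")
```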
Differential Scanning Calorimetry
The thermal properties of the films were analyzed according to the method of Pelissari et al. [6] on a differential scanning calorimeter (DSC; TA-Instruments, model 2920, PA, USA) equipped with a cooling system. The film (7-8 mg) was accurately weighed into an aluminum pan, hermetically sealed, and scanned over the range of −60 to 150 °C with a heating rate of 10 °C/min in a nitrogen atmosphere (20 mL/min). An empty aluminum pan was used as a reference.
Moisture Content
The moisture content of the films was analyzed according to the method of Rhim and Wang [34]. Film samples were cut into squares of 3 cm × 3 cm and first weighed (W1). The film samples were placed in an oven at 105 °C and, after drying for 24 h, the films were weighed again (W2). The water content (WC) was determined as the percentage of initial film weight lost during drying and reported on a wet basis:

WC (%) = ((W1 − W2)/W1) × 100

Triplicate measurements of WC were conducted for each type of film, and the average was taken as the result.
Water Vapor Permeability
The films' WVP was measured by using a modified ASTM method, as described by Kaewprachu et al. [23]. The films were sealed onto a permeation cup containing silica gel (0% RH) with silicone vacuum grease and an O-ring to hold the film in place. The cups were then placed in a desiccator saturated with water vapor at 25 °C. The cups were weighed at 1 h intervals over a period of 8 h, and the films' WVP was calculated as follows:

WVP = (W × X)/(A × t × (P2 − P1))

where W is the weight gain of the cup (g); X is the film thickness (mm); A is the area of exposed film (cm2); t is the time of gain (h); and (P2 − P1) is the vapor pressure difference across the film (Pa). The WVP was expressed as g mm h−1 cm−2 Pa−1.
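A hedged sketch of the post-processing: the slope of cup weight versus time gives W/t, which is combined with the film thickness, the exposed area, and the vapor pressure difference across the film (about 3169 Pa between 100% and 0% RH at 25 °C). The weighing data below are assumed.

```python
import numpy as np

hours = np.arange(1.0, 9.0)                          # hourly weighings over 8 h
weight_g = np.array([0.021, 0.043, 0.062, 0.085,
                     0.104, 0.127, 0.146, 0.168])    # assumed cup weight gains
W_over_t = np.polyfit(hours, weight_g, 1)[0]         # g/h, from a linear fit

X_mm, A_cm2, dP_Pa = 0.040, 28.0, 3169.0             # assumed film and cup values
WVP = W_over_t * X_mm / (A_cm2 * dP_Pa)              # g mm h^-1 cm^-2 Pa^-1
print(f"WVP = {WVP:.3e} g mm/(h cm^2 Pa)")
```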
Film Solubility
The film solubility was determined according to the method of Kaewprachu et al. [15]. The weighed conditioned films were placed in 10 mL of distilled water in a 50 mL centrifuge tube, then shaken at a speed of 250 rpm for 24 h. The un-dissolved debris was removed by centrifugation at 3000 g for 20 min. The pellet was dried at 105 °C for 24 h and weighed. The weight of the solubilized dry matter was calculated as the difference between the initial dry matter weight and the weight of the un-dissolved residue, using the following equation:

Film solubility (%) = ((W0 − Wf)/W0) × 100

where W0 was the initial weight of the film, and Wf was the weight of the un-dissolved film residue.
Antioxidant Properties
Film extract solution was prepared according to the method described in Tongnuanchan et al. [35]. The film (25 mg) was mixed with 3 mL of distilled water. After stirring for 3 h, the mixture was centrifuged at 3000 g for 10 min, and the supernatant obtained was used for the determination of DPPH radical scavenging activity. The film extract solution (1.5 mL) was added to 0.15 mM 2,2-diphenyl-1-picrylhydrazyl (DPPH) in 95% ethanol (1.5 mL), followed by mixing, and then kept in the dark for 30 min at room temperature. The absorbance of the DPPH assay solution was recorded at 517 nm using a spectrophotometer. The films' antioxidant activity was expressed as the percentage of DPPH radical scavenging activity.
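The scavenging percentage is usually computed as below; the absorbance readings are assumed, with A_control denoting the DPPH solution mixed with distilled water instead of film extract.

```python
def dpph_scavenging(A_control, A_sample):
    """DPPH radical scavenging activity (%) from absorbances at 517 nm."""
    return 100.0 * (A_control - A_sample) / A_control

print(round(dpph_scavenging(A_control=0.85, A_sample=0.018), 1))  # ~97.9%
```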
Antimicrobial Activity
Antimicrobial activity of the films was assessed using the agar diffusion method according to Kaewprachu et al. [23]. E. coli (O157: H7) and S. aureus (TISTR 1466) were cultured in Muller-Hinton broth (MHB) and incubated in a shaker incubator at 37 °C for 18-24 h. A loopful of the microorganism working stocks was streaked onto a Muller-Hinton (MH) agar plate and further incubated at 37 °C for 18-24 h to obtain a single colony. The optical density of the cultures was adjusted to the 0.5 McFarland turbidity standard with 0.85% normal saline and then inoculated on MH agar plates using a sterile swab. A film sample was cut into a circular shape (6 mm in diameter) and then placed on the MH agar surface, which had been inoculated with the microorganisms. Ampicillin (30 µg/disc) was used in this study as the antibiotic control for the strains tested. After incubation (37 °C for 18-24 h), the plates were examined for inhibition zones around the film discs.
Application of the Banana Starch Films
Minced pork was purchased from the Tesco Lotus supermarket in Chiang Rai, Thailand. Minced pork samples (50 g each) were placed on polystyrene (PS) trays (9.5 cm × 15.9 cm × 2 cm). The banana starch film incorporated with banana peel extract and the PVC film were used to cover the minced pork in the PS trays. This was performed under atmospheric conditions, avoiding contact between the film and the minced pork [15]. All prepared samples were stored in a refrigerator (4 ± 1 °C) and monitored on days 0, 1, 3, 5, and 7 for quality attributes.
Color
The color of the minced pork was determined according to the method of Kaewprachu et al. [15] by using a Color Quest XE (Hunter Lab, Virginia) and expressed as the average L* value (lightness), a* value (redness/greenness) and b* value (yellowness/blueness). Measurement was performed in triplicate, and then the ∆E value was calculated:

∆E = √((L*0 − L*)² + (a*0 − a*)² + (b*0 − b*)²)

where L*0, a*0, and b*0 denote values at day 0 of storage, and L*, a*, and b* denote values at days 1, 3, 5, and 7 of storage.
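A worked example of the ∆E computation, with assumed (L*, a*, b*) readings for day 0 and day 7:

```python
import math

def delta_E(day0, dayN):
    """Total color difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((x0 - x) ** 2 for x0, x in zip(day0, dayN)))

print(round(delta_E((48.2, 9.5, 11.0), (53.7, 4.1, 14.2)), 2))  # assumed values
```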
Weight Loss
The weight loss of the minced pork was determined according to the method of Herring et al. [36] by comparing the weight of the initial sample with the weight of the sample after each day of storage:

Weight loss (%) = ((W0 − Wf)/W0) × 100 (6)

where W0 was the initial weight of the sample and Wf was the weight of the sample after each day of storage.
pH Determination
The pH of the minced pork was determined according to the method of Kaewprachu et al. [18]. Ten grams of the minced pork was homogenized with 50 mL of chilled distilled water before being subjected to pH measurement by using a digital pH meter (Model pH 510, Eutech Instrument, Ayer Rajah Crescent, Singapore).
Lipid Oxidation
Thiobarbituric acid reactive substances (TBARS) were determined as described by Buege and Aust [37]. One gram of minced pork sample was homogenized with 5 mL of a solution containing 0.0375 g/100 g TBA, 15 g/100 g TCA, and 0.25 M HCl. The mixture was heated at 95 °C for 10 min. The heated sample was cooled and centrifuged at 3600 g for 20 min. The absorbance of the supernatant was measured at 532 nm. For the standard curve, 1,1,3,3-tetramethoxypropane at concentrations ranging from 0 to 10 µg/mL was used. TBARS was expressed as mg malonaldehyde/kg sample.
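A hedged sketch of the quantification step: fit the standard curve, convert a sample absorbance to µg/mL, and scale by the 5 mL of reagent used per 1 g of sample (µg per g equals mg per kg). The absorbance values are illustrative, and treating tetramethoxypropane as malonaldehyde equivalents is a common simplification.

```python
import numpy as np

std_ug_ml = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # standard concentrations
std_A532 = np.array([0.01, 0.13, 0.25, 0.37, 0.50, 0.62])   # assumed absorbances
slope, intercept = np.polyfit(std_ug_ml, std_A532, 1)

A_sample = 0.05                                              # assumed sample reading
conc_ug_ml = (A_sample - intercept) / slope
tbars_mg_per_kg = conc_ug_ml * 5.0                           # 5 mL per 1 g sample
print(f"TBARS = {tbars_mg_per_kg:.2f} mg malonaldehyde/kg")
```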
Microbiological Quality
Microbial analysis was performed according to the method described in Siripatrawan and Noipha [38]. Minced pork (25 g) was homogenized in 225 mL of 1 g/100 g peptone and 0.5% (w/v) NaCl for 2 min using a Stomacher Lab Blender (Seward, Model 400 CTF, Bangkok, Thailand). The prepared samples were spread-plated on Plate Count Agar and incubated at 37 °C for 24 h for total plate counts. For yeasts and molds, the prepared samples were pour-plated on Potato Dextrose Agar and incubated at 25 °C for 5 days. For lactic acid bacteria, the prepared samples were pour-plated on Lactobacillus MRS Agar and incubated at 37 °C for 72 h.
Statistical Analysis
Analysis of variance (ANOVA) was performed. Mean comparison was carried out by Duncan's Multiple Range Test. Significance of differences was defined at p < 0.05. The analysis was performed using an SPSS package (SPSS 16.0 for Windows, SPSS Inc., Chicago, IL, USA).
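Duncan's Multiple Range Test has no direct scipy equivalent; as a stand-in sketch, a one-way ANOVA (with illustrative TS replicates) shows the first step of such an analysis, after which a post hoc comparison such as Tukey's HSD from statsmodels could be substituted.

```python
from scipy import stats

ts_0pct = [31.1, 30.7, 31.8]   # assumed TS replicates (MPa) per extract level
ts_3pct = [12.4, 11.9, 12.8]
ts_5pct = [4.5, 4.3, 4.6]
F, p = stats.f_oneway(ts_0pct, ts_3pct, ts_5pct)
print(f"F = {F:.1f}, p = {p:.5f}")  # p < 0.05 -> significant difference
```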
Conclusions
The present study focused on the development and application of banana starch film incorporated with banana peel extract in order to improve the quality attributes and extend the shelf life of minced pork. The BSF presented lower mechanical properties and a higher water vapor permeability compared with PVC film. With regard to barrier properties against UV light, the BSF performed better than PVC film. Furthermore, the application of the BSF could maintain some quality attributes of minced pork, especially with respect to lipid oxidation. However, the BSF was not effective for the inhibition of microbial growth during refrigerated storage. Moreover, the results showed that the BSF may be used as active packaging for various food products, as it showed antioxidant (dominant) and antimicrobial properties. | 2021-03-18T05:13:21.136Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "5c02248aa0c1b2169e853bd874525c4b4e493712",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/26/5/1406/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5c02248aa0c1b2169e853bd874525c4b4e493712",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
219761502 | pes2o/s2orc | v3-fos-license | Uremic leontiasis ossea inducing respiratory failure in a patient with stage 5 chronic kidney disease
Uremic leontiasis ossea (ULO) is one of the most severe complications of renal osteodystrophy (RO) caused by chronic kidney disease (CKD) and is characterized by cranial and facial bone thickening. Proper diagnosis and treatment are the main factors in avoiding severe esthetic and functional disorders. 1,2

A 61-year-old man with CKD, on hemodialysis therapy for 6 years, was referred to our department because of respiratory failure. The patient's medical history revealed a 2-month hemodialysis episode during his hospitalization in the Department of Gastrology and Hepatology due to hepatorenal syndrome, probably induced by alcoholic hepatitis. Two years later, the patient had an arteriovenous fistula created and started long-term hemodialysis. In 2014, single-photon emission computed tomography of the parathyroid glands, performed owing to high levels of total parathormone and phosphorus, revealed high radioisotope marker uptake in the left lower parathyroid gland. The patient did not agree to surgical or pharmacological treatment and, additionally, he very often omitted hemodialysis sessions.

On admission, he had pulmonary edema with a saturation of 64%. Physical examination showed a constrained head position and facial deformities, such as enlarged facial bones, malocclusion, a deformed palate, splayed dentition, and narrowing of the pharynx. Blood test results showed high levels of total calcium (2.58 mmol/l), phosphorus (2.02 mmol/l), parathormone (1168 pg/ml), and alkaline phosphatase (511 U/l), as well as a low level of 25-hydroxyvitamin D (4.47 ng/ml). Chest X-ray showed cephalization of the pulmonary vessels, peribronchial cuffing, increased heart size, and fluid in the right pleural space.

The patient underwent hemodialysis with ultrafiltration of 3500 ml and, because of progressive respiratory failure, he was intubated and transferred to the intensive care unit, where sedation, mechanical ventilation, and continuous veno-venous hemodialysis with ultrafiltration were used. A computed tomography scan of the head demonstrated hypertrophy of the maxilla and the mandible with almost complete obliteration of the maxillary sinuses (FIGURE 1A and 1B). Symmetric replacement of the osseous matrix of the mandible and the maxilla by expansile soft tissue was found, including tunneling of this soft tissue through the residual osseous matrix (FIGURE 1C and 1D), which corresponded to a form of RO, namely ULO. The patient received alfacalcidol at a dose of 1 μg/d, and calcium and phosphorus levels were balanced by total parenteral nutrition.

Because of the changes in the facial skeleton, a tracheotomy had to be performed to enable respiration. Finally, when the patient no longer required ventilation therapy, he was transferred to the Department of Nephrology. During the hospitalization, the patient could not speak or eat because of the RO changes, thus percutaneous endoscopic gastrostomy was performed. Additionally, he underwent intermittent hemodialysis. However, progressive pneumonia and cachexia led to cardiac arrest, and the patient died after 16 days of treatment.

Differential diagnosis of ULO should include fibrous dysplasia, cherubism, giant cell tumor, Paget disease, gigantism, hyperparathyroidism, and RO. According to the available data, leontiasis ossea is a very rare manifestation, featuring widening of the interdental spaces, flattening of the nares and nasal bridge, and jaw enlargement. Soft tissue tunneling through the residual osseous matrix remains a key imaging finding. 3 Early treatment of secondary hyperparathyroidism, regular hemodialysis therapy, 4 and maxillofacial surgery may be the proper treatment strategy in patients with ULO.
ARTICLE INFORMATION
ACKNOWLEDGMENTS Computed tomography was performed in cooperation with Prof. Andrzej Urbanik, head of the Department of Radiology at Jagiellonian University Medical College, Kraków, Poland.
CONFLICT OF INTEREST None declared.
OPEN ACCESS This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited, distributed under the same license, and used for noncommercial purposes only. For commercial use, please contact the journal office at pamw@mp.pl.
FIGURE 1 Computed tomography in a patient with leontiasis ossea. A - coronal view: diffuse extension of the maxilla (arrow); B - coronal view, bone window: diffuse extension of the maxilla, decreased maxillary sinus volume (arrow); C - axial view: diffuse extension of the mandible (arrow); D - axial view, bone window: diffuse extension of the mandible with the characteristic pattern of osseous matrix remodeling with soft tissue tunneling (arrow)

Gołasa P, Chowaniec E, Krzanowski M, et al. Cast-like calcification in the superior vena cava in a young woman with lupus nephritis on hemodialysis. Pol Arch Intern Med. 2019; 129: 712-713. | 2020-06-04T09:05:37.220Z | 2020-05-28T00:00:00.000 | {
"year": 2020,
"sha1": "3f729b879e50142fe7c157ac34713df206b52f2a",
"oa_license": null,
"oa_url": "https://www.mp.pl/paim/en/node/15400/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "07aff81a00e91678f159ec840b04f090ce2481b1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |