Vocational High School Infrastructure Conditions and The Challenges in Facing The Era of Literation and Industrial Revolution 4.0
Infrastructure is an important component supporting learning operations in Vocational High Schools (SMK). Managing infrastructure so that it keeps up with developments in technology and information is a challenge for vocational schools. This study aims to: 1) determine the condition of the infrastructure and facilities of the lightweight vehicle engineering departments of Vocational High Schools (SMK TKR) in Yogyakarta; and 2) describe the challenges SMK, especially SMK TKR, face in the field of infrastructure in the era of literacy and Industrial Revolution 4.0. This is quantitative research using a survey method and a literature review. The subjects were the Heads of the TKR Department or the Heads of the TKR workshop from 10 state and private vocational schools in Yogyakarta. Data were collected by questionnaire and analyzed using descriptive techniques. The results showed that: 1) some infrastructure and facilities at SMK TKR in Yogyakarta have not met the demands of Indonesian Minister of Education Regulation No. 40 of 2008 or of the Curriculum 2013 syllabus; and 2) it is very difficult for SMK TKR infrastructure and facilities to keep pace with technological advances, because technology develops very quickly. Collaboration with industry and the use of information technology in the learning process are important for overcoming infrastructure limitations in SMK TKR.
Introduction
Vocational high schools are responsible for preparing students to enter the workforce and for developing a professional attitude (Government Regulation of the Republic of Indonesia Number 29 of 1990, Article 3). To achieve this, they organize education and training in specific competencies, so that graduates possess qualified competencies and a professional attitude. Graduates should thus find it easier to find jobs, reducing the unemployment that stems from unskilled human resources.
In fact, the situation runs counter to the goal of vocational high school (SMK): the number of unemployed SMK graduates is still high. Based on data from the Central Statistics Agency (BPS), the percentage of unemployed SMK graduates was 12.65 percent in 2015; in 2016 it fell to 11.11 percent; in 2017 it rose to 11.41 percent; and in 2018 it was 11.24 percent. These figures are higher than the open unemployment rates of high school graduates (7.95 percent), elementary school graduates (2.43 percent), and junior high school graduates (4.8 percent).
The unemployment of SMK graduates in the Special Region of Yogyakarta is not much different from the national picture: SMK graduates still rank first in unemployment compared with graduates of other education levels.
The causes of these problems need to be examined. The Minister of National Development Planning / Head of Bappenas, Bambang Brodjonegoro, attributed unemployment among SMK graduates to several factors: 1) curricula that cannot adapt to the needs of the times; 2) the dominance of privately owned SMKs with small capacity, whose managing foundations lack the capacity for teacher development, let alone curriculum development involving companies; and 3) a shortage of productive teachers, that is, teachers who are experts in their vocational fields (Yoga Sukmana, 2019).
The conditions described by the Minister for National Development Planning / Head of Bappenas also broadly describe the Special Region of Yogyakarta, where private SMKs dominate: according to SMK principal data, of 220 SMKs in Yogyakarta, 170 are private and the remaining 50 are state schools (Data Pokok SMK, 2020). Because the foundations that run private vocational schools vary widely in condition, the quality of the SMKs they operate varies as well.
Based on the description above, the problems surrounding vocational high schools in Indonesia are numerous and complex. This is understandable given the special character of vocational education, which must keep up with the times and with technology in curriculum, learning, infrastructure, and teacher competence. To unravel these problems and find solutions and future development directions, it is necessary to map and identify problems at the school level. Vocational high schools must anticipate changes and developments in the world of work in today's era of literacy and Industrial Revolution 4.0. The industrial revolution has changed work processes in industry, emphasizing effectiveness and efficiency. In the future, some work will be done by robots, which will change the demand for workers. Labor-market demand has shifted, and many economists warn that this revolution risks damaging the labor market: automation will replace workers, especially low- and middle-skill workers, and education will become redundant if it is not quickly adapted to changes in production (Vu, T. L. A., & Le, T. Q., 2019). To eliminate the gap between graduate competencies and the competencies actually needed in the workforce, industry must be actively involved in curriculum development and learning evaluation (Suroto, S., & Hung, N. T., 2018). Given these changes and developments, vocational high schools need a strategy for preparing both the learning process and suitable infrastructure.
Methodology
This is quantitative research using a survey method and a literature review. The assessment covered three aspects: quantity, quality, and relevance. The subjects were the Heads of the TKR Department or the Heads of the TKR workshop from 10 state and private Vocational High Schools (SMK) in Yogyakarta. Data were collected by questionnaire and analyzed using descriptive techniques. Indonesian Minister of Education Regulation No. 40 of 2008 regulates practice facilities, namely furniture, equipment, educational media, and other equipment for the engine, electrical, chassis, and power train workshops. The condition of these facilities was viewed from the three aspects above. The conditions of the automotive engine, electrical, chassis, and power train workshops in SMK TKR are as follows. SMK TKR infrastructure still falls short of the standards of National Education Minister Regulation No. 40 of 2008. In terms of quantity, in many schools the equipment does not match the number of students. In terms of quality, only a small proportion of SMKs have complete equipment in good condition; in most, the equipment is damaged or incomplete. Maintenance management is therefore very important for preserving both the quantity and the condition of the infrastructure; so far, the weak points in SMKs have been the limited number of technicians and low maintenance funding. In terms of relevance, most of the equipment in SMK TKR is still relevant to technological developments, but there are still schools whose infrastructure conditions are no longer relevant.
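The descriptive analysis used in the methodology above can be sketched as follows. The scoring scale and all data here are hypothetical, invented only for illustration; they are not the study's actual instrument or results.

```python
# Illustrative sketch (hypothetical data, not the study's): summarizing
# questionnaire responses on the three assessed aspects -- quantity,
# quality, and relevance -- across 10 surveyed SMK TKR departments.
from statistics import mean

# Hypothetical scores (1 = does not meet the standard ... 4 = fully meets it),
# one entry per responding school, keyed by aspect.
responses = {
    "quantity":  [2, 3, 2, 4, 3, 2, 3, 2, 3, 4],
    "quality":   [2, 2, 3, 3, 2, 2, 3, 2, 2, 3],
    "relevance": [3, 3, 3, 4, 3, 2, 3, 3, 4, 3],
}

def describe(scores):
    """Basic descriptive statistics of the kind used in such survey analyses."""
    meets = sum(s >= 3 for s in scores)  # schools at or above the standard
    return {"mean": mean(scores), "pct_meeting": 100 * meets / len(scores)}

for aspect, scores in responses.items():
    print(aspect, describe(scores))
```

This kind of per-aspect summary is what lets the study report findings such as "only a small proportion of SMKs have complete equipment in good condition."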
Result and Discussion
Various systematic efforts need to be implemented to overcome the existing deficiencies. The condition of infrastructure is vital to meeting the characteristics of vocational high schools, which include: orientation to individual performance in the workforce; justification based on workforce demand; a curriculum focused on improving students' psychomotor, affective, and cognitive aspects; success benchmarks that extend beyond the school to sensitivity to workforce development; adequate facilities and infrastructure; and support from the community (Yahya, M., 2018).
The condition of the infrastructure and facilities of SMK TKR in Yogyakarta based on the demands of the Curriculum 2013 syllabus
The curriculum currently applied in SMK is the 2013 curriculum; to achieve its targets, learning quality must match the curriculum's demands in methods, models, and assessment techniques. The infrastructure aspect is very important for achieving the stated learning objectives, especially because the 2013 curriculum for SMK TKR emphasizes contextual learning and includes some of the latest technology in the automotive field. In addition, SMK TKR centers on practical competencies, whose appropriate assessment technique is assessment of practice, which requires sufficient facilities (Pambayun, N. A. Y., & Haryana, K., 2020).
In this study, the practical facilities and infrastructure needed for the automotive engine, electrical, chassis, and power transfer systems were identified from the existing learning outcomes, and then used to assess the condition of the infrastructure at SMK TKR. The results show that there are still SMKs whose infrastructure is unsatisfactory in quantity, quality, and relevance, even though the respondents were public and private SMKs with good reputations in Yogyakarta. If the survey were extended to all vocational high schools in Yogyakarta, the gap between actual and ideal infrastructure conditions would surely be even larger.
The Challenges of Vocational High School to Face the Era of Literacy and Industrial Revolution 4.0
Vocational high schools find it difficult to keep their infrastructure current with technological developments
Vocational high schools must be able to anticipate and respond to developments in the world of work with relevant and systematic programs such as curriculum adjustments, development of facilities and infrastructure, and breakthroughs in teaching-learning activities and assessment; students should be adapted to Industry 4.0 (Alias, S. Z., Selamat, M. N., Alavi, K., & Arifin, K., 2018). Technology, especially in the automotive sector, develops very fast, and the variety of technology applied to vehicles is very high. Based on the research data on vocational high school infrastructure in Yogyakarta, keeping infrastructure in line with technological developments and the needs of the world of work is very difficult for both public and private SMKs, given all the existing limitations. Moreover, the very dynamic changes in the demand for manpower and in the competencies required of workers make the fulfillment of supporting facilities even harder to pursue. Vocational high schools must equip graduates with abilities suitable for career development in the 21st century; institutions must prepare students to face increasingly complex challenges through the 4Cs: critical thinking and problem solving, creativity and innovation, communication, and collaboration (Yahya, M., 2018).
The need for labor will change
In the future, many jobs will be replaced by machines; the automotive industry tends to develop vehicles that are easy to maintain, with plug-and-play technology, so that maintenance and repair work is minimized. Vocational high schools need to anticipate this so that there is no over-supply of labor as existing jobs require fewer people. In the current world of work, higher-quality jobs can be created, but digitization can also lead to job shifting (Hoffmann, R., DGB Vorstand, 2015). With the digitalization and modernization of industry, work has become more effective and efficient, but less labor is required. Industry 4.0 and future manufacturing therefore require both theoretical and vocational skills to master future complex technologies. This technological development must always be striven for, and its impacts need to be considered and anticipated. The impact will be systemic: the Industrial Revolution 4.0, characterized by high technology, smart machines, and robots with artificial intelligence, will bring major changes to the labor market and to job structures at various levels. More specifically, labor supply and demand, the structure of the workforce, and the nature of work will be greatly affected (Junaid, S., Gorman, P. C., & Leslie, L. J., 2018). The dynamics of future professional jobs require not a fixed qualification profile but continuous competency development throughout professional life, from vocational education up to retirement. Lifelong learning can therefore be considered a requirement for a long-lasting working career (Gebhardt, J., Grimm, A., & Neugebauer, L. M., 2015).
Everyone must be able to develop themselves and to take advantage of technological developments and digital communication to anticipate the impact of Industrial Revolution 4.0.
Strengthening cooperation with industry and industrial apprenticeships is important
Strong cooperation with the world of work or industry is needed to fulfill infrastructure requirements and transfer technology. Education 4.0 encourages students to adopt real-world skills representative of their jobs (Hariharasudan, A., & Kot, S., 2018). This is important because vocational high schools have difficulty developing their infrastructure independently. A strong relationship between vocational education institutions and industry should be maintained so that schools can keep pace with rapid technological developments in industry. Other vocational schools should also establish partnerships with industry to organize industry-standard classes, since this benefits all parties, particularly the students (Suroto, S., & Hung, N. T., 2018).
Training or apprenticeships in industry are needed for SMK TKR teachers to develop competencies and learn about technological developments in industry. Expertise certification for SMK teachers is mandatory to ensure teacher competence; in addition, it has a positive impact on teachers in terms of career advancement, competence, and motivation for self-development (Pambayun, N. A. Y., Haryana, K., & Yuswono, L. C., 2020). Improving educators and education personnel in vocational high schools covers provision, distribution, qualification, certification, training, career and welfare, appreciation, and protection (Setiyawami, S., & Sugiyono, T. J. R., 2019).
The apprenticeship program / industrial work practice for students needs to be strengthened. Special instruments need to be developed to regulate the key components of the apprenticeship curriculum, so that apprenticeship activities are well directed and achieve the expected goals. Developing vocational high schools in Indonesia in fact requires revitalization in many interrelated aspects: the learning system, education units, students, and educators and education personnel (Setiyawami, S., & Sugiyono, T. J. R., 2019).
Infrastructure management needs to be strengthened
The management of infrastructure for maintenance and development plays an important role: it minimizes existing limitations and, in procurement, allows a priority scale of the most essential practical equipment to be determined. The existing rules and regulations for managing SMK infrastructure are adequate, but their implementation needs to be improved, with strong synergy from every element in the school. Management of facilities and infrastructure in vocational high schools can be optimized through the following stages: 1) planning based on needs analysis, determining priority scales, calculating the budget, and preparing proposals; 2) procurement, carried out by first disbursing funds in accordance with the school activity plan and budget to purchase school equipment; 3) maintenance, carried out by all school components, all of whom are responsible for school facilities and infrastructure; 4) inventory, carried out by recording, coding, and reporting; and 5) elimination, carried out by sorting out items that are no longer feasible and replacing them with new ones (Agustin, H. Y., & Permana, J., 2020). With current technological developments, infrastructure management should move to a digital system so that all data can be better integrated and organized. This would also let the central government monitor the condition of vocational high schools more easily, practically, effectively, and efficiently.
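The inventory and elimination stages described above could be realized digitally along the following lines. This is only a minimal sketch; all field names, inventory codes, and items are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch of a digital inventory record system covering the
# inventory stage (recording, coding, reporting) and the elimination
# stage (sorting out infeasible items). Codes and items are hypothetical.
from dataclasses import dataclass

@dataclass
class Equipment:
    code: str        # inventory code, e.g. "TKR-ENG-001" (hypothetical scheme)
    name: str
    condition: str   # "good", "damaged", or "incomplete"

def report(items):
    """Reporting: count items per condition."""
    counts = {}
    for item in items:
        counts[item.condition] = counts.get(item.condition, 0) + 1
    return counts

def to_eliminate(items):
    """Elimination: select items no longer feasible for replacement."""
    return [i.code for i in items if i.condition == "damaged"]

workshop = [
    Equipment("TKR-ENG-001", "engine stand", "good"),
    Equipment("TKR-ELE-002", "wiring trainer", "damaged"),
    Equipment("TKR-CHA-003", "brake trainer", "incomplete"),
]
```

Keeping records in this structured form is what would allow the integrated, centrally visible data the paragraph calls for.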
Utilization of information technology and technological developments to minimize gaps
Information technology develops rapidly and should be used to reduce the gap between the available practical facilities and ideal conditions. Moreover, students' learning styles have shifted toward learning from the internet, which should be accommodated by technology that allows them to learn independently. The workforce qualifications needed in the Industrial Revolution 4.0 have become a question for many parties, especially vocational high schools tasked with forming successful workers and leaders in that era. To answer this question, not only must educational content be revised, but skills-development methods in line with the Industrial Revolution 4.0 are also needed (Richert, A., Shehadeh, M., Plumanns, L., Grob, K., Schuster, K., & Jeschke, S., 2016).
Industry 4.0 primarily aims to unify information technology and industry; in other words, Industry 4.0 can be interpreted as the smart factory (Baygin, M., Yetis, H., Karakose, M., & Akin, E., 2016). Employment and unemployment are common problems arising from Industry 4.0, especially in its early stages, when the workforce fails to adapt to new industrial working conditions and there is a strong shift in the employment structure between sectors. This has already become reality: jobs in the labor market have changed, and robots have taken over manual work from humans (M. R. Cabrita, V. Cruz-Machado, & S. Duarte, 2018). Virtual reality technology can be a solution for reducing the cost of certain high-cost objects. Students currently adapt very well to learning and practicing in online environments such as the cloud, which can improve personal, interaction, and communication skills. Augmented reality (AR) and virtual reality (VR) environments may not be as realistic as factories and workshops, but they are safe and facilitate learning new skills, such as dealing with in-process issues, while minimizing the risk of harm (Vu, T. L. A., 2018).
Conclusion
The conclusions of this study are: 1) some infrastructure and facilities at SMK TKR in DIY have not met the demands of Indonesian Minister of Education Regulation No. 40 of 2008 or of the Curriculum 2013 syllabus; and 2) it is very difficult for SMK TKR infrastructure and facilities to keep pace with technological advances, because technology develops very quickly. Collaboration with industry and the use of information technology and technological developments in the learning process are important for overcoming infrastructure limitations in SMK TKR.
Risk Factors for Postoperative Complications after Percutaneous Nephrolithotomy
Methods: A prospective study was made of all percutaneous nephrolithotomies performed by the standard technique over 1.5 years at Bir Hospital. Possible demographic, preoperative, and intraoperative variables were included, and patients were followed up postoperatively for any complications. All complications were classified according to the modified Clavien scoring system and analyzed to identify the prognostic variables.
INTRODUCTION
The Clavien classification is widely used in grading surgical complications. This system was modified in 2004 to increase its accuracy and applicability. We present the results of PCNL with the aim of determining the preoperative and perioperative prognostic factors of complications associated with PCNL.
METHODS
After ethical clearance for the study from the subject committee of the department of urology and the institutional review board, a prospective observational study of all patients who underwent PCNL at Bir Hospital, Kathmandu, Nepal, was undertaken from August 2015 to January 2017. Informed consent for the study was taken from all patients.
All patients were assessed preoperatively, and records were made of demographic parameters and medical history, including recurrent urinary tract infections, hematuria, pain, and medical comorbidities. In addition to routine investigations, CT urography, with stone density measured on the unenhanced film, was done for all patients. A staghorn stone was defined in the study as a pelvic stone with branches extending to all three poles, and a partial staghorn as a pelvic stone with branches extending to calyces in one or two polar calyces.
PCNL was performed under general or spinal anaesthesia. A six French ureteral catheter was first placed in the ipsilateral pelvicalyceal system by cystoscopy or ureteroscopy in the lithotomy position under fluoroscopic guidance. Only PCNLs done in the prone position, with transpapillary puncture made under fluoroscopic guidance using an 18 gauge two-part needle after retrograde opacification of the pelvicalyceal system via the ureteral catheter, were included in the study. Tract dilatation was done either by the single-shot technique or by serial telescopic dilatation, through single or multiple tracts. Nephroscopy was done with a 21 French rigid nephroscope. Large stones were fragmented with a pneumatic lithotripter. Small stones and fragments were removed either by continuous normal saline irrigation using a pump or with forceps. The exit strategies were total tubeless, tubeless, or standard. Intraoperative variables studied included operative time and the number and location of the tracts.
Postoperatively, patients were managed with intravenous fluids, antibiotics, and analgesics. They were discharged when clinically stable. Patients were routinely followed up at 2 weeks with an X-ray and ultrasound of the KUB, and those who were not stone free were followed up again at 4 weeks after the operation.
Any complications during this period were classified by the modified Clavien score for PCNL. Complications classified as Clavien 3a, 3b, 4a, 4b, and 5 were categorized as major complications. In cases where patients had more than one complication, only the highest Clavien score was included. Final analysis was done for patients fulfilling all the inclusion criteria.
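The grading rule described in this paragraph can be sketched as follows. The string encoding of the grades is an assumption made here for illustration; the major/minor split and the highest-grade rule are as stated in the paper.

```python
# Modified Clavien grading as described above: grades 3a, 3b, 4a, 4b, and 5
# count as "major"; when a patient has several complications, only the
# highest grade is retained.
CLAVIEN_ORDER = ["1", "2", "3a", "3b", "4a", "4b", "5"]  # lowest to highest
MAJOR = {"3a", "3b", "4a", "4b", "5"}

def highest_grade(grades):
    """Keep only the highest modified Clavien grade for one patient."""
    return max(grades, key=CLAVIEN_ORDER.index)

def is_major(grade):
    """Major complication per the paper's categorization."""
    return grade in MAJOR
```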
RESULTS
During the study period, a total of 289 PCNL were performed by two consultant urologists. Two hundred and forty-six patients fulfilled the inclusion criteria, and 43 patients had to be excluded for various reasons, as shown in table 1.
The demographic variables studied and the outcomes are summarized in table 2. The mean age of the cohort was 37.56 years and the mean BMI was 25.08 kg/m². Males made up 60.17% of the cohort. The most common presentation was pain in the abdomen or flank, in 59.76%. Fifty-six patients had undergone surgery on the ipsilateral side in the past: 13% with a history of open surgery and 4.88% with a history of PCNL. Full staghorn calculi were present in 7.32% and partial staghorn in 19.92%. The average number of calyces involved was 1.3. The mean stone burden, calculated from the non-enhanced CT scan as the size of an ellipsoid, was 411.26 mm², and the mean stone density was 1051.58 HU. Renal anomalies such as horseshoe kidney and ectopic kidney, and collecting-system anomalies such as bifid pelvis, were present in 6.5% of the cohort.
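The paper reports stone burden in mm² "as the size of an ellipsoid" without giving the exact formula. A common convention in the stone literature is the elliptical area, π/4 × length × width; the sketch below uses that convention as an assumption, not necessarily the authors' exact method.

```python
import math

def stone_burden(length_mm, width_mm):
    """Elliptical stone burden in mm^2 (assumed convention: pi/4 * L * W)."""
    return math.pi / 4 * length_mm * width_mm

# e.g. a 20 mm x 10 mm stone
print(round(stone_burden(20, 10), 2))  # 157.08
```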
Single upper pole access, either supracostal or infracostal, to the kidney was chosen in 15.45%, and 30.08% of PCNL were done with multiple tracts. The mean operative time, i.e., time from the first puncture to exit, was 45.54 minutes, ranging from 15 to 125 minutes. The mean postoperative hospital stay was 3.38 days.
As summarized in table 4, a total of 101 patients (41.06%) had some postoperative complication, with 9.35% developing major complications. The most common complication was postoperative fever.
The relation of different variables to outcome is briefly summarized in table 2. Age, BMI, gender, clinical presentation, history of previous surgery, and ASA score did not significantly correlate with complications. Diabetes was significantly related to the development of complications, with 75% of diabetic patients developing some form of complication, the most common being postoperative fever. Among the stone characteristics, the preoperatively estimated stone burden (p = 0.0023), the number of calyces involved by the stones (p = 0.0002), and the presence of staghorn calculi were significantly associated with the development of postoperative complications. Similarly, multiple tracts (p = 0.0151) and longer operative time (p < 0.001) were needed in the patients who went on to develop complications.
Bleeding was the most common major complication. Of the 7 patients (2.84%) who required blood transfusion, only one had a solitary stone with PCNL done through a single tract. Three patients (1.22%) in the entire cohort underwent angioembolization for development of a pseudoaneurysm. Seven patients (2.84%) required placement of intercostal drains for chest complications. One patient had a colonic perforation, managed with a controlled colocutaneous fistula. No patients required ICU care, nor was there any mortality. The presence of comorbidity has been reported to increase the risk of complications during or after PCNL.
Major complications after PCNL have been reported to be more common in patients with diabetes mellitus. Among the patients with comorbidities in our study, only those with diabetes had a higher chance of complications. PCNL is considered safe in patients with a history of renal surgery on the ipsilateral side, and this was also reflected in our study.
This was also seen in our study, with more febrile episodes, bleeding, and chest complications occurring in this population. Laterality, stone density, and calyceal location of stones were not significantly associated with complications.
The study was conducted in a single center over a short period of time. Further studies taking into account both high- and low-volume centers, and both general and specialist centers with multiple observers, are recommended for better validation of the results.
CONCLUSIONS
PCNL has relatively few complications. Diabetic patients are more prone to develop complications. Larger stone burden, involvement of multiple calyces by stones, and staghorn calculi are associated with the need for multiple tracts and longer operative time, thus predisposing to a higher incidence of complications.
Prevalence of Vitamin D Inadequacy Among Chinese Postmenopausal Women: A Nationwide, Multicenter, Cross-Sectional Study
Purpose: We aimed to investigate the status of serum 25-hydroxyvitamin D [25(OH)D] among Chinese postmenopausal women in a multicenter cross-sectional study. Methods: Non-institutionalized postmenopausal women aged ≥55 years were recruited from urban and rural areas in 7 geographically different regions in China. Subject enrollment was executed during the summer and the winter. Vitamin D insufficiency and deficiency were defined as 25(OH)D < 30 and < 20 ng/mL, respectively; 25(OH)D was measured by liquid chromatography-tandem mass spectrometry. Women were referred for dual-energy x-ray absorptiometry (DXA) if they had a medium-to-high fracture risk suggested by the Osteoporosis Self-Assessment Tool for Asians (OSTA). Results: Among all subjects, 91.2% (1,535/1,684, 95% CI: 89.7, 92.5) had vitamin D insufficiency and 61.3% had vitamin D deficiency (1,033/1,684, 95% CI: 59.0, 63.7). The prevalence of vitamin D deficiency was significantly higher in urban dwellers (64.9 vs. 57.7% in rural, P = 0.002) and in winter-enrolled subjects (84.7 vs. 41.3% in summer, P < 0.0001). The prevalence of vitamin D inadequacy did not show an increasing trend by latitude and was numerically lower in women who had a high fracture risk or osteoporosis. A non-curvilinear change of intact parathyroid hormone (iPTH) levels was observed at 25(OH)D > 16.78 ng/mL. Conclusions: The prevalence of vitamin D inadequacy was remarkable among Chinese postmenopausal women and independent of fracture risk assessed by OSTA or osteoporosis suggested by DXA. Winter season and urban residence, but not latitude, were significantly associated with a higher likelihood of vitamin D deficiency. The optimal vitamin D status for iPTH and bone-related outcomes merits further investigation in this population.
INTRODUCTION
Vitamin D plays an important role in bone health by increasing intestinal absorption of calcium and phosphate, and it acts as a critical component in the regulation of bone turnover. Sunlight exposure is the primary source of vitamin D, followed by dietary intake of vitamin D-rich or fortified foods, where available. Vitamin D deficiency, as measured by serum 25-hydroxyvitamin D [25(OH)D] (1), is associated with increased bone turnover, muscle weakness and falls, osteoporosis and fractures, and endocrine disorders including rickets in the young, osteomalacia in the elderly, and secondary hyperparathyroidism (2,3). Vitamin D inadequacy is a worldwide problem with unfavorable consequences, especially in elderly women (4)(5)(6). Global evidence has shown a considerable prevalence of vitamin D insufficiency among North American (6), European (7,8), and Asian populations (9). However, the vitamin D status of populations in Southeast Asian countries has received relatively less attention. Despite efforts to determine the optimal or sufficient concentration of serum 25(OH)D, there is still no universal consensus on a definition of vitamin D deficiency or insufficiency (i.e., 25(OH)D < 30 or 20 ng/mL), as reflected in the recent disagreement between the Institute of Medicine (IOM) guidelines (10) and the recommendations made by the Endocrine Society (11).
Postmenopausal women are at high risk of vitamin D deficiency. Maintenance of serum 25(OH)D may protect this population from adverse skeletal outcomes (1). Observational studies in North and Northeast China (12)(13)(14)(15)(16)(17)(18) investigated vitamin D inadequacy across various urban populations. However, different cutoff values were used to define vitamin D deficiency, and non-standardized assay methods introduced variation in serum 25(OH)D measurement, making comparison of study results difficult. Moreover, epidemiological data focusing on Chinese postmenopausal women are insufficient, and the current standard laboratory method for measuring 25(OH)D, liquid chromatography-tandem mass spectrometry (LC-MS/MS) (19), was not universally used. Therefore, those studies may not have accurately estimated the prevalence of vitamin D inadequacy in postmenopausal women in China. We therefore aimed to describe the distribution of serum 25(OH)D levels among Chinese postmenopausal women living in both rural and urban areas by conducting a nationwide, multicenter, cross-sectional, epidemiological study. Secondly, risk factors for vitamin D deficiency and the relationship between 25(OH)D and intact parathyroid hormone (iPTH) were explored in this sampled population.
MATERIALS AND METHODS

Study Design
This was a nationwide, multicenter, cross-sectional study to investigate the distribution of 25(OH)D levels among rural and urban-dwelling Chinese postmenopausal women from 7 geographic regions in China at different latitudes (from 45.75° to 23.17° north, Supplementary Figure 1). The selected regions represented a variety of geographic locations by latitude in China in order to assess regional differences in serum 25(OH)D levels and risk factors for vitamin D deficiency. One tertiary hospital in each region was selected as the coordinating site based on the site's location (a major city in the region) and the investigator's medical specialty (endocrinologist, orthopaedist, or rheumatologist). Subject enrollment began in July 2013 and was completed in February 2014. Considering the seasonal impact on sunlight exposure, a two-season enrollment strategy (summer vs. winter) was executed for the study. The study sample size was evenly allocated across the 7 sites, where subjects were equally enrolled from the urban and rural areas, and from the summer and winter seasons, respectively. MSD designed and sponsored the study and analyzed the data. The study was conducted in accordance with the guidelines of the International Conference on Harmonization and local regulatory guidance and was approved by the independent ethics committees of all sites before the initiation of any study-related procedure.
Subject and Enrollment
Women were eligible if they were Chinese, aged 55 years or above, postmenopausal (defined as absence of menses, either naturally or surgically, for at least a year by self-reporting), and willing to comply with study procedures as judged by the investigators. Women were excluded if they were hospitalized or institutionalized (i.e., patients in long-term care or elderly care facilities), had severe kidney disease under a physician's care, were mentally or legally incapacitated, had other conditions that might preclude the completion of the health-related questionnaire or the informed consent process, or had participated in a study with an investigational medicinal product or device within 30 days prior to giving informed consent.
Subject enrollment was conducted in the summer (between July and September 2013) and the winter (between January and February 2014), respectively, in one calendar year for six geographic regions where seasons are distinct. A single enrollment period (between December 2013 and January 2014) was applied in the southern region (Guangdong) due to limited seasonal variation and relatively warmer climate. The investigators obtained a population list for potential participants from local rural committees or urban residents' committees and recruited subjects accordingly with the assistance from these committees via approved telephone contact, advertisements, or posters. For the recruitment of rural subjects, one representing area (one to two villages) was selected for community-based recruiting of women by approved posters and broadcasting. Women were then screened in an outpatient clinic in the coordinating site. Written informed consent was obtained from all subjects or their legal representatives before study screening.
Clinical Assessment and Laboratory Measurement
A single study visit was arranged for the subject who was assessed as per study procedure onsite. Demographic data, medical history (non-active diseases/diagnoses) for the previous 5 years, and medication use (including vitamin D and calcium supplements) within the 4 weeks prior to the study visit were recorded for each subject. Height and weight were measured with shoes and heavy clothing removed using a standardized portable stadiometer and weighing scale. Fracture risk was assessed using the Osteoporosis Self-Assessment Tool for Asians (OSTA) (20), which has a demonstrated role in predicting fracture risks in Asian populations.
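As an aside, the OSTA score used for the DXA referral decision has a simple closed form. The sketch below follows the commonly published OSTA convention (index = 0.2 × (body weight in kg − age in years), truncated to an integer, with cut-offs at −1 and −4); these details come from the OSTA literature, not from this paper:

```python
def osta_index(weight_kg: float, age_years: float) -> int:
    # OSTA index: 0.2 * (body weight in kg - age in years), truncated to an integer
    return int(0.2 * (weight_kg - age_years))

def osta_risk(index: int) -> str:
    # Commonly published OSTA cut-offs: > -1 low, -4 to -1 medium, < -4 high risk
    if index > -1:
        return "low"
    if index >= -4:
        return "medium"
    return "high"

# A 70-year-old woman weighing 50 kg scores -4: medium fracture risk
risk = osta_risk(osta_index(50, 70))
```

In this study, a "medium" or "high" result would trigger the DXA referral described above.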
Subject Questionnaire
A structured, 30-item interviewer-administered questionnaire was used by the investigator onsite to assess potential factors influencing serum 25(OH)D levels, including general health, fall or fracture history, sun exposure (including time spent outside with and without sun protection and the body parts exposed), physical activity, and daily, weekly, or monthly consumption of vitamin D-containing foods, such as eggs and fish. A sun exposure index was calculated as the reported number of hours per week spent outside without sun protection in the previous month multiplied by the percentage of the body exposed to sunlight (9% for the face, 1% for each hand, 9% for each arm, and 18% for each leg). Sun exposure of the chest, back, and abdomen was not included (21).
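The sun exposure index computation described above can be sketched as follows (function and variable names are ours; only the body-part percentages and the hours × exposure product come from the text):

```python
# Per-part percentages of body surface, as given in the questionnaire description
BODY_PART_PCT = {"face": 9.0, "hand": 1.0, "arm": 9.0, "leg": 18.0}

def sun_exposure_index(hours_per_week: float, exposed_parts: dict) -> float:
    """Hours/week outside without sun protection times the fraction of body exposed.

    `exposed_parts` maps a body part to how many of it were exposed,
    e.g. {"face": 1, "hand": 2} for the face and both hands.
    """
    pct = sum(BODY_PART_PCT[part] * count for part, count in exposed_parts.items())
    return hours_per_week * pct / 100.0

# 10 h/week with face, both hands, and both arms exposed: (9 + 2 + 18)% = 29%
idx = sun_exposure_index(10, {"face": 1, "hand": 2, "arm": 2})
```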
Laboratory Measurement
A 10-ml fasting blood sample was collected from each subject and sent to a central laboratory (Quest Diagnostics, Shanghai, China). Serum 25(OH)D was measured using an API4000 (SCIEX) LC-MS/MS system, which quantified concentrations of 25(OH)D2 and 25(OH)D3 for the determination of total 25(OH)D. The limits of quantification (LOQs) for 25(OH)D2 and 25(OH)D3 were 2 and 3 ng/ml, respectively (LOQ for total 25(OH)D = 3 ng/ml). The concentration of iPTH was measured using chemiluminescence (DPC Immulite 2000, SIEMENS) with an LOQ of 0.3 pg/mL. An additional 5-ml fasting blood sample was collected from a subset of subjects enrolled from 3 regions (Beijing, Shanghai, and Hunan) during the winter to measure bone turnover markers, including serum C-terminal telopeptide of type I collagen (β-CTX) and serum N-terminal propeptide of type I procollagen (P1NP), using electrochemiluminescence (Roche E601 platform).
BMD Measurement
Bone mineral density (BMD) of the total hip, the lumbar spine, and the femoral neck was measured by dual-energy X-ray absorptiometry (DXA) with either a Hologic or a GE Lunar machine. Calibration with the manufacturer's phantom, performed for routine clinical practice at each testing site, was accepted.
Objectives and Outcome Measures
The primary objective was to describe the distribution of serum 25(OH)D levels among postmenopausal women aged 55 years and older in different geographic regions of China overall and by the risk of fracture (low, medium or high) as assessed by OSTA. Secondary objectives were to examine the risk factors for vitamin D deficiency and to estimate the relationship between serum 25(OH)D and iPTH levels. An exploratory objective was set to estimate the correlation of serum 25(OH)D with bone turnover markers β-CTX and P1NP levels.
Sample Size and Statistical Analysis
Based on the literature (13)(14)(15)(16)(17)(18), a prevalence estimate of 50% for vitamin D deficiency was used for sample size consideration. Assuming a 5% precision as expressed by a 95% confidence interval (CI) of the point estimate and a 10% discontinuation rate, a sample size of 424 postmenopausal women was needed. In order to evaluate residential (rural vs. urban) and seasonal (summer vs. winter) differences, the above sample size was quadrupled to a total of ∼1,680.
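The stated sample size can be reconstructed with the standard normal-approximation formula for estimating a proportion. The rounding convention below is an assumption (the paper does not report one), but it reproduces the 424 figure:

```python
import math

def proportion_sample_size(p: float, d: float, dropout: float, z: float = 1.96) -> int:
    """n0 = z^2 * p * (1 - p) / d^2 for estimating a proportion p with
    half-width d at ~95% confidence, then inflated for the expected dropout."""
    n0 = math.ceil(z ** 2 * p * (1 - p) / d ** 2)  # 384.16 -> 385
    return math.ceil(n0 * (1 + dropout))

# 50% assumed prevalence, 5% precision, 10% discontinuation
n = proportion_sample_size(p=0.50, d=0.05, dropout=0.10)
```

Quadrupling this base figure to cover the residence-by-season strata gives the target of roughly 1,680 women.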
Descriptive statistics were used to address the primary objective. The distribution of vitamin D levels was analyzed and presented both categorically, as the proportion and corresponding 95%CI of vitamin D inadequacy as defined above, and numerically, as the mean ± SD (SE) of serum vitamin D levels, among all subjects whose serum 25(OH)D was measured. The primary analysis was also performed in subjects stratified by fracture risk level as assessed by OSTA, and grouped by region, residence, and season. A post-hoc chi-square test was used to compare the prevalence of low vitamin D status among subgroups whenever applicable. Univariate and multivariate logistic regression analyses were applied to identify risk factors for vitamin D deficiency, with results presented as odds ratios (OR) with corresponding 95% CIs and P-values. In the multivariate logistic regression, variables including latitude, travel to a sunny area, and walking outside were excluded due to collinearity with sunlight exposure. All variables selected from the univariate analysis were retained in the multivariate model and presented as adjusted ORs with corresponding 95%CIs and P-values. For the relationship between 25(OH)D and iPTH, Pearson's correlation coefficient was calculated. Levels of iPTH were plotted against serum 25(OH)D to assess any relationship between the two values, and a quadratic fit model with plateau was used to evaluate the association between serum 25(OH)D and PTH levels (18). In addition, the association between serum 25(OH)D and the bone turnover markers β-CTX and P1NP was explored using univariate linear regression analysis, with clinical and biochemical variables including age, BMI, iPTH, and years since menopause fit into the model. Analysis of variance (ANOVA) was used to compare the mean levels of β-CTX and P1NP in subgroups categorized by serum 25(OH)D levels.
For demographic and clinical data, descriptive statistics were used to display the results. Chi-square or t-tests were used to test statistical significance for categorical or continuous variables, wherever appropriate. There was no imputation for missing data in terms of 25(OH)D or other variables. All statistical analyses were performed using SAS 9.3 (SAS Institute, Cary, NC, USA), and a P-value of 0.05 was considered statistically significant unless otherwise specified.
RESULTS

Subject Enrollment and Characteristics
The study recruited a total of 1,713 women from 7 regions in China, among which 25 women had screening failure and therefore, 1,688 subjects were included in the study analysis (Figure 1). Of those eligible, 1,684 postmenopausal women had 25(OH)D levels measured whereas four women did not complete the blood sampling procedure.
Serum 25(OH)D Distribution and Vitamin D Inadequacy
Among all subjects with quantified 25(OH)D levels, the mean (SD) serum 25(OH)D was 18.0 (8.4) ng/ml. Mean serum 25(OH)D levels appeared comparable among geographic regions but were significantly higher in women enrolled in the summer compared with those enrolled in the winter, within each region and overall (all P < 0.0001). A numerically higher mean serum 25(OH)D value was observed among rural residents compared to urban residents (mean difference ∼1.3 ng/mL), which remained consistent across the regions, except for the Northwest and Southwest regions where higher mean serum 25(OH)D levels were observed in urban residents. Data on mean serum 25(OH)D levels by season and by residential region are displayed in Supplementary Table 1 and Supplementary Figure 2, respectively. Table 2 presents the prevalence of vitamin D inadequacy as defined by different cutoff values, overall and in subjects grouped by fracture risk assessed by OSTA and by osteoporosis measured by DXA. Vitamin D insufficiency, defined as 25(OH)D <30 ng/mL, was found in 91.2% (1,535/1,684, 95%CI: 89.7, 92.5%) of all subjects.

FIGURE 1 | Study flowchart for enrolment. Four subjects had 25(OH)D < LLoQ and were not included in the analysis for the continuous variable. Subjects who had missing femoral neck DXA may have had BMD measured at another anatomical site. Study completion was deemed as subjects who had all study procedures as per protocol. Subjects who failed to return for the remaining study procedures were deemed lost to follow-up. All subjects with non-missing data were included in the corresponding analysis.
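The paper does not state which confidence-interval method was used; for reference, a plain normal-approximation (Wald) interval for the insufficiency estimate lands within about 0.1 percentage point of the reported (89.7, 92.5):

```python
import math

def wald_ci_pct(k: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation (Wald) 95% CI for k/n, in percent."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return 100 * p, 100 * (p - half), 100 * (p + half)

# Vitamin D insufficiency (<30 ng/mL): 1,535 of 1,684 subjects
pct, lo, hi = wald_ci_pct(1535, 1684)
```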
Frontiers in Endocrinology | www.frontiersin.org

Table notes: Data are expressed as mean (SD) and number (%). † There are 62 subjects with missing femoral neck BMD values. ∧ There are 25 (1.5%), 9 (0.5%), and 7 (0.4%) subjects who reported fractures at the spine, the pelvis, and the hip, respectively; 41 (2.4%) reported unspecified leg fractures. ¶ Includes calcium and calcium-containing supplements; subjects who reported the use of more than one type of calcium supplement were only counted once. § Includes vitamin D and vitamin D-containing supplements; subjects who reported the use of more than one type of vitamin D supplement were only counted once. ** Calculated as the number of hours per week spent outside without sun protection multiplied by the percentage of body parts exposed to sunlight (9% for the face, 1% for each hand, 9% for each arm, and 18% for each leg).

The prevalence of vitamin D insufficiency or deficiency was significantly lower in women who enrolled in the summer, compared to those who enrolled in the winter (all P < 0.01, 41.3 vs. 84.7% for serum 25(OH)D <20 ng/mL, Figure 2A). A statistically significant difference in the prevalence of serum 25(OH)D inadequacy defined by different 25(OH)D cutoffs was seen between rural and urban dwellers enrolled in the summer (all P < 0.01, 46.8 vs. 35.7% for serum 25(OH)D <20 ng/mL, Figure 2B). Cumulative distribution curves of serum 25(OH)D levels by region and season (summer and winter) are shown in Supplementary Figure 3. Regional variation in the prevalence of vitamin D inadequacy was more distinct in the summer than in the winter.
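The post-hoc chi-square comparisons can be illustrated with the urban vs. rural deficiency contrast reported in the abstract (64.9 vs. 57.7%, P = 0.002). The counts below are a reconstruction assuming the stated even urban/rural allocation (842 women per group); they sum to the reported 1,033 deficient women and reproduce P = 0.002:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], with its df=1 p-value via the half-normal tail."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Reconstructed counts (assumption): 547/842 urban and 486/842 rural deficient
chi2, p = chi2_2x2(547, 842 - 547, 486, 842 - 486)
```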
Risk Factors for Vitamin D Deficiency
The subject's demographic, lifestyle and clinical characteristics were analyzed individually to explore factors associated with vitamin D deficiency defined as serum 25(OH)D <20 ng/ml by using univariate logistic regression (Supplementary Table 2). Overweight, region (Northwest, North, and Southwest), winter season, no exercise, no milk products or fish consumption, no vitamin D supplement, and no or low sunlight exposure were individual factors significantly associated with an increased risk of vitamin D deficiency (ORs between 1.28 and 7.90, all P < 0.05). Notably, a high fracture risk assessed by OSTA or densitometric osteoporosis was not associated with vitamin D deficiency.
A multivariate logistic regression model was established to explore the likelihood of vitamin D deficiency, presented as adjusted ORs, by accommodating all variables analyzed in the univariate logistic regression (Table 3). Variables with collinearity, including latitude, travel to a sunny area, and walking outside, were excluded from the model analysis, as sunlight exposure was retained. Rural dwellers were less likely to have vitamin D deficiency (adjusted OR: 0.59, 95%CI: 0.40, 0.86, P < 0.01); women enrolled in the winter season had a 7.62-fold likelihood of having vitamin D deficiency compared with those enrolled in the summer season (adjusted OR: 7.62, 95%CI: 5.52, 10.54, P < 0.0001); women with no vitamin D use had 1.75-fold higher odds of vitamin D deficiency (adjusted OR: 1.75, 95%CI: 1.08, 2.85, P = 0.02); women who reported fair/poor health or no parental fractures had a relatively lower risk of vitamin D deficiency (P < 0.01). Sunlight exposure and fracture risk level by OSTA appeared to have no statistically significant association with vitamin D deficiency when adjusted for other variables in the model.

Figure 3A shows the changes of iPTH levels over serum 25(OH)D intervals in 1,679 subjects with quantified iPTH and 25(OH)D. A significant inverse correlation between serum iPTH and 25(OH)D was observed (r = −0.21, p < 0.01). The relationship between these two parameters was further analyzed using a quadratic fit with plateau model (Figure 3B). Serum iPTH levels reached a plateau at a serum 25(OH)D level of 16.78 ng/ml, suggesting that the observed inverse relationship occurred below this cutoff value and that iPTH remained stable for serum 25(OH)D above the cutoff.
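The quadratic-fit-with-plateau idea can be sketched on synthetic data. This is our illustration of the model class, not the authors' SAS implementation; the knot grid, noise level, and parameter values are invented:

```python
import random

def fit_quadratic_plateau(xs, ys, knots):
    """Fit y = plateau + c * max(0, knot - x)^2 by grid search over the knot.

    For a fixed knot the model is linear in (plateau, c), so ordinary
    least squares has a closed-form solution."""
    best = None
    for knot in knots:
        z = [max(0.0, knot - x) ** 2 for x in xs]
        n, sz, sy = len(xs), sum(z), sum(ys)
        szz = sum(v * v for v in z)
        szy = sum(v * y for v, y in zip(z, ys))
        den = n * szz - sz * sz
        if den == 0:
            continue
        c = (n * szy - sz * sy) / den
        plateau = (sy - c * sz) / n
        sse = sum((y - plateau - c * v) ** 2 for y, v in zip(ys, z))
        if best is None or sse < best[0]:
            best = (sse, knot, plateau, c)
    return best[1], best[2], best[3]

# Synthetic data: an iPTH-like outcome that plateaus at 40 above a knot of 16.8
random.seed(0)
xs = [random.uniform(3, 40) for _ in range(500)]
ys = [40.0 + 0.15 * max(0.0, 16.8 - x) ** 2 + random.gauss(0, 1) for x in xs]
knot, plateau, c = fit_quadratic_plateau(xs, ys, [k / 10 for k in range(100, 251)])
```

With data of this shape, the grid search recovers a knot near the true change point, mirroring how the 16.78 ng/mL breakpoint was estimated from the observed iPTH curve.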
DISCUSSION
Findings from this multicenter, cross-sectional study in China suggested a considerable prevalence of vitamin D inadequacy among postmenopausal women from different geographic regions across the country. Although fracture risk level assessed by OSTA did not significantly impact vitamin D status, urban dwelling and the winter season contributed to lower 25(OH)D levels and were the associated risk factors for vitamin D deficiency among the analyzed population. In addition, our analyses did not show a curvilinear relationship between serum iPTH and serum 25(OH)D.

Table notes: Subjects with "Unknown" in any category were excluded from analysis. ** Reported consumption of fish at least once in the past month. ++ Excluded subjects using active analogs (alfacalcidol and calcitriol). *** Calculated as the number of hours per week spent outside without sun protection multiplied by the percentage of body parts exposed to sunlight (9% for the face, 1% for each hand, 9% for each arm, and 18% for each leg). The sun exposure index was categorized into tertiles.
This study is among the few epidemiological studies to investigate serum 25(OH)D status using the standard LC-MS/MS assay in a large, geographically diverse population in China and Southeast Asia. The study compared serum 25(OH)D levels and prevalence rates across various scenarios, including season, residence location, and geographic region. The study enrolled a well-defined population of non-institutionalized postmenopausal women and obtained potential participant lists from the community. All blood samples were taken at the coordinating sites and sent to a central laboratory for measurement. Therefore, inter-laboratory variation was diminished, comparison with external study results is feasible, and underestimation of 25(OH)D levels may be lessened (23). Nevertheless, this study has limitations. The interviewer-administered questionnaire was subject to recall bias in self-reported events, including fracture history and use of health supplements. DXA quality control was not done centrally, which may have caused measurement variations in BMD. Owing to the cross-sectional design, only a snapshot of disease status at a certain time is given; incident hypovitaminosis D over time cannot be assessed, and causality cannot be inferred between suboptimal vitamin D and its risk factors. The investigators recruited community and village women; however, a randomized sampling method was not performed to minimize selection bias. The study therefore cannot be considered strictly population-based research, and findings may not be generalizable to all postmenopausal women in China.
The prevalence of low serum 25(OH)D varies by region, population, and the cutoff value used for hypovitaminosis D. Worldwide investigations (24) suggested the prevalence of vitamin D deficiency [25(OH)D <20 ng/mL] was 8-57% in Caucasians and 2-70% in Southeast Asians across different age groups. Higher prevalence rates of hypovitaminosis D were seen in children, pregnant women, and elderly people including postmenopausal women (24), or if 25(OH)D <30 ng/mL is used as the cutoff (1). Among postmenopausal Caucasians, a study reported that hypovitaminosis D [defined as 25(OH)D <30 and <20 ng/mL] occurred in 52 and 18%, respectively, of North American elderly women who received concomitant bisphosphonates (21); a mean serum 25(OH)D of 30.4 ng/ml was observed. Numerically higher prevalence rates were found in Europe. In a large study of 8,532 postmenopausal subjects, the prevalence of 25(OH)D inadequacy (defined as 25(OH)D <50 and <80 nmol/L) was 32.1 and 79.6%, respectively (25), and a mean serum 25(OH)D level of 27.2 ng/mL was observed. In a systematic review of 36 published studies (26), the prevalence of vitamin D deficiency [serum 25(OH)D <20 ng/ml] ranged from 1.6 to 86% among community-living and institutionalized postmenopausal women, and a higher prevalence was seen in women with osteoporosis (12.5-76%) or with a history of fracture (50-70%) when lower cutoff values were used to define hypovitaminosis D. Our study, however, suggested a considerably higher prevalence of suboptimal vitamin D among Chinese postmenopausal women [25(OH)D <30 ng/mL: 91.2%; 25(OH)D <20 ng/mL: 61.3%], which is supported by a few published studies from East Asia (89.7 and 72.1% in Beijing and central south China, respectively, and 65.0% in Korea) (13,16,27), and South Asia (>70% across different age groups in India) (28). It is difficult to directly compare attributable socioeconomic and lifestyle factors among populations.
Differences in hypovitaminosis D between Caucasians and Chinese may also be explained at the study level, including recruitment strategy, population sampling, or assay method. As only a low proportion (9.5%) of women in our study took vitamin D supplements, frequent hypovitaminosis D is unsurprising. Based on observational studies and randomized trials (1,29), a serum 25(OH)D level of 20 ng/mL was shown to protect most people against bone-related events such as fractures and falls. Screening for vitamin D deficiency and prescribing vitamin D supplements need more clinical attention in Chinese postmenopausal women.
Vitamin D is critical for bone mineralization, yet pooled analyses of observational studies have so far provided no robust evidence of skeletal benefits associated with 25(OH)D levels (30,31). Moreover, the association of vitamin D with BMD and osteoporosis remains controversial. Several studies have shown that 25(OH)D is positively correlated with BMD (32-35), particularly when 25(OH)D levels are low (36), whereas no such association has been found in other studies (37)(38)(39). In our study, vitamin D inadequacy was not associated with higher fracture risk as assessed by OSTA or with densitometric osteoporosis. Further, there was no statistically significant difference in bone turnover markers, including β-CTX and P1NP, among a subset of subjects from three geographic regions grouped by 25(OH)D interval. These results must be interpreted with caution, as very few women received bisphosphonates and only a low proportion took vitamin D or calcium supplements. Another interesting finding is the lack of correlation between vitamin D insufficiency and fragility fracture prevalence. The subjects' self-reporting of fragility fractures was prone to bias and thus limited the precision of the analysis of the association between these two morbidities. The prevalence of fragility fractures was low and major fragility fractures were very few, compared with previous reports (40). Results might not be generalizable even if this small cohort had more vitamin D supplementation or markedly lower vitamin D levels than most women in the study. Sufficient evidence on risk factors for fragility fractures in Chinese postmenopausal women has yet to be generated. Our study focused on risk factors for hypovitaminosis D, but some results might be interpreted as linked to the lower prevalence of major fragility fractures: we recruited women who were relatively younger and had higher BMI, a lower proportion of high OSTA risk or osteoporosis, and infrequent falls.
These clinically meaningful factors are associated with fragility fractures, as supported by treatment guidelines (41,42). Our study did not analyze the association between fragility fractures and fracture risk assessment (BMD or OSTA); however, the results suggested a parallel trend among these clinical parameters: a lower proportion of women with densitometric osteoporosis or high OSTA risk was consistent with fewer fragility fractures. These findings indicate the primary role of BMD or OSTA in screening patients with high fracture risk among Chinese postmenopausal women, although further evidence is needed.
Vitamin D levels may be affected by a number of factors, including age, cultural behavior, latitude and season, and outdoor activity (24). Our subgroup analyses suggested that the prevalence of hypovitaminosis D was significantly higher in women enrolled in the winter and in women living in urban communities, respectively. Adjusted for all variables in the multivariate logistic model, urban dwelling, winter season, parental history of hip fracture, no consumption of eggs with yolks, and lack of vitamin D supplementation were found to have a significant association with vitamin D deficiency among postmenopausal women. Smoking was not included as a lifestyle factor in the analysis because a very low proportion of subjects smoked. The influence of occupation on vitamin D levels was captured by "engaging in strenuous exercise or farm work" and the sun exposure index as potential risk factors in the multivariate logistic model. As expected, season and residence location were major factors affecting vitamin D status. Latitude, however, was not a substantial influence: no prevalence gradient of vitamin D deficiency by latitude was observed, although rates varied by region (ranging from 50 to 71.3%). Similarly, one recent population-based study on 33 Chinese healthy adults suggested vitamin D deficiency was independent of latitude changes (43). These results suggest that factors other than distance to the equator may affect vitamin D status among Chinese postmenopausal women.
Our data indicate that the relationship between iPTH and 25(OH)D is not curvilinear, in line with previous studies. An inverse correlation between these two parameters was seen, but iPTH levels remained stable in women who had 25(OH)D at 16.78 ng/mL or above. Historically, the normal lower limit of 25(OH)D was set at 30 ng/mL because PTH levels rise as 25(OH)D falls below this threshold, along with optimal calcium absorption (1,44). In addition, the rationale for such a threshold extended to extra-skeletal benefits, including cancer prevention (45). Recent osteoporosis guidelines have also suggested the optimum level of 25(OH)D is 30 ng/mL or above (41,42). On the contrary, the IOM indicated that 20 ng/mL is appropriate for at least 97.5% of the population (16 ng/mL for 50% of the population) to maintain bone health as regulated by calcium and PTH status (46). A recent trial demonstrated that maintaining vitamin D above 30 ng/mL was ineffective for cancer prevention in women >55 years of age (47). In the present study, the 25(OH)D threshold for such a rationale was 16.78 ng/mL, although calcium absorption could not be measured. A few studies in Chinese (48) and African (49) populations revealed very similar vitamin D levels (17-19 ng/mL) for PTH stability. The existing literature thus raises concerns about the current cut-off for vitamin D deficiency among Chinese postmenopausal women. Using 25(OH)D <20 ng/mL, a majority of women were vitamin D deficient, and the use of vitamin D supplements was far from satisfactory, as in other countries or regions. However, the proportion with vitamin D deficiency was dramatically lower when 25(OH)D <15 ng/mL was used for the definition (61.3 vs. 37.4%). Although there is no consensus, there may be a reason to lower the 25(OH)D threshold in the clinical setting.
Future endeavors can be made to confirm optimal vitamin D status managed by diet intake and nutrient supplementation in relation to skeletal outcomes for this indicated population.
In conclusion, the prevalence of vitamin D inadequacy was remarkable among Chinese postmenopausal women and was independent of fracture risk assessed by OSTA or osteoporosis suggested by DXA. Winter season and urban residence, but not latitude, were significantly associated with a higher likelihood of vitamin D deficiency. Optimal vitamin D status for iPTH and bone-related outcomes merits further investigation in this population.
AUTHOR CONTRIBUTIONS
ZX, WX, ST, JG, JC, SP, TW, and EL conceived and designed research. ZX, WX, WW, ZZ, CL, LW, TW, and EL collected data and conducted research. ZX, WX, ZZ, CL, JC, SP, and TW analyzed and interpreted data. ZX, WW, ST, SP, and TW wrote the initial paper. ZX, WX, CL, LW, JG, JC, SP, HY, and EL revised the paper. EL had primary responsibility for final content. All authors read and approved the final manuscript.
ACKNOWLEDGMENTS
The authors would like to thank all subjects, investigators, and study personnel who participated in this study. The investigators who contributed to the study were: Dr. Houde Zhou and Dr. Zhifeng Sheng. Study personnel who contributed to the study included: Xingshu Zhu and Tengfei Man. We would also like to thank the National Natural Science Foundation of China (81072219, 81272973, 81471055, and 81672646) for the support.
Reliability of the g factor over time in Italian INVALSI data (2010-2022): What can achievement-g tell us about the Flynn effect?
Generational intelligence test score gains over large parts of the 20th century have been observed to be negatively associated with psychometric g. Recent reports about changes in the cross-temporal IQ trajectory suggest that ability differentiation may be responsible for both changes in g as well as increasingly (sub)domain specific and inconsistent trajectories. Schooling is considered to be a main candidate cause for the Flynn effect, which suggests that school achievement might be expected to show similar cross-temporal developments. In the present study, we investigated evidence for cross-temporal changes in achievement-based g in a formal large-scale student assessment in Italy (i.e., the INVALSI assessment; N = 1,900,000). Based on data of four school grades (i.e., grades 2, 5, 8, and 10) over 13 years (2010-2022), we observed little evidence for changes in achievement g in general. However, cross-temporal trajectories were differentiated according to school grade, indicating cross-temporal g decreases for lower grade students whilst changes for higher grade students were positive. These findings may be interpreted as tentative evidence for age-dependent achievement-g differentiation. The presently observed achievement g trajectory appears to be consistent with recently observed evidence for a potential stagnation or reversal of cognitive test score gains.
Introduction
Generational IQ test score changes in the general population (i.e., the Flynn effect) were for the first time systematically documented in 1984 (Flynn, 1984). Despite a general positive trend on a global scale over most of the 20th century, these changes have been observed to be differentiated according to domain (i.e., fluid IQ gains are typically larger than crystallized gains) and yielded global increases of about 30, 35, and 25 IQ points in full-scale, fluid, and crystallized intelligence, respectively, from 1909 to 2013 (Pietschnig & Voracek, 2015).
However, performance change trajectories were not linear over time and varied in strength across nations (Pietschnig & Voracek, 2015). Flynn effect patterns observed in more recent years have become less consistent: whilst some countries showed Flynn effect decelerations (e.g., USA; Rindermann & Thompson, 2013), others showed stagnation or a reversal (e.g., Denmark; Dutton et al., 2016). Notwithstanding, most of the available evidence on the Flynn effect so far has been framed in terms of Cattell's distinction between fluid and crystallized IQ.
However, modern CHC model-based investigations (i.e., based on the Cattell-Horn-Carroll intelligence model; Schneider & McGrew, 2018) indicate that test score changes are differentiated according to more fine-grained stratum II CHC-based abilities (Lazaridis et al., 2022). Whilst some stratum II domains showed a positive Flynn effect (e.g., comprehension knowledge), others showed negative (e.g., working memory capacity) or ambiguous ones (e.g., fluid reasoning), whilst further domains were unaffected (e.g., processing speed).
However, one of the most striking and consistent results is the negative association of the Flynn effect with psychometric g (Pietschnig & Voracek, 2015). This seems paradoxical because predominantly cross-temporally increasing IQ (sub-)test scores appear to be incompatible with negative g associations. This can be thought of in terms of a decathlon. If performances on all ten events of the decathlon would be factor analyzed, a general factor (i.e., athletics-g) would emerge, because the performances on the subordinate factors (e.g., hurdling, shot putting) are intercorrelated.
If a decathlete were to train hurdling intensively, they would improve in this discipline, thus getting more points for their hurdling performance and their overall score. However, the performance in other subdisciplines would not increase, thus leading to a weakening of the positive athletics manifold. If we think about intelligence in an analogous manner, but with training taking place between generations instead of within individuals, increasing test scores that are negatively associated with psychometric g could be plausibly explained.
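The decathlon analogy above can be made concrete with a small simulation (all numbers are invented for illustration): subtest scores that share a common cause produce a strong first factor, and selectively "training" one subtest adds variance unrelated to g, which weakens the positive manifold.

```python
# Toy illustration of the decathlon analogy: correlated subtest scores yield a
# strong general factor; boosting one subtest with independent "training"
# variance weakens the positive manifold. All data are simulated.
import numpy as np

rng = np.random.default_rng(42)
n, k = 5000, 6

g = rng.normal(size=(n, 1))                        # latent general ability
scores = 0.8 * g + 0.6 * rng.normal(size=(n, k))   # six correlated subtests

def first_factor_share(x):
    """Share of total variance captured by the first principal component."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))
    return eigvals[-1] / eigvals.sum()

before = first_factor_share(scores)

# "Train" subtest 0 only: add variance unrelated to g.
trained = scores.copy()
trained[:, 0] += 1.5 * rng.normal(size=n)

after = first_factor_share(trained)
print(f"variance explained by first factor: before={before:.2f}, after={after:.2f}")
```

The first factor's share of variance drops after the selective boost, mirroring how (sub)domain-specific gains can coexist with a weakening g.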
This has implications for population abilities, which may have undergone changes in terms of increasing specialization, thus conceivably manifesting itself in increasing ability differentiation. If this is the case, one should be able to observe a cross-temporally decreasing strength of the positive manifold. This idea is consistent with recent observations of increasingly differentiated Flynn effect patterns (Lazaridis et al., 2022). However, positive manifold strength changes have so far not been formally investigated.
Changes in the positive manifold are informative in regard to causes of the Flynn effect, regardless of the sign or strength of cross-temporal changes at a given time, because negative associations between g and the Flynn effect are unaffected by the direction and strength of changes. Understanding g-based changes is important, because cross-temporal decreases in g would suggest that (any) type of IQ changes are essentially shaped by increasing ability differentiation.
Here, we present the first formal assessment of cross-temporal changes in achievement g as a proxy for psychometric g. This is reasonable, because school achievement is a good proxy for intelligence (e.g., Pokropek et al., 2021). We assess cross-temporal changes in g in population-representative samples of 2nd, 5th, 8th, and 10th graders (N = 1,900,000+) from 2010 to 2022, using data from a large-scale formal educational assessment in Italy (INVALSI; Istituto Nazionale per la VALutazione del Sistema di Istruzione).
Participants and materials
We analyzed data from the annual INVALSI assessments on population-representative Italian 2nd, 5th, 8th, and 10th grade student samples in mathematics and reading from 2010 to 2022. The INVALSI tasks are similar to PISA assessments and consist of 30-45 items, depending on survey year and grade. Test administration takes about two hours. Here, we analyzed data from 1,951,334 individual assessments (median n = 27,153 per year and grade) for math and reading (math subscales: "data and predictions", "numbers", "relations and functions", "space and figures", reading subscales: "grammar", "reading comprehension"; raw data is available upon request from the INVALSI institute).
Data analysis
To assess the meaningfulness of our data analyses, we first calculated the mean accuracies divided by sub-areas/types of items. Then, we performed confirmatory factor analyses to establish that scores clustered according to the conceptually assumed reading and math areas. We then fitted a bifactor model to account for the expected data structure and facilitate g-factor-related examinations. Two (orthogonal) factors emerged, representing the g factor and a specific reading factor modeling the two reading sub-areas (Fig. 1).
Bifactor model fitted across years and grades.
This approach allowed us to extract within-year i) model fit indices (RMSEA, SRMR, CFI, NNFI), ii) McDonald's omega (ω) reliability indices for both latent factors, and iii) average explained variances (R²). Subsequently, we used generalized linear models to assess the effects of year, grade, and their interaction on both ω and R². We fitted a series of alternative models with all combinations of predictors on both indices of interest to assess model relevance using the Widely Applicable Information Criterion (WAIC; see Supplement S1 for a detailed description of our data analysis approach as well as specifics about the R packages used).
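As a sketch of how the two indices relate to the bifactor solution, McDonald's ω for g and the average variance explained by g can be computed directly from standardized loadings. The loadings below are hypothetical; the actual estimates were obtained with lavaan/semTools in R.

```python
# Minimal sketch (hypothetical standardized loadings) of the indices extracted
# from the bifactor model: McDonald's omega for the g factor and the average
# variance it explains across indicators.
import numpy as np

# Six indicators: four math sub-areas load on g only; two reading sub-areas
# load on g and on a specific (orthogonal) reading factor.
g_loadings = np.array([0.70, 0.65, 0.72, 0.68, 0.60, 0.62])
s_loadings = np.array([0.00, 0.00, 0.00, 0.00, 0.40, 0.40])  # reading factor

uniqueness = 1.0 - g_loadings**2 - s_loadings**2

# Omega for g: variance due to g over total composite variance.
total_var = g_loadings.sum()**2 + s_loadings.sum()**2 + uniqueness.sum()
omega_g = g_loadings.sum()**2 / total_var

# Average variance explained by g across indicators (an R²-type index).
r2_g = np.mean(g_loadings**2)
print(f"omega_g = {omega_g:.2f}, mean R^2(g) = {r2_g:.2f}")
```

A weakening positive manifold would show up as smaller g loadings, lowering both ω and the R²-type index in tandem.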
Secular test score changes are not reported (such changes cannot be meaningfully reported for our data, because INVALSI data are restandardized within each assessment, thus rendering between-cohort comparisons uninformative for standard scores; due to potential changes of between-cohort item difficulties, raw-score-based changes are uninformative as well).
Results
A total of 39 bifactor models were fitted: 12 for grade 2, 12 for grade 5, 8 for grade 8, and 7 for grade 10 (data for year 2010 and since 2018 were unavailable for some grades; Fig. 2). R² values were largely consistent, ranging between 0.40 and 0.60 across years, but with a slight tendency to increase, at least in the later grades (8 and 10).
Interestingly, the sign of the main effect for year reversed when grade-by-year interactions were included, thus indicating decreases in achievement g over time, although the main effect did not reach significance. Results based on ω were virtually identical to the R²-based analyses, indicating the best fit for the model including the year-by-grade interaction (Table 1; Fig. 3).
Figure 2
Model variance explained by achievement g as a function of grade and year. Shaded areas represent 95% Bayesian credible bands.
Figure 3
Reliability (ω) of achievement g as a function of grade and year. Shaded areas represent 95% Bayesian credible bands.
Discussion
We observed no substantial evidence for a cross-temporal weakening of g in INVALSI-based achievement. Both explained variances and omega-based reliabilities showed non-significant decreases over time in our best-fitting models. Significant interactions between school grade and year indicated differentiated effects of elapsing time on the strength of the positive manifold.
Although the main effects of year on g were (non-significantly) negative for both R²- and ω-based analyses, the significant interaction with grade suggested cross-temporal trajectory differences. In this vein, results for younger participants (i.e., those in lower grades) showed evidence for a decreasing strength of the positive manifold, whereas older students (i.e., those in higher grades) did not show decreases or even showed increases. Conceivably, these findings may be attributed to the lagged structure of the data.
Specifically, when shifting parameters of students in higher grades towards those years when they had been in second grade (i.e., thus artificially assuming identical ages of participants when they had been the same age as the youngest cohorts), a curvilinear pattern of initially increasing, then stagnating, and subsequently decreasing values would emerge.
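The re-alignment described above amounts to shifting each grade's assessment year back by the grade difference, so that cohorts are compared at the year in which they were in second grade; a minimal sketch:

```python
# Sketch of the cohort re-alignment described above: shifting each grade's
# assessment year back to the year when that cohort was in grade 2, so that
# trajectories are compared on (approximately) the same cohorts.
def cohort_year(assessment_year: int, grade: int) -> int:
    """Year in which the assessed cohort attended grade 2 (one grade per year)."""
    return assessment_year - (grade - 2)

print(cohort_year(2016, 8))   # grade-8 students assessed in 2016 were in grade 2 in 2010
print(cohort_year(2022, 10))  # grade-10 students assessed in 2022 were in grade 2 in 2014
```

Plotting the grade-specific trajectories against this cohort year, instead of the assessment year, is what produces the initially increasing, then stagnating, then decreasing pattern.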
Such a pattern resembles the inverse u-shape of the Flynn effect in spatial task performance, as observed previously in Austria in a similar time frame (Pietschnig & Gittler, 2015). Similar curvilinear patterns have been found in Norway (Bratsberg & Rogeberg, 2018) and are consistent with findings from other countries (Dutton et al., 2016).
Curvilinear g-based trajectories may mean that changed Flynn effect patterns are rooted in increasing ability differentiation across cohorts. Conceivably, the presently observed interaction represents a manifestation of this very curvilinearity. Increases in ability differentiation are consistent with recent reports of distinct, non-monotonous Flynn effects in CHC model-based stratum II domains, which differed in terms of their signs (Lazaridis et al., 2022). Cross-temporal decreases in g are necessary consequences of diverging cross-temporal population IQ (sub-)domain trajectories.
However, it cannot be ruled out that the observed interaction of grade and year on g represents a consequence of a comparatively late onset of changes in achievement trajectories. Specifically, if g-related cohort-based changes were only to emerge in adolescence instead of childhood, then only higher-grade students would be expected to show cross-temporal change trajectories. This idea is consistent with evidence of no IQ test performance changes in preschoolers. Conceivably, only comparatively old students (e.g., 10th graders) are subject to g-related changes, which drives the presently observed interaction and indicates the cross-temporal stagnation or even increase of g. Notwithstanding, even if this were the case, the negative sign of the main effect for our best-fitting model contrasts with this idea.
Limitations
First, it is possible that the cross-temporal g changes may be a consequence of periodical revisions of the INVALSI assessment (e.g., in terms of administration mode; Cornoldi et al., 2013). Interestingly, however, the interpretation of g-related changes does not depend on changes of scores.
Conclusions
In all, we present here evidence for cross-temporal stability of achievement g in a large-scale assessment of Italian students. Changes were differentiated according to school grade, indicating declines for younger students but potential increases for older ones. These school-achievement-based findings appear to be consistent with recently observed evidence for a potential stagnation or reversal of cognitive test score gains.
Data analysis
For each child in each year and grade, we calculated the mean accuracies divided by sub-areas/types of items. Subsequently, confirmatory factor analyses were performed.
Although we focused on the g factor in terms of our research question, we had to assess whether item types were clustered according to the reading and math areas. Both to account for this data structure and to facilitate the calculation of the indices of interest (i.e., reliability of the g factor and the average amount of observed performance variance explained by the g factor), we fitted a bifactor model. The model featured two (orthogonal) factors: the g factor and a specific reading factor that modelled the residual variance of the two reading sub-areas (because there were only two sub-areas for reading, their loadings were constrained to equality to facilitate convergence). Following the suggestions by Eid et al. (2017), we concluded that there was no need to add a specific "math" factor to model the residual variance of the math sub-areas: this choice reflects the idea that math ability is a core aspect of general intelligence, and it facilitates model convergence without losing any fit with the data (see the excellent fit indices in the results). Figure 1 illustrates the bifactor model fitted across years and grades.

Generalized linear models were used to assess the effects of year, grade, and their interaction on both ω and R². Since both indices were continuous but constrained between 0 and 1 (and their means and variances were clearly not independent), we modeled them using the beta distribution. Because this distribution is not in the exponential family offered for generalized linear models in R, we fitted the models using Stan via the interface implemented by the "brms" package of R. Stan is a probabilistic language written in C++ which allows the fitting of statistical models via MCMC algorithms for Bayesian inference, implementing a wide range of statistical distributions. Models were fitted with four chains of 2,000 iterations each (the first half were discarded as warmup, leaving a total of 4,000 effective iterations per model).
Uninformative default priors were used for all parameters. A Bayesian framework was adopted only for the ease of using the beta distribution, which is not as easily implemented in other R packages.
To assess the relevance of the effects of year, grade, and their interaction, we fitted a series of alternative models with all combinations of predictors on both indices of interest. All analyses were conducted with the free software R (R Core Team, 2022) and the following packages: "lavaan" (Rosseel, 2012) for fitting CFAs, "semTools" (Jorgensen et al., 2022) for extracting model reliabilities, "brms" (Bürkner, 2017) for fitting generalized linear models via MCMC, and "ggplot2" (Wickham, 2016) for data visualization.
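The models above were fitted with brms/Stan in R; as a language-neutral sketch, a beta regression of a bounded (0, 1) index on a rescaled year predictor can be written as maximum-likelihood estimation with a logit link for the mean and a constant precision φ. The data below are simulated and purely illustrative.

```python
# Minimal sketch of beta regression for a bounded (0,1) index: mean modeled
# through a logit link, constant precision phi, fitted by maximizing the beta
# log-likelihood. Simulated data; the paper's actual fits used brms/Stan.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(0)
year = np.linspace(0, 1, 200)              # rescaled year predictor
mu_true = expit(0.5 - 0.8 * year)          # true mean trajectory
phi_true = 50.0
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

def neg_loglik(params):
    b0, b1, log_phi = params
    mu = expit(b0 + b1 * year)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

fit = minimize(neg_loglik, x0=[0.0, 0.0, np.log(10.0)],
               method="Nelder-Mead", options={"maxiter": 5000})
b0_hat, b1_hat, log_phi_hat = fit.x
print(f"estimated slope on year: {b1_hat:.2f} (true value: -0.80)")
```

The beta family handles the mean-variance dependence of proportions automatically, which is why it was preferred over a Gaussian model for ω and R².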
|
Revalidation of Xysticus tuberosus Thorell, 1875 (Aranei: Thomisidae) with notes on the related species
Two important papers dealing with spiders from the southern regions of the Russian Empire were published in the same year: Kroneberg [1875] and Thorell [1875], in which 45 and 64 new species, respectively, were described. Therefore, it was hardly surprising that some of the species were found to be synonymous: Micaria modesta Kroneberg, 1875 and M. rossica Thorell, 1875, or Xysticus lugubris Kroneberg, 1875 (currently considered in Ozyptila Simon, 1864) and X. tuberosus Thorell, 1875 (see WSC [2021]). In a study of thomisid spiders from Middle Asia, Marusik & Logunov [1990] illustrated two types of the Ozyptila lugubris male palp. Specimens from the western part of Kazakhstan showed the embolus bent near the tip, while material from the eastern regions had a straight embolus. Recently, we have had the opportunity to revise the types of both species and found that their males differ in embolus structure, while females have indistinguishable epigynes. The objective of this paper is to revalidate X. tuberosus, providing a new combination, to demonstrate the differences between the sibling species, and to briefly discuss their assignments.
Introduction
Two important papers dealing with spiders from the southern regions of the Russian Empire were published in the same year: Kroneberg [1875] and Thorell [1875], in which 45 and 64 new species, respectively, were described. Therefore, it was hardly surprising that some of the species were found to be synonymous: Micaria modesta Kroneberg, 1875 and M. rossica Thorell, 1875, or Xysticus lugubris Kroneberg, 1875 (currently considered in Ozyptila Simon, 1864) and X. tuberosus Thorell, 1875 (see WSC [2021]).
In a study of thomisid spiders from Middle Asia, Marusik & Logunov [1990] illustrated two types of the Ozyptila lugubris male palp. Specimens from the western part of Kazakhstan showed the embolus bent near the tip, while material from the eastern regions had a straight embolus.
Recently, we have had the opportunity to revise the types of both species and found that their males differ in embolus structure, while females have indistinguishable epigynes. The objective of this paper is to revalidate X. tuberosus, providing a new combination, to demonstrate the differences between the sibling species, and to briefly discuss their assignments.
Material and methods
Specimens were photographed using an Olympus Camedia E-520 camera attached to an Olympus SZX16 stereo microscope. Scanning electron micrographs were taken with a JEOL JSM-5200 scanning electron microscope at the Zoological Museum of the University of Turku, Finland. Digital images were prepared using CombineZP image stacking software.

Geographical names in Central Asia are spelled as in the labels. Depositories: ISEA - Institute for the Ecology and Systematics of Animals (Novosibirsk, Russia); NRS - Naturhistoriska Riksmuseet (Stockholm, Sweden); TNU - Taurian National University (Simferopol, Crimea); ZMMU - Zoological Museum of the Moscow State University (Moscow, Russia); ZMUH - Zoological Museum of the University of Helsinki (Finland).

ABSTRACT. Revision of the types of Xysticus lugubris Kroneberg, 1875 (now considered in Ozyptila) and X. tuberosus Thorell, 1875, thought to be a junior synonym of X. lugubris, and a study of recently collected material reveal that the two names are not synonyms. A new combination, Ozyptila tuberosa comb.n., is suggested for X. tuberosus. Males of the two species show clear differences in the shape of the embolus tip, although females are indistinguishable. Both species are illustrated and their copulatory organs are described. In addition, we illustrate Ozyptila inaequalis (Kulczyński, 1901), a species close to O. lugubris and O. tuberosa.

Ozyptila lugubris (Kroneberg, 1875)

NOTE. The original species description is based on a female specimen from Samarkand, collected by A.P. Fedchenko [Kroneberg, 1875: 35], and additional specimens of both sexes from Sarepta, collected by A. Becker (his female specimens were also used as syntypes of X. tuberosus). It is noteworthy that the figure of the male palp of X. lugubris presented by Kroneberg agreed well with the specimens known from the central and eastern parts of Middle Asia, although he had a male specimen from Saratov (that should belong to X. tuberosus). Most likely, Kroneberg overlooked the small turn of the embolus tip. Kulczyński [1901] provided no arguments to favour the species' transfer from Xysticus to Ozyptila. Most likely, the reason was the numerous clavate setae on the body and legs that are only known in Ozyptila.

Male palp with 2 tibial apophyses, both abrupt, not tapering; cymbium 1.2 times longer than wide, with a distinct and non-pointed tutaculum (Tt) in about a 4 o'clock position; tegulum lacking apophyses; tegular ridge located in a 10 o'clock position; pars pendula almost as wide as long (length/width ratio, 1.2); embolus originating in an almost 12 o'clock position (ca. 11:45), straight, gradually tapering, with fine transverse ridges.
DIAGNOSIS. Epigyne with a septum raised over the epigynal plate, rounded or abrupt anteriorly, slightly longer than wide; copulatory openings located on lateral sides of the septum. Endogyne not studied.
DISTRIBUTION. This species seems to range from South Kazakhstan to northern Iran, in West Kazakhstan being replaced with O. tuberosa. We checked the figures of the male recorded as O. lugubris from South Khorasan [Zamani et al., 2014] and found that it had the same embolus as specimens of O. lugubris from the Jambyl Area.

Ozyptila tuberosa (Thorell, 1875), comb.n., sp.revalid.

COMMENTS. Revision of the syntypes of Xysticus tuberosus and the material from Middle Asia identified as Ozyptila lugubris reveals that they are not conspecific, as was erroneously treated by several authors [Kulczyński, 1901; Charitonov, 1932; Marusik, Logunov, 1990; Mikhailov, 2013; WSC, 2021]. Kulczyński [1901] synonymized the two names without comments. He indicated that he had studied the female from Saratov. Most probably, part of the O. lugubris original type series (from Sarepta; not labelled as "sp.n." = not indicated as types), which is kept in NHRS, belongs to O. tuberosa.
DIAGNOSIS. Males of the two sibling species, O. tuberosa and O. lugubris, differ by the shape of the embolus tip (bent vs. straight) and the retroventral tibial apophysis. Females of these closely related species are indistinguishable.
Male palp with 2 tibial apophyses, both abrupt, nontapering; cymbium 1.2 times longer than wide, with a distinct and not pointed tutaculum (Tt) in about a 4 o'clock position; tegulum lacking apophyses; tegular ridge located in a 10 o'clock position; pars pendula almost as wide as long (length/width ratio, 1.2); embolus originating in a 12 o'clock position, straight, gradually tapering, bent in terminal ¼, covered with fine transverse ridges.
Epigyne with a septum raised over epigynal plate, rounded or abrupt anteriorly, slightly longer than wide; copulatory opening located on lateral sides of septum. Endogyne not studied.
DISTRIBUTION. The species seems to range from Crimea to the Ustyurt Plateau, and south to Turkey. Records of O. lugubris from the Caucasus (Dagestan, Georgia and Azerbaijan) [Otto, 2021] most probably refer to this species. Based on the figures of O. lugubris reported from Turkey [Demir, Seyyar, 2020], those samples undoubtedly belong to O. tuberosa. A single record from the environs of Varna, Bulgaria, by Drensky [1936] cannot be verified in the absence of figures. If it was not a misidentification, this record should belong to O. tuberosa.
Discussion
Both species considered in the paper, O. lugubris and O. tuberosa, are close to each other and differ from the type species of Xysticus C.L. Koch, 1835 (Aranea audax Schrank, 1803) and Ozyptila Simon, 1864 (Thomisus claveatus Walckenaer, 1837) by the shape of the copulatory organs. From all other genera of Coriarachninae, they differ by the female carapace and abdomen being covered with sand grains (a character unknown in other genera) and by a raised cephalic part (vs. non-elevated). Establishing a new genus for O. lugubris was proposed by V.Ya. Fet (in litt.) back in the 1980s. Since the 1990s, some papers have referred to the name "Ozyptila" for O. lugubris in quotation marks. There is one more species, O. inaequalis (Kulczyński, 1901), related to both O. tuberosa and O. lugubris. It was described based on the holotype female from "Khalgan" (= Kalgan, currently Zhangjiakou), China. Its known range extends from Eastern Kazakhstan [Marusik, Logunov, 1995] to Hebei [Li, Lin, 2016]. Since the male palp of this species has never been properly illustrated, we do so here and also provide the first digital photographs of the epigyne. The specimens illustrated in Figs 1G, 2D-E, 3G-M are from China, Shanxi, 4.IX.1980 (IZCAS-Ar.1812). The epigynes of specimens from the Almaty Area (Fig. 3I, M) agree well with those from China and have spaced fertilization ducts, while the female from Chimkent shows fertilization ducts touching each other (Fig. 3J-K). This may indicate that they belong to different species.
|
Investigating the efficacy of topical application of Ipomoea carnea herbal cream in preventing skin damage induced by UVB radiation in a rat model
Ultraviolet-B irradiation is a common environmental stressor that has detrimental effects on human skin. Natural sunscreens are well-known for their ability to benefit inflamed, sunburnt, and dry skin. This study examined the effect of a formulated Ipomoea carnea herbal cream on UVB-induced skin damage. We screened the bioactive compounds of I. carnea crude extract, which showed significant antioxidant activity. Additionally, we evaluated the cytotoxicity, revealing that I. carnea extract is less toxic to Vero cells (IC50 98.45 μg/mL) than to A375 cells (IC50 48.95 μg/mL). Based on this, we formulated the I. carnea herbal cream (FIHC) at 50, 100 and 200 mg concentrations and evaluated its organoleptic characteristics. The rats were then exposed to UVB radiation (32,800 J/m²) four times/week (on alternate days) before the cream was applied topically to the dorsal skin surface. Under UVB stress without treatment, rats showed deep dermal damage. In contrast, rats treated with the FIHC exhibited significantly reduced sunburn. Moreover, the histopathological and biochemical assays confirmed that topical application of FIHC potentially reduced the loss of skin elasticity and restored the imbalanced enzymatic and non-enzymatic antioxidant activities. Our findings amply demonstrate that the FIHC significantly accelerated the recovery of UVB-induced lesions through antioxidant activity and down-regulation of skin photodamage.
Introduction
The sun is the primary source of life and energy. It generates a steady energy flow from electromagnetic radiation with wavelengths between 290 and 4000 nm that reaches the earth's surface. Approximately 40% of sunlight is visible, 50% is infrared, and 10% is ultraviolet [1][2][3][4]. The UV region is categorized into three subcategories: UVC (200-280 nm), UVB (280-320 nm), and UVA (320-400 nm). Extended exposure to solar ultraviolet (UV) radiation can damage the skin. The main cause of skin damage is UVB radiation, with UVA radiation contributing to a lesser extent [5,6]. UVB radiation causes skin damage followed by sunburn, skin pigmentation, premature aging, and photo-carcinogenesis [7,8]. According to researchers, the sun has detrimental consequences, such as acute impacts (e.g., sunburn) and chronic dangers, such as wrinkling, melanoma, cancer, and immunological suppression [9,10]. Reactive oxygen species (ROS) are principally responsible for skin damage caused by UVB radiation because they interact with proteins and lipids and modify them [11]. Damage occurs when the production of ROS generated by UV radiation surpasses the skin's ability to eliminate them. Extensive research has demonstrated that the accumulation of ROS can trigger the production of inflammatory cytokines and metalloproteinases (MMPs) and the degradation of collagen, ultimately leading to skin photoaging [12][13][14]. To efficiently reduce the extent of ultraviolet (UV) radiation exposure and minimize the risk of sunburn, the use of topical sunscreen is a recommended approach [15].
Sunscreens shield the skin from the sun's harmful rays, which may cause erythema to emerge in the short term, as well as actinic photoaging and skin cancer to develop over time [16,17]. Sunscreens may lessen the risk of sun-induced skin cancer by decreasing the intensity of UV radiation that reaches the skin [6]. An ideal sunscreen should provide adequate protection throughout the whole UV spectrum and be non-irritating, non-toxic, and allergen-free [18]. Due to their antioxidant properties and UV-R absorption, plant-derived extracts have lately been explored as possible sunscreen components [18,19]. Much of the research focuses on developing herbal plant products (creams, lotions, gels, and pastes), as natural substances are safer and more biocompatible than manufactured materials [20]. Herbal creams incorporate one or two natural ingredients for specific cosmetic benefits. These compositions are applied to the skin to reduce the ROS imbalance and protect it from the harmful effects of UV light [21,22]. The sun protection factor refers to the degree to which the sun's harmful effects are mitigated by sunscreens. Several clinical trials have demonstrated that sunscreens may reduce the incidence of skin cancer, primarily squamous cell carcinoma and melanoma [23][24][25].
The phytocompounds in antioxidant plant extracts work together to create a synergistic effect that is less harmful [26]. In the context of plant drugs, topical applications of Panax ginseng [27], Aloe saponaria [28], Acalypha indica [29], Viola tricolor [30], Annona muricata [31], Centella asiatica [32], and Hyptis mociniana [33] show that plant parts used to formulate herbal treatments possess anti-inflammatory, antioxidant, and antimicrobial activities and have a potent ameliorative effect against UVB-induced skin damage [34]. Phyto-extract-loaded creams are semi-solids frequently applied to the skin and consist of two immiscible phases: oily and aqueous. Due to the emulsified structure of the skin, cream-formulated medications interact effectively with the skin and penetrate biological membranes more readily [35].
Many popular medicinal plants have been explored to find therapeutic compounds that can provide a remedy for UVB-induced skin damage, but there has been limited progress in exploring weed plants for similar ameliorative purposes [36,37]. The weed plant Ipomoea carnea is a member of the Convolvulaceae family, commonly known as 'morning glory'. It can be found in various parts of India, including Tamil Nadu, Kerala, Chandigarh, Madhya Pradesh, West Bengal, Rajasthan, and Maharashtra [38]. I. carnea shows rapid propagation, a wide ecological range, and exceptional competitiveness [39]. The plant exhibits allelopathic effects; its boiled roots are used as a laxative and to stimulate menstruation. Traditional healers utilize various parts of the plant to treat skin diseases, while the milky juice is specifically employed for the treatment of leucoderma and related skin conditions. It contains a variety of phytochemicals, including glycosides, reducing sugars, alkaloids, flavonoids, esters, fatty acids, alcohols, and tannins. The leaves specifically contain alkaloids, hexadecanoic acid, saponins, stearic acid, 1,2-diethyl phthalate, phenolic compounds, n-octadecanol, octacosane, hexatriacontane, tetracontane, 3-diethylamino-1-propanol, xanthoproteins, and flavonoids [40,41]. Some of these compounds are known to possess anti-diabetic, hypolipidemic, anti-inflammatory, antibacterial, hepatoprotective, and anti-cancer properties. Additionally, significant chitinase/lysozyme activity has been observed during screening [42][43][44][45]. To the best of our knowledge, there is a lack of scientific reports in support of its traditional use, and no notable herbal-cream-based studies have been conducted on I. carnea. Against this background, we developed a herbal cream using I. carnea to assess its efficacy against UVB-irradiation-induced skin damage in Rattus norvegicus under laboratory conditions.
Plant collection and extraction
Ipomoea carnea leaves were collected in April around Sivakasi (latitude 9.463898°N, longitude 77.760829°E), Tamil Nadu, India, and confirmed by taxonomists at the Centre for Research and Postgraduate Studies in Botany, Ayya Nadar Janaki Ammal College, Sivakasi, Tamil Nadu, India. The voucher specimen was registered under the number TPH-1552. The leaves were shade-dried at 35-40 °C and ground into a fine powder, and 100 g of the fine powder was subjected to Soxhlet extraction (Borosil, Madurai, India) using 600 mL of 95% methanol (AR grade) for 12 h at 60 °C. The extracted solvent was then dried under vacuum, yielding the final extract used for further analysis.
Gas chromatography-mass spectrometry analysis
The dried plant residues were dissolved in methanol and subjected to gas chromatography-mass spectrometry (GC-MS) analysis (GC-QP2010, Shimadzu, Tokyo, Japan) using thermal desorption. The GC-MS had an Rtx-5 capillary column (30 m × 0.25 mm × 0.25 μm film thickness). Helium was used as the carrier gas at a constant flow rate of 1.21 mL/min. The oven temperature was programmed to increase from an initial 60 °C to 200 °C at a rate of 5 °C per minute and then to 280 °C. The electron multiplier was set to auto-tune, and the scan was performed over the mass range m/z 40-650. To identify the fragmentation, the retention time and mass spectra of each separated peak were compared to the NIST 20 and WILEY spectral library search programs, as well as the literature.
Cell viability

Cell viability (%) was calculated as (Eq. (1)):

Cell viability (%) = [(OD sample − OD blank) / (OD control − OD blank)] × 100

where OD sample = absorbance of the treated sample, OD blank = absorbance of DMSO, and OD control = absorbance of the non-treated sample.
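The viability calculation described above follows the conventional MTT form and can be sketched in a few lines of Python; the absorbance values below are illustrative, not the study's raw readings.

```python
def cell_viability(od_sample, od_blank, od_control):
    """Percent viability relative to the untreated control (Eq. (1)):
    (OD_sample - OD_blank) / (OD_control - OD_blank) * 100."""
    return (od_sample - od_blank) / (od_control - od_blank) * 100.0

# Illustrative absorbances: treated well 0.45, DMSO blank 0.05, control 0.85
viability = cell_viability(0.45, 0.05, 0.85)  # ~50% viability
```

Blank subtraction in both numerator and denominator removes the solvent's own absorbance, so a well matching the untreated control reads exactly 100%.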
The IC 50 values were calculated by using GraphPad Prism software by plotting the percentage of inhibition against the logarithm of the extract concentration.
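The IC50 estimation the text describes (percentage inhibition plotted against the logarithm of concentration) can be approximated without GraphPad; the sketch below uses simple log-linear interpolation rather than Prism's sigmoidal fit, and the dose-response numbers are hypothetical, not the paper's raw data.

```python
import numpy as np

def ic50_log_interp(concs_ug_ml, inhibition_pct):
    """Estimate IC50 by interpolating % inhibition against log10(concentration).
    Assumes inhibition rises monotonically with dose and brackets 50%."""
    log_c = np.log10(np.asarray(concs_ug_ml, dtype=float))
    log_ic50 = np.interp(50.0, inhibition_pct, log_c)
    return 10.0 ** log_ic50

# Hypothetical extract dose-response (ug/mL vs % inhibition)
concs = [12.5, 25.0, 50.0, 100.0, 200.0]
inhib = [15.0, 32.0, 51.0, 72.0, 88.0]
ic50 = ic50_log_interp(concs, inhib)  # falls between 25 and 50 ug/mL
```

Interpolating on the log scale mirrors the sigmoidal shape of dose-response curves better than interpolating raw concentrations.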
Formulation of I. carnea herbal cream
The herbal cream was formulated using the I. carnea extract and a water/oil emulsion according to an adapted method [52] (Table 2; details are given in the supplementary file).
UVB irradiation and topical application
A total of thirty Swiss female Wistar rats (Rattus norvegicus; weighing 170 ± 10 g) were used in the study, which was performed at the Dept. of Pharmacology, K.M. College of Pharmacy, Madurai. The animals were housed in polypropylene cages under controlled environmental conditions (temperature maintained at 22 ± 1 °C, regular light/dark cycle) and given free access to food and water. All experimental protocols were authorized by the Ethical Committee (No. 661/PO/Re/S/02/CPCSEA) in accordance with internationally accepted guidelines for the use and care of laboratory animals outlined by the NIH. The experimental study was approved by the institutional animal ethics committee (Process number IAEC/SUNDAR.M/PhD/MKU/F9884/KMCP/70/2019).
The UVB chamber box setup was performed as described previously [57]. The UVB radiation source consisted of two UVB lamps (Philips 20 W Sunlamp, Holland) placed 30 cm above the animals, continuously producing a light spectrum with a peak emission of 315 nm in the UVB chamber. A spectroradiometer (IL-700, International Lights, USA) equipped with a broadband light sensor (SEE 400 type) was used to measure the energy delivered by the regulator [58]. To prevent the animals from moving during UVB exposure, they were anesthetized beforehand with an intraperitoneal injection of a mixture of ketamine (80 mg/kg) and xylazine (10 mg/kg).
For our studies, we slightly modified the methodology described earlier to adapt a short-term UVB irradiation dose [59,60]. Each rat received a total energy of 32,800 J/m², administered as follows: 600 J/m² per dose during the 1st week, 1800 J/m² during the 2nd week, 2200 J/m² during the 3rd week, and 3600 J/m² during the 4th week. Each exposure lasted 5 min, four times per week on alternate days. Thirty minutes after each exposure, 500 mg of the formulated I. carnea herbal cream was topically applied to the dorsal skin of each rat. At the end of each week, the rat dorsal skin (5 cm²) was marked, and red spot occurrence and size were recorded by laying a transparent sheet over the skin and taking photographs with a Nikon D90 DSLR camera with a macro 105 mm lens (DX-format, Nikon crop, Japan). At each time point, the dimensions of all the rashes and red spots on each rat were recorded and calculated (Eq. (2)).
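The escalating dose schedule is internally consistent if the weekly figures are read as per-session doses with four sessions per week; a quick arithmetic check:

```python
# Per-session UVB doses (J/m^2) for each week, four sessions/week on alternate days
weekly_dose = {1: 600, 2: 1800, 3: 2200, 4: 3600}
sessions_per_week = 4

total_energy = sum(d * sessions_per_week for d in weekly_dose.values())
# 4 * (600 + 1800 + 2200 + 3600) = 32,800 J/m^2, matching the stated total
```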
Skin elasticity or pinch test
The flexibility of the rats' dorsal skin was examined before and after treatment with the formulated cream using the skin recovery ability test, often known as the pinch test [2,61]. In brief, the midline of the rat's dorsal skin was raised with the fingers until its feet were barely touching the desk. The pinch was then released, and the time taken for the skin to recover (in seconds) was immediately measured and calculated.
Histopathological analysis
At the end of the experimental period, the dorsal skin was freshly excised and fixed in 10% neutral buffered formalin, then embedded in paraffin and sectioned at 2 μm using a microtome (Weswox Optik-1090A). The sections were affixed to slides and stained with haematoxylin-eosin (H&E). The degree of skin structure alteration and elastosis was assessed microscopically (Olympus microscope, CH20iBIMP with micro view ×86 software).
Biochemical assays
After completing the four-week UV exposure and formulation application as per the protocol, the skins excised from the animals were homogenized in 50 mM phosphate buffer (pH 7.0). The homogenized samples were then centrifuged at 15,000 rpm for 15 min at 4 °C, and the resulting supernatant was used for biochemical analysis. Enzymatic and non-enzymatic antioxidant status was analysed using the following standard protocols: estimation of protein content [62], superoxide dismutase (SOD) activity [63], catalase (CAT) activity [64], and lipid peroxidation (MDA) [65] (details are given in the supplementary file).
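Enzyme activities measured from tissue supernatants are conventionally normalized to the protein content estimated in the same homogenate; the paper's exact units are in its supplementary file, so the sketch below (units per mg protein) and its numbers are illustrative assumptions only.

```python
def specific_activity(enzyme_units, protein_mg):
    """Specific activity = enzyme units / mg protein, the usual way SOD and
    CAT activities are reported from tissue homogenates (illustrative units)."""
    return enzyme_units / protein_mg

# Hypothetical SOD reading: 12.4 units in a sample containing 3.1 mg protein
sod_specific = specific_activity(enzyme_units=12.4, protein_mg=3.1)  # units/mg
```

Normalizing per mg protein makes activities comparable across samples that differ in tissue mass or homogenization efficiency.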
Statistical analysis
The data are presented as mean ± SD. To identify inter-group differences, multiple group comparisons were conducted using one-way analysis of variance (ANOVA) followed by Duncan's comparison test. A value of p < 0.05 was considered statistically significant. All statistical analyses were performed using SPSS (version 21.0; IBM Corp., Armonk, NY, USA).
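The omnibus test in this pipeline is a one-way ANOVA. As a sketch, the F statistic can be computed directly with NumPy (Duncan's multiple range test itself requires specialist software such as the SPSS package used here and is not shown); the toy groups are illustrative, not the study's measurements.

```python
import numpy as np

def one_way_anova(*groups):
    """F statistic and degrees of freedom for a fixed-effects one-way ANOVA."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    k, n = len(groups), all_vals.size
    # Between-group and within-group sums of squares
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Three illustrative groups of five... reduced here to n = 3 for brevity
f_stat, df_b, df_w = one_way_anova([1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [5.0, 6.0, 7.0])
```

The resulting F is compared against the F(df_b, df_w) distribution at the chosen alpha (p < 0.05 in the paper).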
Antioxidant activity
The antioxidant effect of I. carnea was assessed by various assays, including the DPPH assay, metal chelation assay, hydroxyl radical scavenging activity, and superoxide anion radical scavenging assay, as shown in Fig. 2. The antioxidant potential of the crude methanolic extract of I. carnea was compared with that of standard ascorbic acid. The DPPH activity of the I. carnea extract increased significantly with concentration, reaching 83.02% at 1000 μg/mL, while standard ascorbic acid showed 94.26% inhibition at the same concentration (Fig. 2a). The extract also showed significant hydroxyl radical scavenging activity of 72.2%, versus 85% for ascorbic acid at 1000 μg/mL (Fig. 2b). The ferrous ion-chelating capacity likewise increased with concentration; at the maximum concentration of 1000 μg/mL, chelation was 75.6% for the Ipomoea extract and 84.3% for the ascorbic acid standard (Fig. 2c). Moreover, the maximum superoxide radical scavenging activity of I. carnea was 82.4%, compared with 90.3% for ascorbic acid at 1000 μg/mL (Fig. 2d). The reducing power of I. carnea was indicated by the increase in absorbance of the reaction mixture with increasing concentration (Fig. 2e). Overall, the antioxidant potential of the I. carnea extract approached that of standard ascorbic acid. Similar strong antioxidant capacity has been reported for other plant species, including Aristolochia indica, Piper nigrum, Ocimum basilicum, Aspalathus linearis, Tabernaemontana divaricata, and Camellia oleifera [26,[76][77][78][79][80]. Antioxidants can boost the endogenous antioxidant capacity of the skin and help neutralise reactive oxygen species (ROS) generated by external factors such as UV radiation from the sun. Almost every living thing on earth has some defense against the harmful effects of ultraviolet radiation [81].
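Percentages like the DPPH scavenging values reported above are conventionally derived from control and sample absorbances; a minimal sketch of that standard calculation (the absorbances are illustrative, not the study's readings):

```python
def percent_inhibition(a_control, a_sample):
    """Radical scavenging (%) by the standard formula
    100 * (A_control - A_sample) / A_control."""
    return 100.0 * (a_control - a_sample) / a_control

# Illustrative DPPH absorbances (typically read at 517 nm)
inhibition = percent_inhibition(a_control=0.800, a_sample=0.136)  # ~83%
```

A stronger scavenger bleaches more of the DPPH radical, lowering A_sample and raising the percentage.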
Cytotoxicity assay
The MTT assay was employed to evaluate the cytotoxicity of the I. carnea extract against A375 and Vero cells. I. carnea exhibited anti-proliferative activity against A375 with an IC50 value of 48.95 μg/mL. Plant extracts rich in phenolics, flavonoids, and terpenes, the major classes of secondary metabolites, have been found to promote cell death. A possible mechanism is that phenolic compounds further elevate reactive oxygen species, which are already amplified in cancer cells to promote their proliferation and differentiation [82]. To validate the morphological features of apoptosis, the extract-treated cells were examined under a phase-contrast light microscope (Fig. 3A and B). The treated cells appeared to undergo apoptosis, as evidenced by prominent features such as detachment from the culture plate, cell contraction, aggregation of nuclear chromatin, and loss of contact with neighboring cells [83,84]. In the Vero cells, the I. carnea extract showed an IC50 value of 98.45 μg/mL and was found to be non-toxic even at high concentrations, as observed both in cell morphology and in proliferation rate (Fig. 3A and B). Such non-toxicity plays a vital role in the successful formulation of creams and pharmaceutical products [85]. Since the I. carnea extract exhibited anti-proliferative activity against A375 while remaining less toxic to Vero cells, it was used to formulate a herbal cream against UVB sunburns.
Physiochemical evaluations of herbal cream
Based on pilot screening, the I. carnea extract was used to formulate herbal creams at three different concentrations: FIHC-50, FIHC-100, and FIHC-200. A plain cream was also prepared without plant extract. The formulated creams were evaluated for color, physical appearance, odor, homogeneity, spreadability, pH, stability, in-vitro permeation, and viscosity before topical application to experimental animals. The physiochemical evaluation showed that the formulated creams were pale green and semisolid, with a uniformly smooth texture and no phase separation (Table 1). The pH values of the formulated creams were slightly acidic, ranging from 6.1 to 6.3 throughout the observation period, whereas the plain cream had a pH of 6.2. The spreadability of the formulated creams was 12.66, 12.33, and 9.63 s/cm/g, while that of the plain cream was 18.3 s/cm/g. After 90 days, the spreadability had changed slightly to 13.1, 12.66, and 11.62 s/cm/g for the respective formulated creams, while the plain cream remained at 18.6 s/cm/g (Table 1). The spreadability time values of the I. carnea creams were thus in the range of 9-13 s. The formulated I. carnea creams had high stability, withstanding up to 90 days without losing their shelf life.
In vitro permeation study
In vitro permeation is a laboratory technique used to assess the ability of substances to penetrate biological barriers such as skin, membranes, or tissues [87]. It is commonly used in the pharmaceutical industry to evaluate the effectiveness of drug formulations and delivery systems [88]. Fig. 4 displays the permeation analysis results, which evaluated the release of I. carnea herbal cream across the dialysis membrane over time. Release of the formulated creams through the membrane was expressed as a percentage relative to the control sample. Compared to the control group, FIHC-50 increased gradually to 6.4% and 52.3% after the first and eighth hours, FIHC-100 reached 7.7% and 59.6%, and FIHC-200 reached 9% and 70.3% over the same interval; the sustained permeation of the 200 mg formulated I. carnea cream was thus 70.3% (Fig. 4). The in vitro DPPH scavenging activity of the formulations released through the artificial membrane substantiates the adequate release of polyphenolic compounds. According to a previous report, DPPH is an effective marker for detecting the release of the extract and for assessing release in terms of antioxidant activity in the receptor solution [56]. This experiment therefore indicates that the I. carnea cream formulations can penetrate the skin well when applied.
Viscosity measurement
Viscosity is an important physical property of any topical cream formulation [89,90]. As the rpm increased, all the tested creams exhibited gradual changes in viscosity. FIHC-50 and FIHC-100 displayed viscosity ranges of 19,650 to 4781.667 and 19,855 to 7283.333 cPs, respectively, showing that the two creams had comparable viscosity. The viscosity of FIHC-200 was greater, ranging from 23,545 to 5293.33 cPs. In contrast, the plain cream had the lowest viscosity, ranging from 18,538 to 4241 cPs (Fig. 5). Viscosity can be affected by temperature changes and other factors. Thus, the formulated cream is easy to apply smoothly onto the skin while still penetrating deep into body tissues for healing purposes [91].

Fig. 6. Photographic images demonstrating the physical appearance of UVB-irradiated dorsal skin by treatment group. G1 refers to normal control skin, unexposed to UVB and without any treatment; G2 refers to skin exposed to UVB without any treatment; G3 refers to skin exposed to UVB radiation but treated with plain cream; G4 refers to skin exposed to UVB radiation but treated with FIHC-50 cream; G5 refers to skin exposed to UVB radiation but treated with FIHC-100 cream; G6 refers to skin exposed to UVB radiation but treated with FIHC-200 cream. At the end of every week, the rat's dorsal skin was photographed, spots were noted, and size was measured using an overlaid clear plastic sheet.
UVB induces skin damage and topical treatment
The impacts of UVB on the skin tissue of female Wistar rats (R. norvegicus) were studied. Initially, reddening of the skin was noticed in the exposed area. Later, lesions developed with severe sunburn on the surface of the skin layers. The severity of the skin damage and its mitigation by topical application of the formulated I. carnea herbal creams are shown in Fig. 6 for G4, G5, and G6, whereas no such effect was observed in G2 and G3 rat skin. We observed significant changes in rats exposed to UVB without any treatment (G2) compared to the unexposed control group (G1). UVB exposure increased the number of red spots, the area of wrinkles, and skin discoloration, and some epidermal changes were also observed in the UVB-exposed animals (Fig. 7A and B). The structural changes in the epidermal layer lead to non-melanoma spots on the skin, appearing as coarse red spots through the rapid generation of ROS [92,93]. The overproduction of ROS depletes the tissue's inherent antioxidants, which curtails the cells' ability to protect themselves [94][95][96][97]. Duncan's test revealed that topical treatment with FIHC-200 cream beneficially restored the reddening (G6), reducing red spots and expediting wound contraction compared to UVB irradiation alone (G2) (Fig. 7A and B).

Fig. 7. Histogram demonstrating the extent of skin damage in experimental rats across treatment groups. (A) The effect of I. carnea cream in reducing the visual score of red spots on the skin surface; (B) the effect of I. carnea cream treatment on the macroscopic changes of healing progression between phases (all groups except G2). The data are expressed as mean ± SD, and significant differences among the groups were identified using one-way ANOVA with Duncan's multiple range test at P < 0.05, indicated by different superscripts in the values.
Conversely, the application of plain cream (without plant extract) to the irradiated skin surface failed to produce any positive effects in terms of restoration of damage (G3). Topical application of the formulated creams to the irradiated surface offered protection only at high concentration: treatment with the 50 and 100 mg FIHC creams failed to produce significant reversal of the UVB damage noticed in the earlier weeks (G4 and G5), whereas the high concentration of 200 mg FIHC cream was effective enough to cause maximum reduction of UVB-induced damage on the dorsal skin surface (G6). In the healing process, numerous enzymatic and non-enzymatic pathways are engaged in reducing the elevated ROS levels in the skin [98]. Antioxidants such as ascorbic acid (vitamin C), carotenoids, α-tocopherol (vitamin E), and plant phenols play a role in averting premature skin aging and cellular harm [25,28,33,99]. The primary reason for using sunscreen is protection from harmful UV rays, which helps prevent and minimize premature aging, tanning, sunburn and blotchiness on the face, promotes skin health, and reduces the risk of skin cancer [100]. An earlier study indicated that the application of UV sunscreen can reduce the formation of free radicals by approximately 55% [101]; however, sunscreen containing antioxidants decreases free radicals more effectively than sunscreen alone. Our research revealed that the I. carnea crude extract possesses abundant antioxidant properties, and on this basis we prepared an I. carnea herbal cream that may provide greater protection against shorter wavelengths.
Skin elasticity/pinch test
To quantify the dorsal skin elasticity in rats, a pinch test was carried out every week for four weeks immediately after UVB treatment. Photographs of the dorsal skin after stretching are shown in Fig. 8. The time taken for the skin to return to normal after pinching was measured in animals exposed to UVB and treated with the formulated cream for four weeks; the results are shown in Fig. 9. The recovery time of the untreated animals' skin (G2) was much shorter than that of the treated animals (G3-6). Exposure to UVB radiation initially reduced skin elasticity and eventually led to the development of wrinkles [102]. One possible explanation is that elastases secreted by surrounding cells, directly induced in fibroblasts by UVB radiation, degrade the elastic fibers, resulting in curling and/or reproduction of elastic fibers. Alternatively, collagen fibers may prevent the regeneration of elastic fibers after the breakdown of existing ones, leading to a comparably convoluted pattern of newly formed elastic fibers [103]. The time for rat skin to regain its initial shape after UVB-induced deformation was significantly longer, up to 5-fold, compared to plain cream-treated skin (G3). There was no significant difference between the UVB-exposed untreated group and the plain cream group, indicating that plain cream had no ameliorative effect on the skin surface. However, application of the FIHC-200 cream tended to enhance skin elasticity, and the recovery time was significantly shorter than in the UVB-exposed untreated group. Interestingly, the effects of the 50 and 100 mg FIHC cream treatments were also compared: these treatments showed no lesions, but a few shallow wrinkles were observed, and the recovery time was significantly longer than in the group of rats not exposed to UVB (G1). As shown in Fig. 9, at the end of this experiment the expected results were observed in rats treated with FIHC-200 cream, which could promote skin elasticity. The presence of the inhibitor caused a decrease in skin elasticity and inhibited wrinkle appearance [104].
Histopathological analysis
Histological studies provide strong evidence that treatment with the I. carnea herbal cream had an ameliorative effect on UVB-induced skin alterations. Under microscopic assessment, the non-irradiated group (G1) exhibited a relatively intact structure. Fig. 10A shows skin with a stratified epidermis in which the stratum corneum, stratum granulosum, stratum spinosum, and stratum basale were distinguishable. A thick connective-tissue dermis containing a papillary layer, a reticular layer with collagen fibers, and hair follicles was also evident. After UVB exposure, the tissue network was lost in the dermal papillary and reticular layers, and UVB induction of fibroblast metabolic activity was noticed. The dermal vessels became enlarged and leaky, accumulating excessive basement membrane-like material. Inflammatory cells congregated around the vessels, mast cells proliferated, and signs of degranulation were observed in connection with UVB treatment of the dorsal skin of R. norvegicus. The severity of the damage caused by UVB to skin tissue is thus evident from our studies (Fig. 10B). Application of plain cream to the UVB-exposed area resulted in a reduction in connective tissue and the loss of hair follicles (Fig. 10C). In contrast, the herbal creams prepared from I. carnea were effective in restoring the tissue structure in the irradiated area, especially at the 200 mg concentration, whereas the 50 and 100 mg concentrations were less effective (Fig. 10D-F). Histological observations therefore confirm that the creams formulated from I. carnea extract ameliorated the UVB-induced damage to the skin structure. The robust antioxidant properties and the diverse phytochemicals present in the cream may account for the potential mechanism underlying epidermal repair through the scavenging of free radicals [105].
Biochemical analysis
Typical changes in soluble protein, SOD, and CAT activity and in the MDA content of R. norvegicus skin exposed to UVB and treated with I. carnea herbal cream are shown in Fig. 11. Upon short-term UVB exposure, downregulation of protein content and of SOD and CAT activity was noticed in G2 (Fig. 11A, B, and C). Exposure to solar-simulated UVR has been shown to cause a temporary decline in SOD activity in human skin, followed by an elevation in conjugated diene double bonds, indicative of lipid peroxidation [106]. The free radicals overproduced by UVB irradiation of the rat skin spiked lipid peroxidation, evidenced by the increased MDA level compared with the unexposed rats in G1 (Fig. 11D). However, applying plain cream (without any plant extract) to the UVB-irradiated skin was found to enhance the levels of protein content, SOD, and CAT compared to the UVB control. Proteins are known to be major targets of oxidative modification; amino acid alterations caused by oxygen radicals and other activated oxygen species frequently result in changes to structural or enzymatic protein function [107]. Moreover, one mechanism by which melanoma cells promote the metastatic process might involve an increase in oxidative damage to the surrounding tissue [108,109].
Conclusion
Our findings suggest that topical application of the formulated I. carnea herbal cream exhibited a stable and potent ameliorative effect on UVB radiation-induced skin burns, owing to the presence of bioactive compounds with high antioxidant properties. The efficiency of the creams can be improved by increasing the concentration of I. carnea extract. Clinically and histopathologically, the formulated I. carnea cream has a potent defense mechanism, and its topical application can diminish UVB-induced damage on the skin surface. Similarly, the effectiveness of the formulated I. carnea herbal cream in restoring the imbalanced enzymatic and non-enzymatic activity after treatment of UVB-irradiated skin is supported by the results of the biochemical assays. Thus, the cream showed the potential to mitigate the damage caused by UVB irradiation. Based on the current research findings, the I. carnea herbal cream shows promise as a viable option for medicinal applications.
Author contribution statement
Madasamy Sundar: Conceived and designed the experiments; performed the experiments; contributed reagents, materials, analysis tools or data; wrote the paper. Krishnasamy Lingakumar: Conceived and designed the experiments.
Fig. 1 .
Fig. 1. Bioactive compounds of I. carnea analysed by GC-MS. The peaks on the graph represent the percentage of identified phytoconstituents of I. carnea extracts.
Fig. 3 .
Fig. 3. The morphological changes in A375 and Vero cells after 48-h treatment with I. carnea extract, observed under a phase-contrast microscope (a, A375 cells control; b, A375 cells treated with I. carnea extract; c, Vero cells control; d, Vero cells treated with I. carnea extract). Fig. 3B: MTT assay to determine the IC50 value of I. carnea and analyze its effect on A375 and Vero cell viability.
The cream excipient had potent cling with the plant extract, consistent with the physical and chemical nature of the formulations [86].
Fig. 4 .
Fig. 4. A comparison of the in vitro permeation profiles of the formulated 50 mg, 100 mg, and 200 mg I. carnea creams. Each value represents the mean ± SD (n = 3).
Fig. 8 .
Fig. 8. Amelioration efficiency of the formulated I. carnea creams on UVB-induced damage to skin elasticity, assessed through the pinch test. The yellow arrow denotes skin sagging.
Fig. 9 .
Fig. 9. The time-response curve was examined to evaluate the anti-aging effects of I. carnea herbal cream on UVB radiation-induced skin damage. Recovery time under topical FIHC treatment (G4-6) significantly decreased compared to G2 and G3. All data are represented as mean ± SD; values with different superscripts are significantly different among the groups by ANOVA with Duncan's multiple range test at P < 0.05. Superscripts a-f denote values with a statistically highly significant difference (P < 0.05) compared with groups 1-6, respectively.
Table 1
Evaluation of stability and organoleptic parameter studies of I. carnea herbal cream.
“Back to the future”: Socio-technical imaginaries in 50 years of school digitalization curriculum reforms
This paper examines major Swedish school digitalization curriculum reforms over the past 50 years by analyzing similarities and differences between the late 1960s, mid-1990s, and early 2010s curricular reforms. By drawing on Jasanoff's (2015) socio-technical imaginary concept, we examine how digitalization reforms are constituted discursively and materially in struggles over curricular knowledge content, preferred citizenship roles, and infrastructural investments, especially by relating curricular reforms to governance transformations. One recurrent strategy of reform is what we call the back to the future argument, where curricula address an ideal citizenship of future societies, politically used to support change. We suggest that across more than 50 years, school digitalization has been surrounded by strong and shifting struggles over curriculum content and governance transformations. This pendulum movement (Englund, 2012) has taken place partly through central, state-led or newly monopolized technology governance and infrastructures and partly through decentralized forms of governing (e.g., in municipal contexts and via IT-supported networks).

Seminar.net: International Journal of Media, Technology and Lifelong Learning, Vol. 16, Issue 2, 2020
Introduction
In Sweden, it has been well over 50 years since the first political and curricular initiatives on digital technologies in schools similar to the ones that exist today were introduced. Two major curriculum reforms, in 1980 and 2017, are exemplified in the introductory quotations and point to some recurrent features of the reforms. One is the struggle over the knowledge content and over what technologies are considered relevant for a future society and citizen, reflected in the changing terminology and technologies of each era. Digitalization is the more recent term, but it was preceded by the terms and technologies of earlier eras, like computerization and data knowledge in the 1960s and information and communication technology (ICT) based learning in the 1990s. Another reform feature concerns what purposes and outcomes school digitalization curricula should have, including when and how they should be introduced. The first quotation above, from an early "pioneer" in the field, illustrates some of these tensions and ongoing attempts to introduce digital technologies, as well as the early exchanges and borrowing of ideas on school digitalization between different countries. Arguments for introducing new school digitalization curricula over time exemplify these discursive struggles. The 50 years of digitalization reforms therefore raise questions about what the recurring ideas of the reforms have been and especially how certain desirable knowledges and technologies for an imagined society and future play an important part in the construction of school digitalization curricula. As our title, "Back to the Future," suggests, such social and performative discourses, together with digital technologies, constitute certain desirable future characteristics, for example, being at the forefront of technological advances as a nation by setting the stage for the future society through education reform and infrastructure investments.
Desirable concepts like "modernization," "innovation," and "disruption" have commonly been used for the purpose of motivating digitalization reform and expectations. Therefore, such discursive work is performative and represents a political will to break with the past to shape futures (Popkewitz, 2008), and they function as strategies to motivate curricular reform. We also used the back to the future argument to refer to how such dominant knowledge arguments for the future society, like programming competence and learning to code, also referred to in the introductory quotations, are repeated during the time period in focus here.
School digitalization reforms and investments in technology have always taken place in parallel with developments in digital technologies and scientific knowledge in society, and reforms have often involved high expectations of digitalization's power to change, renew, and improve education. Education technologies formed an important part of school curriculum and education reforms early on, in the decades after the Second World War, for example by introducing the needs of digitalization for a science-based future society and competent citizen. Hence, the reconstruction of nation-states' public education and societies included investments in early education technology, and in comparison to many other Nordic countries, Sweden, having avoided the war, is considered to have been early in investing in a strong, equality-oriented public sector (Hallsén & Nordin, 2020). However, since the 1990s, similarly to other countries, Sweden has rapidly dismantled public sector education, privatizing large parts of the education system through market reforms (e.g., Ball & Youdell, 2007; Englund, 2018; Verger et al., 2017) and thereby moving from state to more decentralized governance. Such governance transformations pose democratic challenges. In the second quotation at the start of this paper, the Teacher Union chair references such democratic and economic challenges by addressing local responsibility for investment and expected competence improvements. Equality-oriented reform ambitions and equal technological accessibility and opportunity throughout the education system may be harder to achieve when school governance is decentralized to local and municipal governments or outsourced to private sector companies.
Aim, questions and approach
Swedish school digitalization curriculum reforms make up the case in this article. The aim is to explore the configurations of major digitalization reforms from 1969 to 2019, raising two questions: How are different school digitalization curriculum reforms constituted regarding imaginaries of future societies, knowledge content, and digital technologies? Following this, how are the different reforms converging or diverging over time and in relation to governance transformations?
The approach used here to critically examine curricular digitalization reform draws on Jasanoff's (2015, p. 19) conceptualization of socio-technical imaginaries as part of political reforms and advances in science and technology. Socio-technical imaginaries are formed through socially and publicly performed discourses of desirable future societies, as well as through inscriptions and the materiality of technology. Hence, socio-technical imaginaries include both the discursively and materially intertwined and negotiated formations of digitalization curricula: discourses of futures and future citizens, knowledge content, and the materiality of digital technologies, devices, infrastructures, and investments. This approach suggests that digital "things" are co-produced by imagined visions of digitalized futures. Digitalization curricula, covering both discourse and materiality, are among the most powerful arenas for translations and uptakes of socio-technical imaginaries, and as such, they need to be critically examined.
Other studies have defined school curricula as historically and contingently constituting what are considered the relevant knowledge contents for education, in dynamic relation to political processes and reforms (Popkewitz, 2008). In that sense, school curricula are part of political government, nation-state ambitions, and ideals of citizenry competences (Englund, 2012, 2018). Curricular reforms can also be seen as the results of scientific and political struggles, prompting the prioritization of certain educational content and strategies (Bernstein, 2000). Therefore, curricula are not to be considered neutral but rather the result of complex negotiations between different social groups and interests (Lundgren, 1983), as are digital technologies and computers in schools (Selwyn, 2002). Jasanoff's approach allows for analytical sensitivity toward new circumstances and challenges in understanding how socio-technical imaginaries are circulated and adopted. This can include different local, national, or international curricular contexts (e.g., Verger et al., 2017; Wahlström & Sundberg, 2017). Therefore, even if curricular aspects of education reforms operate across national and global contexts, they will always be adapted and adjusted in relation to the specific needs and traditions of a country, region, or setting. Thus, there is no unidirectional transfer of curricular reform from one site to another; such transfers are best described as multidirectional and contingent.
Materials and analysis
The materials used for this study are mainly publicly available official documents, including archived interviews, newspaper articles, syllabi, and education policies, including government strategies, evaluations (e.g., Jedeskog, 2005; Riis, 1987), and investments in digital infrastructures. Two quotations introduced the case. The first is from an open archive of testimonial interviews conducted with digitalization pioneers in 2008 for the research project "From Mathematical Machine to Information Technology" (Emanuel, 2009). It includes 10 interviews about school digitalization with "once-powerful education actors" (Selwyn, 2013), allowing for secondary use of personal but also retrospective opinions of what was at stake. Six of these transcribed interviews from 2008 (Boström, Broman & Bäck, Nilsson, Nilsson & Loftrup, Nydahl, and Riis) have been repurposed in our analysis. The second quotation is from a database search in the Swedish Media Archive covering the school digitalization reform launched in the autumn of 2017 (see also Williamson et al., 2019).
Because the reform documentation and curricula covering 1969 to 2019 revolve around imagined futures, knowledge, and digital technology, Jasanoff's concept was relevant. An analytical aim was to pay attention to discursive and material content, meaning political and curricular arguments as well as investments in devices and infrastructures, regarding technology as co-produced by discursive powers. Due to the extensive 50-year period being examined, the overall struggles and main reform elements are primarily represented, making details, local circumstances, and other voices less visible. The long period also led us to explore divergences and convergences of the political struggle over time. The time frames were borrowed from Englund's (2012) conceptualization of education reform, in which educational citizenship equality is addressed and operates via pendulum movements of reform and curricular politics: a centralistic government in the 1960s, renewed in the 2010s, with a clear break of decentralized governance in the 1990s (Englund, 2018). Next, our analysis is presented chronologically, as it resulted in three main reforms, one in the late 1960s, the second in the mid-1990s, and the most recent in the late 2010s, each displaying certain socio-technical imaginaries at play.
Programming and school computers in 1960s centralized curriculum reform
Curricular ideas on computers and digital technologies were officially introduced in the Swedish parliament in the late 1960s, and in 1971, the National Board for Education (NBE) prepared a new school subject, "data knowledge" (datalära, mentioned earlier). Textbooks supporting the new school subject, produced by pioneers (Nilsson & Loftrup, 2008), like Computers on Our Terms in 1976 and A Programmed Future in 1979, say something about the orientation of the subject toward student perspectives and future socio-technical imaginaries. In the 1970s and 80s, a series of government-funded computer technology initiatives (e.g., the DOS and DIS projects) aimed at developing both hardware and software, and pedagogical methods followed. A government delegation had already been sent to the United States in 1966 by the Ministry of Education and Ecclesiastical Affairs to investigate the early generations of education technology, called computer-aided instruction, CAI (Karlsohn, 2009). However, it was not until 1984, 13 years after the commission was instantiated, that the first integrated data knowledge curriculum for all school years started. It included 80 hours of teaching in secondary school, with the aim of critically fostering knowledge on computerization in society.
The struggles of the reform were thus a characteristic feature of the digitalization curriculum. According to Riis (2008), two main conclusions had been drawn from the earlier projects and were recommended to influence the new overall syllabus in the early 1980s, Lgr80 (NBE, 1980a). Riis added that "this is how she remembers it" now, based on her later evaluations. One was that computers should only be introduced in vocational upper secondary school, and another was that the new curriculum should prevent "the mistake from earlier education technology reform," echoing the US- and CAI-inspired modularized curricula, so that "drill" and "automatization" would be rejected in favor of students' use and self-control (Riis, 2008).
Sweden, known for its close science-state (party politics) ties, as well as its social and science education ties, epitomized by the state's step-by-step "social engineering," was considered to have advantages compared to other countries in terms of engineering public reform. According to Riis (2008), "Sweden has been about 10 years ahead of almost all other countries in Europe that were involved in the second world war when it comes to school reform," adding that a strong future imaginary and tactic had regulated that government:

Thus, one has a vision of a different and better society . . . then one has to place that vision quite far into the future . . . so far into the future that it is possible to achieve real decision-making power, on top of or alongside yesterday's old decision. (Riis, 2008)

Influential state governmental connections during these years also included the domestic industry sector, exemplified in the main vision of the first major initiative, DIS, introducing computers from 1974 to 1980:

Knowledge is required to increase the individual's influence over computer use. Knowledge of computers and the use of computers is also needed to preserve our country's role as industrial nation. (NBE, 1980b, p. 1)

The future imaginary presented is a computerized and industrial society, secured by the knowledgeable individual via schooling and the nation's industrial labor and made governable through social engineering and a strong future imaginary (Jasanoff, 2015). In retrospect, Broman & Bäck (2008) refer to the shared understanding of digitalization as a "social drama" that influential pioneers drew on, based on the argumentative logic that "a major societal revolution is happening," and "this is what we must do," as "Sweden has to keep our place in the world."
This self-reflection mirrors not only the social and argumentative powers of digitalization reform but also how digital technologies are interrelated with the nationally competitive computer-based societal imaginary.
Interestingly, similar to the reform in the 2010s (NAE, 2017), a basic form of programming was also included in the early computer mathematics curriculum. Locally, basic programming was often introduced by self-organized science teachers using the invested school computers (Nilsson, 2008; see introductory quotation). A critical evaluation followed the implementation, conducted by Riis (1987), who stated that students had not received the required teaching hours and that the results in mathematics had dropped, considered to have been caused by the time-consuming programming in mathematics. According to Nilsson (2008), "software and programming" characterized the period, but "nothing much else happened," as most projects and initiatives were stopped. In 1987, data knowledge was also excluded from the curriculum. Major curricular struggles involving teacher unions also emerged and interrupted the process, and the broad implementation, time, and resources allocated to the new curriculum were debated, despite the fact that the state government had prescribed protection of teachers' workloads (Broman & Bäck, 2008). Alternative curricula also spread, as many teachers preferred applied programming (Nilsson, 2008; also the introductory quotation). These long curricular struggles and strong trade union resistance protracted the establishment of the curriculum, which is typical of strong state-centered government and the era's ideal of politically anchoring reform (Englund, 2012).
The exchange with other countries also made influential actors suggest a Swedish school computer infrastructure, something frequently echoed in the interviews. One pioneer referred to how comparisons pushed the idea forward: "Denmark had Pickoline, Finland Mikro-Mikro, and England the BBC computer. Norway had joined the Swedish project" (Nydahl, 2008). During the mid-1980s, the Swedish government and municipalities co-financed secondary school computer infrastructure (SEK 60 million, 1984-1987) in parallel with the prolonged implementation of the new subject. The development of relevant software for the assigned computers brought problems, however. In the mid-1980s, the Swedish Ministry of Education (MoE) initiated a school software project and a Nordic expert group to exchange software. Although severe usability and interoperability problems occurred (characteristic of micro-computers then), these Nordic-produced, state-financed programs later "turned into popular market products." Because "all these software programs were freely accessible as the organization ended" (Nydahl, 2008), the commercial sector could profit from them. One of the interviewees stated that the software projects were also outmaneuvered by new types of operating systems, Microsoft Office packages, and a new focus on information search and process writing in schools. Two different interviews included similar retrospective comments:

We probably had our doubts that it was a dead-end project from the beginning, to make a Swedish blue-and-yellow computer. (Nilsson, 2008)

A Swedish school computer named Compis [Computer in School, procured in 1982], which one now, a few years after and with the facts at hand, regards as one of the biggest flops in Swedish technology development history. That computer was completely impossible to use. (Boström, 2008)

Hence, a Swedish-profiled school computer brand, similar to those of other countries, and schools and students having an automated teaching machine were considered ideal by government and industry at the time, and while the output can be considered limited in retrospect, the materialization of this strong imaginary was pervasive. The National Agency of Education (NAE, the public agency succeeding the NBE) called the 1980s investments "almost textbook examples of technological push; computers were pushed onto schools and teachers that never asked for it" (NAE, 1999, p. 24). Taken together, state-initiated reforms and an industry-oriented curriculum characterize this first digitalization reform period. This includes extensive power struggles over knowledge content on computerization and programming, where the materialization of school computers gained important impact through the imaginary of Swedish society.
ICTs, networks, and IT-billions in the 1990s decentralized curriculum
The 1990s, particularly 1994, comprised several major reforms in Swedish politics and curricula. The period before had been characterized by power struggles over interests and investments, but it is often described as a time of undecided matters for school digitalization (Jedeskog, 1996). A major education and public sector reform had begun in the 1980s: the decentralization and deregulation reform (Englund, 2012, 2018). With it, the responsibility for schools was regionally transferred to municipalities, including the allocation of funds, which affected school digitalization issues from the mid-1990s onward. The overall curriculum was reduced to goal-oriented strategies to be operationalized by local schools and teachers. Local schools were also to provide strategies and infrastructure, which, according to one pioneer, made regional technological capacity better but more unevenly distributed. In 1994, when the second major digitalization reform took off, the municipalities were, due to decentralization, entrusted with responsibility for implementation, while the state-controlled NAE was responsible for coordination. The new terminology was information technologies (IT). The curriculum prescribed students' ability to use IT as a "tool for knowledge seeking and learning" (MoE, 1994). According to the new primary school curriculum, Lpo94 (MoE, 1994), in line with imagined future needs, students should be "able to orient themselves in a complex reality, with a large flow of information and a rapid rate of change." These represent a will to reform connected to the back to the future argument. Education was to foster an IT-savvy citizen who contributed to the skills and knowledge of the information society, and IT was also used to substantiate such competence. The difference this time was the technology-saturated future economy imagined.
In 1994, the white paper "Wings for Human Ability" (IT Commission, 1994) epitomized the far-flung technology-positive ideas of the mid-1990s. Persuasive visions of school use of IT or ICT added to these ideas. The internet and the web had been introduced, and the information superhighway was a common socio-technical imaginary at the time. Arguments around global connectedness and investments in lifelong learning through individuals' digital skills became linked to economic competitiveness agendas (MoE, 2001), echoing the trans- and supra-national 21st century skills curriculum discourse in the United States, the Organisation for Economic Co-operation and Development (OECD), the European Commission, and more. A new economic sector and strong national confidence had emerged with IT businesses establishing themselves in Sweden during a short-lived IT boom, and IT in schools gained new momentum (Karlsohn, 2009). For IT and education businesses, schools were increasingly seen as market investments, or "a gateway for future sales to companies," as one interviewee put it (Boström, 2008), in the belief that "students would prefer the computers they had used before" when they entered work life. Hence, the socio-technical imaginary here draws on new forms of incentives for the desired future and on schools as marketplaces. As part of this, networking became a renewed political tactic, and the public sector formed networks with private IT and education businesses. The NAE was commissioned to coordinate information networks, such as the national School Data Network, the resource-sharing platform Multimedia Department, and the Nordic Odin network. Networking also became part of the new curriculum by communicating the need to network humans and digital resources to support teaching and school development.
Social and digital networks also characterized how major investment was motivated in the 1990s. Important financial initiatives and efforts to digitize schools came from the Knowledge and Skills Development Foundation, the KK Foundation (SEK 1 billion), based on employee funds set up by the former left-wing government, now liquidated by the right-wing government. During a political hearing (IT Commission, 1998), comments were made about investments of the Lighthouse Project, set up to be a role model in 1995:

The one billion already is a lot of money; add to this the extraordinarily large sums that Swedish municipalities also spend, estimated to be another three billion from 1996 to the turn of the millennium. (IT Commission, 1998, p. 21)

Added to these billions, teacher professionals who had made extra contributions to school development and renewal were issued an extra SEK 6 billion during the contract period, debatably referred to as forced reform adaptation for teachers. Other tensions included only 28 municipalities (out of over 280 at the time) having access to KK Foundation funding; the remainder had to make their own investments. As many municipalities refrained from investing in digitalization, the dissemination of results and experiences became more difficult, and the opportunities to reform were unevenly distributed economically and nationally.
A follow-up to the KK initiative was another large initiative (SEK 1.7 billion), "Tools for Learning, National Program for IT in Schools (ITIS)" (MoE, 1998). The idea was to improve schools' internet access and to support teacher work teams with learning resources and school development, also by appointing teacher education institutions. Students' learning was highlighted and set against traditional teaching, and schools and teachers were guided to redefine their work on such grounds. All municipalities participated, but the investment was not distributed generally and flexibly, meaning that the large investments in teacher PCs (SEK 700 million) were unevenly distributed. Even if the funding provided a basic digital infrastructure in more schools, it also forced many municipalities into infrastructure agreements where technology providers gained major influence over political decisions (National Audit Office, 2002). Similar to earlier initiatives, ITIS was criticized for neglecting earlier reforms and results and for being prepared too quickly and sub-standardly relative to the decentralized municipalities. It even started before the earlier KK Foundation project had ended and been evaluated. One interviewee argued that the era's political initiatives had continual "government problems, [as] project after project followed but without interconnection" and said, "It would have been better if the state and municipalities had had a better collaboration," as "these changes are not made quickly" (Nydahl, 2008). Often depicted is the lack of national coordination but also the wait for local power distribution, given that decentralized municipalities were expected to strategically expand ICT into schools through their own large economic investments. One major municipal initiative became the local uptake of the global one-laptop-per-child movement in the early 2000s, connecting municipal schools with major technology providers like Apple.
The purchases were supported by regional stakeholders (e.g., the Swedish Association of Local Authorities and Regions, 2019) and philanthropic non-governmental organizations, and were often processed by private broker companies, made possible via public procurement. Education and IT businesses now had new opportunities to sell hardware, software, and training directly to schools. In a sense, the curriculum was thereby changed, as how and why digitalization was introduced in schools was substantiated by private interests (cf. Picciano & Spring, 2013; Williamson et al., 2019).
Compared to the early 1960s and 1970s curricula, the ambition in the 1990s (and continuing in the 2010s) was greater and centered around accelerating Sweden's economic position in a global knowledge society and returning to the demand for current and future competitive citizen knowledge. The earliest reforms were more cautious and explored whether digital technology, mainly computers and computerization, came with issues that should be regarded in the curriculum. A more self-assured and internationally extended curriculum was established with the 1990s' alternative societal imaginary, which came to symbolize a society and citizen ideal with strong technology-deterministic faith (Jasanoff, 2015), confidently relying on individuals' abilities to process and exchange information. IT, or ICT, in combination with decentralized municipal education governance, shaped the subsequent curriculum, which also created further differentiation in terms of individualization and the distribution of infrastructure and education resources. A growing dependence on private-sector initiatives was also established, which resurfaced in the 2010s' major school digitalization reform, presented next.
Adequate competences, coding and platforms in monopolized centralist reform
During the 2010s, the term "digitalization" gained wide curricular impact, often described as more than digitizing information and, again, with a new terminology of ideals of an emerging digital future, a socio-technical imaginary used to regulate the current curriculum. In the Swedish upper secondary school syllabus, Gy11 (NAE, 2011), digitalization is described in terms of students being able to "orient and to act in a complex reality with a large flow of information, increased digitalization, and rapid pace of change," and students should "develop a critical and responsible approach to digital technology, to be able to see opportunities and understand risks and to value information." The more critical approach to information, in comparison to the 1990s curricula, was developed here, but mainly, the citizenship expressed is the individual's adaptability to an already present and changeable future society, different from the 1960s ideal of a more stable, industrially prosperous society constituted by computers and scientifically informed knowledge.
Major digitalization reform came in 2017, represented by the national strategy (MoE, 2017), the overall aim of which was to increase equivalent (not equal) technological accessibility nationally. Modernization and a fundamental change of work methods, teaching, and leadership, as well as improved school cost effectiveness, were expressed. It is also said that with overall digitalization reform, "Sweden continually should be a leader in digitalization and be digitally competent" (MoE, 2017, pp. 3-4). As shown here, digital competence could be used to address both students and the nation-state, framed as actionable elements that form part of a global society and economy imaginary. The new agency, School Inspection, controlling juridical matters and quality (Englund, 2018), suggested that teachers have an "open approach to new technology, rather than any specific technical competences" and that the desired effects of school digitalization included providing "increased student motivation, skills, and independency and support group work" (School Inspection, 2011, pp. 8-9). A strong governance feature of the school digitalization curriculum was how it addressed behavioral attitudes, for both students and teachers, as part of being competent. As the NAE (2019) introduced the revised syllabi for digitalization, it was clear that the term "digital competence" was borrowed from influential actors like the OECD (2005), which had promoted an economy-based understanding of how global workforces and digital markets secure digitally competent citizens and students. A particular attribution to digital competence was introduced in the Swedish curriculum (MoE, 2017), stating it should be "adequate," a term pointing out context adaptability and knowledge relevance.
In line with Wahlström and Sundberg (2017), this suggests that the widespread competence concept from the mid-2000s was multidirectional and "domesticated" as it was transformed over time into particular situations, along with the Swedish context, and in line with globalized discourses.
Programming was formally reintroduced in the digitalization curriculum (MoE, 2017), based on several new knowledge formulations, integrated across all school years and different subject areas, similar to the late 1960s and 1970s reforms. Similar to earlier initiatives, programming knowledge and learning to code were referred to as desirable, but now fully oriented toward individuals' knowledge needs and future careers, as stated by a representative of the NAE in 2018:

Everyone needs basic knowledge in programming to be able to understand how society functions and to then be able to use it in one's work life. That is why schools need to take this content into consideration so that every student will be taught this. There is an idea of progression in the programming curriculum from preschool class to upper secondary school, starting with concrete step-wise instruction to being able to apply this programming for problem-solving in upper secondary school. (NAE, 2018-01-09)

Socio-technical imaginaries are part of how programming is positioned in society, framed as work knowledge and categorized into curricular knowledge of school levels and content areas. Notably, the statement is posted on the NAE's own YouTube channel, a sign of how a public agency today pictures the imagined world. Somewhat similar to the 1970s' curricular ideas of digital technology, programming knowledge is seen as an important aspect of a future emerging society. The late 2010s ambition is, however, wider than the 1970s version and suggests more experimental initiatives; programming is now framed through aspects like its contribution to a digitally-based economy.
In many ways, programming and learning to code as digital competences draw on the language of computer science and the conception of programming as a problem-solving skill. A possible reason for this is how the curriculum process was politically governed. The NAE was appointed to operationalize the curriculum and distributed an important part of the project, Triple Helix: National Coalition for School Digitalization. This was initiated by Swedsoft (2017), a large software interest organization of academics and industry people, where schools, industry, and universities were invited to participate in different workshops. This exemplifies a new form of education reform in which public and private actors, based on different interests and through short and fragmented contributions, had a large impact on major curricular decisions and technology use, similar to other countries (Williamson et al., 2019). In Sweden, these processes replaced earlier, more publicly visible curriculum-making, which had been a slow process, as exemplified by the late 1960s reform. The 2010s' state-led government decreased direct control and transferred it to audits and inspections. These changes in Sweden and elsewhere exemplify a form of recentralization (Englund, 2012) of the curriculum in the 2010s. However, this recentralization is now supported by supra- and transnational organizations such as the OECD and other interest groups (Wahlström & Sundberg, 2017) rather than being limited to the nation-state and public government. In that sense, following Englund (2012), there are impacts and similarities in the curricular focus, as well as differences between the forms of centralism in the 1960s and 2010s reforms. This transformation makes democratic influence on, and criticism of, rapidly upcoming political proposals more difficult.
The competence feature of the curriculum was paired with certain understandings of technology. Since the 1990s, the Swedish school curriculum has commonly described digital technologies as tools for learning and work processes, a means of achieving other goals. The School Regulation (MoE, 2010), for example, suggested "that schools apart from books, also should use other learning tools needed for an up-to-date education." Similar expressions were used in the 2017 national digitalization strategy, but now teachers' digital competence and their ability to choose and use digital tools (also referenced by the Teacher Union chair in the introductory quotation) were more in focus. The tool metaphor used here to describe a preferred technology-based curricular repertoire that teachers should be able to choose from risks neutralizing the differences among digital technologies, software, hardware, and so on, and how they are always inscribed and circumstanced with powers that make it hard to criticize or act upon the curricular repertoire. Even so, it presupposes a choice, preferably based on pedagogical (not solely economic) considerations. This stands in contrast to the establishment of a highly influential infrastructure, with learning platforms or learning management systems as a dominating technology (NAE, 2016). At first, these platform infrastructures had served more local settings and internal school networks, but they then evolved with ideas of standardizing administrative and pedagogical processes in schools, which were built into the platforms used in school systems. Now a new form of global platform infrastructure, digital systems of hardware, software, and administration services in one package, has entered the growing competitive school market, mainly provided by major internet providers like Google or Microsoft.
Major marketization reforms had opened the public education sector to commercial interests, including an independent profit-making school sector (Englund, 2018). The commercial logic of platform capitalism is that the "currency" and volume of data (Srnicek, 2017) generated by everyday school use of digital platform technologies and the infrastructures already in place make the price of the infrastructure affordable or "free" for schools in a costly public education sector (Williamson, 2017). Often, these private interests argue from a philanthropic perspective that they contribute to the public good by, for example, monitoring data activity and student learning. In that sense, they contribute considerably to imagining a digitalized education system as part of a society with a well-performing and well-managed digital economy. In Sweden, the media debate around this has concerned teacher workload costs and efforts and the disciplining assessment culture as the downside of standardized platform use (Swedish Teacher Union, 2019). More seldom questioned is how this "infrastructuralization" and the data currency growing out of platform markets are changing power relations. The global platforms and school data practices, instantiated and operating via private-public networks, can now extract, assess, and compare local, national, or international performances (Hillman, Bergviken Rensfeldt & Ivarsson, 2020), working as part of a new global monopolization and centralized power (Englund, 2018).
Conclusion
Two main results appear from our analysis of similarities and differences in major Swedish school digitalization curriculum reforms over the past 50 years.
One recurrent configuration is the back-to-the-future argument, where curricula address ideal citizenships of emerging future societies and through such argumentative power give fuel to reform, starting with the industrial and welfare prosperity of the 1960s and moving, in a break in the 1990s and 2010s, toward internationally oriented information-society and knowledge-economy versions. The discursive figure is strongly intertwined with inscriptions of and investments in digital technologies through school computers, digital information networks, and learning platforms, often co-constituted by international digitalization discourses and networks. Such socio-technical imaginaries "naturalize ways of thinking about possible worlds" (Jasanoff, 2015, p. 24). The alternative and implicit imaginary produced is a value-laden counternarrative of a Sweden lagging behind others, having uninformed citizens and poor societal and educational conditions and infrastructures for knowledge production and prosperity. Even if digitalization reforms are aligned with such political tensions, interruptions, and resistance, we argue that there are converging visions over the 50-year period of meeting democratic and economic challenges by digital means in education.
The other is that major divergences in digitalization curriculum formations are strongly related to governance transformations that had differentiating consequences for opportunity and equal accessibility around digitalization. These significant political and curricular transformations add to Englund's analyses (2012, 2018) of the democratic role and purpose of education curricula. The pendulum movement between centralized and decentralized powers inherent in digitalization politics has been evident: in the 1960s through strongly state-led government and nation-oriented curricula, followed by more locally distributed municipal power centers and curricular reform, and then re-centralized via more monopolized powers of supra-state and private actors influencing curricular reform. Hence, one general insight is that the digitalization curriculum, often internationally oriented toward the economic aspirations of nation-states, such as competitiveness and human capital, has in recent times been more challenged by and exposed to private interests (Picciano & Spring, 2013, p. 173). In the governance break around 1990, in particular, changes in political-administrative processes affected curricular processes, strongly consistent with (international or supra-state) contexts and a global market reform orientation and "generally favourable to decreasing the role of the state in direct provision of public services" (Verger et al., 2017, p. 328). Social networks and infrastructurally converging models, however, seem to have been internationally multidirectional since the early international exchanges around digitalization. As the 1990 break opened up for private and commercial interests, these also gained influence over school infrastructures. Together, state and privatized initiatives have, over time, created a strong infrastructure base for Swedish schools, a development not without complications, however.
Inequality issues and related democratic challenges of school digitalization have appeared.
Several of Englund's analyses (e.g., 2012, p. 21) exemplify how the Swedish curriculum orientation from 1960 to the mid-1980s was driven by strong education reforms aimed at counterbalancing student inequalities, followed by the radical break around the mid-1980s (Englund, 2018), with free school choice, for-profit schools, and more. Even if equal access to technology has similarly been a constant struggle in digitalization reform since the early 1960s, beginning in the 2010s a new take on equivalent technology access has been used in strategies (e.g., MoE, 2017), allowing for differences and unresolved problems of interoperability, standardization, and accessibility of digital technologies. The main guarantees provided are market offers of digital competence resources and public procurement of global platform infrastructures, where digitalization is considered a powerful instrument for improving, democratizing, and making different aspects of schooling more efficient and streamlined. Implicit in the socio-technical imaginary is how the opportunity and capacity of technology for learning and competence development is addressed toward students, schools, nations, and the future, at the same time as the technologies are inscribed with certain uses and standards that counteract such concerns.
In the actual implementation of school digitalization over time, the detailed regulation of instituting reform seems to decrease, while performativity and assessment regimes increase, especially in the 2010s' reform, which allows for different types of data introspection and exploitation by private and commercial interests. This includes the insertion of commercially provided platform technologies into public education, whose providers can now profit from school-generated data activities and gain direct access to public education sector performativity in different ways. Similarly, the latest decade's fast and decentralized curricular transformations differ from the earlier, more slow-paced education reforms, where curricula and investments in, for example, school computers were commonly publicly discussed via official organizations of unions and employers. This shift in power struggles over curricula makes it important to include new, more ephemeral empirical material from private sector actors and 'actants' like infrastructures and platforms. The education system and the schools may need to develop knowledge on these new forms of curricular change and an approach to safeguard the interests of the public education sector and the values at stake, such as issues of equality, openness, personal integrity, and the utilization of schools' digital work on platforms (Williamson, 2017; Hillman et al., 2020).
Thermal Design of an Integrated Inductor for 45kW Aerospace Starter-Generator
This paper presents a combined electromagnetic and thermal design of an integrated inductor for a 45kW aircraft starter-generator application. The inductor is designed at high current density using the area product approach, followed by finite element analysis, which validates the electromagnetic performance of the integrated inductor. The total power losses at 8, 20, and 32 kRPM are evaluated in order to investigate the thermal design of the combined starter-generator and integrated inductor system, whilst achieving full integration from a thermal management point of view. As both the starter/generator and the inductor share a common cooling configuration, the performance of direct and indirect cooling options is compared. The direct cooling configuration, based on a semi-flooded design, can lead to a temperature reduction of up to 90°C in the most critical components.
I. INTRODUCTION
A compact motor drive system, integrated both physically and functionally, is required in order to achieve higher power density and efficiency. Such power-dense systems are mandatory in aircraft, marine and transportation applications. Passive elements introduced after the drive components have been defined lead to discrete sub-systems [1]. To overcome this, the integration of passives needs to be addressed from both a functional and a physical point of view [2,3]. There are many opportunities in aircraft motor-drive systems to integrate the passive components. Passives integration in such systems offers many advantages, such as increased energy density, reduced cost, weight and space, and easier construction. Thus, in applications where high energy densities are required, an integrative approach appears to be the best solution [4,5]. In the past, the integration of passives has been a focus of the electric motor drive market, resulting in overall compact power system designs. In [2,3], a new integration methodology for the inverter output filter inductor is presented for permanent magnet synchronous motor drive systems. The integrated motor uses the main motor inductance as the filter inductance instead of sizing a separate inductor between the inverter and the motor, which eliminates the associated ohmic losses, mass and occupied space. The authors of [4-7] introduced two new options for passive filter inductors integrated within the common housing of the motor (Fig. 1): the motor-shaped rotational inductor and the motor-shaped rotor-less inductor. Both integrated inductors are mounted axially on the rotor shaft, resulting in a shared cooling system and eliminating the requirement for an external cooling system. The rotor of the rotational inductor rotates at the fundamental frequency of the stator magnetic field, which minimises the rotor iron losses.
On the other hand, the integrated rotor-less inductor has an identical structure but no rotor, which makes it appropriate for DC-link smoothing inductors, grid-side input filters and isolation transformers. In comparison, the rotational inductor can only be employed in high-speed motor drive systems. In [8], the authors presented a design of a motor-shaped rotor-less inductor adopted for a 45kW aircraft starter-generator. This integrated inductor was designed and sized at a current density of 18 A/mm², using the existing cooling system of the starter-generator. Substantial weight and space reductions of 55.4% and 52.7%, respectively, were achieved when compared with a standard EE-core inductor, although this came at the expense of higher ohmic losses; the heat generated in the windings is assumed to be removed by the cooling system available for the starter-generator, designed for aircraft applications.
This paper presents the thermal investigation of the integrated rotor-less inductor proposed in [8], adopted for a 45kW aerospace starter-generator. In the next section, the design procedure of the inductor using the area product approach is discussed. Section III provides the details of the starter-generator, which requires a series inductor to be added to its main windings in order to reduce the current and torque ripple. In Section IV, the sizing of the integrated inductor is discussed, followed by the thermal analysis of the starter-generator combined with the integrated inductor in Section V.
II. AREA PRODUCT APPROACH
The voltage induced across the terminals of an inductor can be obtained by referring to Fig. 2, assuming the supply current and the voltage across the inductor are sinusoidal. The induced voltage is given by

E = K_w f_supply N_ph φ_pk (1)
  = K_w f_supply N_ph B_pk A_core (2)

where K_w, A_core, φ_pk, B_pk, f_supply and N_ph are the waveform factor, iron core cross-section area, peak magnetic flux, peak flux density, supply frequency and turns per phase, respectively. The turns per phase of the inductor, for a given conductor cross-section a_cond and window area W_a, can be determined by

N_ph = K_F W_a / a_cond (3)

where K_F is the window utilisation factor. In general, for inductors, the window utilisation factor typically varies from 0.45 to 0.55 in order to provide enough space for bobbins, slot liners and wire insulation [8]. Substituting (3) into (2) gives

E = K_w K_F f_supply B_pk A_core W_a / a_cond (4)

Multiplying both sides by I_rms, and noting that the conductor cross-section is a_cond = I_rms / J_rms,

E I_rms = K_w K_F f_supply B_pk J_rms A_core W_a (5)

Solving for the area product,

A_p = A_core W_a = E I_rms / (K_w K_F f_supply B_pk J_rms) (6)

where J_rms is the RMS current density of the copper conductor, which is limited by the thermal loading of the inductor windings. For three-phase inductors the area product becomes

A_p = 3 E I_rms / (K_w K_F f_supply B_pk J_rms) (7)

From (6) it can be observed that parameters such as the window utilisation factor, magnetic flux density and conductor current density influence the inductor's area product. The left-hand side contains the physical parameters of the inductor, whereas the right-hand side contains the parameters that depend on its electrical and magnetic loading. The core area reflects the flux-carrying capacity, whereas the window area defines the inductor's current conduction capacity [4-9]. It is important to note that the area product does not depend on the fundamental supply frequency: since the induced voltage E itself scales with frequency, the frequency dependence cancels. The iron losses, however, depend on the square of the frequency. Hence, when designing an inductor for high-frequency (kHz to MHz) applications, the flux density inside the iron core must be adjusted to a lower value than for an inductor designed for low-frequency (Hz to kHz) applications [8,9].
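As a hedged numeric illustration of the area-product sizing described above, the sketch below evaluates the single- and three-phase area products for a set of plausible inputs. Only the 18 A/mm² current density is taken from the text; the voltage, current, frequency and flux density values are assumptions chosen for illustration, not the paper's Table II data.

```python
# Numeric sketch of the area-product sizing in Section II.
# All inputs marked "assumed" are illustrative; J = 18 A/mm^2 is from the text.
V_rms = 28.0    # assumed RMS voltage across the inductor [V]
I_rms = 150.0   # assumed RMS phase current [A]
K_w = 4.44      # waveform factor for sinusoidal excitation
K_F = 0.5       # window utilisation factor (text: 0.45-0.55)
B_pk = 2.25     # peak flux density [T] (text: 2.2-2.3 T range)
J_rms = 18e6    # RMS current density [A/m^2], i.e. 18 A/mm^2 (text)
f = 3200.0      # assumed supply frequency [Hz]

# Single-phase area product: Ap = E*I / (Kw*KF*f*Bpk*Jrms)
Ap = (V_rms * I_rms) / (K_w * K_F * f * B_pk * J_rms)
# Three-phase version simply scales the VA by 3:
Ap_3ph = 3 * Ap
print(f"Area product (1-ph): {Ap * 1e12:.0f} mm^4, (3-ph): {Ap_3ph * 1e12:.0f} mm^4")
```

Note how the supply frequency f appears in the denominator but the induced voltage V_rms itself grows with f, so the area product is effectively frequency-independent, as stated in the text.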
III. 45KW AEROSPACE STARTER-GENERATOR
The circumferential cross-section and the torque-speed curve of the starter-generator used in the aircraft application are shown in Fig. 3 and Fig. 4 respectively; the key parameters are listed in Table I. The starter-generator works as a motor during engine start and is required to produce a constant power (or torque) from standstill to the engine firing speed of 8 kRPM. Between 8 kRPM (ω_start) and 20 kRPM (ω_min), the starter-generator feeds constant power to accelerate the engine. Once the engine reaches its steady-state region, the starter-generator switches to generation mode between 20 kRPM (ω_min) and 32 kRPM (ω_max). In generation mode, the starter-generator delivers an output power of 45kW up to the maximum speed of 32 kRPM (ω_max). Since the phase inductance of the starter-generator is very low (99 μH), an additional inductance is needed to double the main inductance of the starter-generator. This increase in inductance halves the magnitude of the switching current component through the starter-generator. Furthermore, doubling the motor's main inductance allows the control system to be designed at a lower switching frequency [8]. A 6-slot, 2-pole integrated rotor-less inductor with a double-layer concentrated winding (DL CNW) was selected and designed at a current density of 18 A/mm² (the same as the current density of the starter-generator). The main reason for choosing a DL CNW is to restrict the overall volume of the end-windings, which was a stringent requirement of the starter-generator system. The integrated inductor is sized using the area product approach discussed in Section II, by specifying the required filter inductance, maximum magnetic flux density, window utilisation factor, conductor current density and the type of stator core material, the details of which are shown in Table II.
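The claim that doubling the machine inductance halves the switching current component can be illustrated with a first-order ripple estimate. The 99 μH phase inductance is from the text; the DC-link voltage, switching frequency and duty cycle below are assumptions, and the ripple expression is the generic PWM inductor-ripple formula, not one given in the paper.

```python
# Illustrative sketch: peak-to-peak ripple of a PWM-fed inductance scales
# as dI ~ V * D * (1 - D) / (L * f_sw), so doubling L halves the ripple.
V_dc = 270.0   # assumed DC-link voltage [V]
f_sw = 20e3    # assumed switching frequency [Hz]
D = 0.5        # duty cycle at worst-case ripple

def ripple(L):
    """Peak-to-peak current ripple [A] for inductance L [H]."""
    return V_dc * D * (1 - D) / (L * f_sw)

L_machine = 99e-6        # starter-generator phase inductance (from text)
L_total = 2 * L_machine  # with the series integrated inductor added
r1, r2 = ripple(L_machine), ripple(L_total)
print(f"ripple without filter: {r1:.1f} A, with filter: {r2:.1f} A")
```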
While sizing the inductor, the following design ratios were assumed: a window-to-core area ratio (Wa/Acore) of 0.7, a window length-to-height ratio (G/F) of 3 and a stack length-to-limb length ratio (Lstack/Llimb) of 0.77 [8].
The authors of [9,10] recommend assuming a low window-to-core area ratio (Wa/Acore) to keep the fringing effect to a minimum. The window length-to-height ratio is selected based on information provided by the manufacturer in [9], whereas the stack-to-limb ratio is chosen using (7) with the parameters shown in Table II. The window area and the core area are calculated from the area product and the assumed window-to-core area ratio. Once the window and core areas are evaluated, the core length ratio (L_stack/L_limb) and window aspect ratio (G/F) are then used to fix the tooth width, stack length, window width and window height. The back iron width and slot opening height are adjusted until the required flux density is achieved in the core (in the range of 2.2 T to 2.3 T). The number of turns per phase is computed based on the specified voltage across the inductor [8].
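Given an area product and the window-to-core area ratio of 0.7 stated above, the core and window areas follow directly, since A_p = A_core · W_a and W_a = ratio · A_core. The area-product value used below is an assumption for illustration, not a value from the paper.

```python
import math

# Sketch of backing out core and window areas from the area product and
# the design ratio Wa/Acore = 0.7 stated in the text.
Ap = 1.5e-8   # assumed area product [m^4], e.g. from eq. (7)
ratio = 0.7   # window-to-core area ratio (from text)

# Ap = Acore * Wa and Wa = ratio * Acore  =>  Acore = sqrt(Ap / ratio)
A_core = math.sqrt(Ap / ratio)
W_a = ratio * A_core
print(f"A_core = {A_core * 1e6:.0f} mm^2, W_a = {W_a * 1e6:.0f} mm^2")
```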
Once the inductor is designed, the FE model is built using the commercially available software MagNet by Infolytica. Fig. 5(a) and Fig. 5(b) show the physical layout and the flux distribution of the integrated rotor-less inductor, respectively. An inductance of 90 μH is obtained from the FEA model, as opposed to the required 99 μH. In order to achieve the filter inductance of 99 μH, the inductor's stack length is adjusted from 20 mm to 22.5 mm. The end-winding overhang is estimated using the method explained in [6,7], giving a total inductor axial length of 46.5 mm [8].
The ohmic copper losses evaluated from FE at all three speeds are listed in Table III for both the starter-generator and the integrated inductor. The worst case in terms of total power losses is at 8,000 RPM, where both the starter-generator and the inductor dissipate the maximum amount of heat. Therefore, the speed of 8,000 RPM was chosen for the thermal investigation of the combined starter-generator and integrated inductor system.
V. THERMAL DESIGN
In order to achieve a fully integrated solution, it was decided to also have an integrated cooling configuration. Two cooling strategies were considered, one indirect and one direct. The indirect option implies the use of a standard helicoidal water jacket within the outer housing of both the starter/generator and the inductor; the heat is expected to be primarily dissipated by conduction through the housing, whilst a smaller amount is dissipated in the end region due to the turbulent flow induced by the rotation itself. Figure 6 shows this configuration.
Fig. 6: Schematic of Indirect Integrated Cooling Design
The channel within the housing was assumed to be 10x3 mm; on that basis, a heat transfer coefficient of 1,400 W/m²K was estimated using the empirical correlation (10) [11]. This value could then be used as a boundary condition in the models developed.
Nu = 0.0059 Re^0.92 Pr^0.4 (10)

The convective heat transfer coefficients used on the internal surfaces of the machines took into account the turbulence generated by the rotation; to do so, the following correlation (11) was used [12].
h = k1 (1 + k2 v^k3) (11)

where v is the reference velocity, taken as the average velocity of the rotating surfaces, and k1, k2, k3 are curve-fit coefficients. In particular, k1 accounts for the natural convection component, whilst k2 and k3 account for the convection due to rotation. The thermal behaviour of both components was predicted numerically by means of ANSYS Fluent. 3D simulations were carried out and, in order to reduce the computational cost, only a limited angular sector was considered. Furthermore, the two stages were analysed separately using appropriate boundary conditions. The power loss of the components was implemented as a volumetric heat generation boundary condition, with values based on the electromagnetic predictions described above.
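As a rough numeric illustration of the channel heat-transfer estimate based on correlation (10), the sketch below computes a heat transfer coefficient for the 10x3 mm channel stated in the text. The coolant velocity and water properties are assumptions, so the result only illustrates the order of magnitude of the ~1,400 W/m²K quoted.

```python
# Sketch of the channel HTC estimate using Nu = 0.0059 * Re^0.92 * Pr^0.4.
# Channel size is from the text; velocity and fluid properties are assumed.
w, hgt = 0.010, 0.003                      # channel cross-section [m]
d_h = 4 * (w * hgt) / (2 * (w + hgt))      # hydraulic diameter [m]
v = 0.25                                   # assumed mean water velocity [m/s]
rho, mu, k, Pr = 992.0, 6.5e-4, 0.63, 4.3  # water near 40 C (assumed)

Re = rho * v * d_h / mu                    # Reynolds number
Nu = 0.0059 * Re**0.92 * Pr**0.4           # empirical correlation (10)
h = Nu * k / d_h                           # heat transfer coefficient [W/m^2 K]
print(f"Re = {Re:.0f}, Nu = {Nu:.1f}, h = {h:.0f} W/m^2K")
```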
A worst-case scenario was considered for the thermal analyses, namely the steady-state condition at the least efficient operating point, identified to be at 8,000 RPM. Due to the negligible loss in the rotor, the rotating components were not included in the model. Table IV lists the power losses implemented. The anisotropic nature of the windings was accounted for in the model to achieve more realistic temperature gradients within the coil bundles; this was done by assigning different values of thermal conductivity along each spatial direction. Those values can be determined by applying the cuboidal model, shown below (12), as described in [13].
where the equivalent thermal conductivity k_eq is a function of the volume and thermal conductivity of the materials inside the slot, such as copper and insulation. Figures 7 and 8 show the temperature distribution within both the starter/generator and the inductor, assuming a jacket water inlet temperature of 40°C. As can be noticed, the temperature levels exceed the allowable limits, making the water jacket not a viable option. For this reason, a more intensive cooling option was considered instead.
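The direction-dependent winding conductivity mentioned above can be illustrated with a simple series/parallel rule-of-mixtures estimate. This is a rough sketch, not the cuboidal model of [13]; the fill factor and material conductivities below are assumptions.

```python
# Sketch of anisotropic equivalent winding conductivity (rule of mixtures,
# not the paper's cuboidal model). All values below are assumed.
k_cu, k_ins = 385.0, 0.2   # copper and insulation/impregnation [W/m K]
ff = 0.5                   # assumed slot fill factor

# Along the wires (axial): copper and insulation conduct in parallel.
k_axial = ff * k_cu + (1 - ff) * k_ins
# Across the bundle (radial/tangential): closer to a series combination.
k_cross = 1.0 / (ff / k_cu + (1 - ff) / k_ins)
print(f"k_axial ~ {k_axial:.0f} W/mK, k_cross ~ {k_cross:.2f} W/mK")
```

The three-orders-of-magnitude difference between the two directions is why a single isotropic value would badly misrepresent the temperature gradients inside the coil bundle.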
The direct cooling option adopted consists of a semi-flooded design in which the coolant (oil) is directly in contact with all the stationary components. Non-electrically conductive sleeves are used to contain the oil and to physically separate the stator region from the rotating components; this helps keep the friction loss, generated by viscous effects, as low as possible.
Such a cooling option was already implemented in the starter-generator and its performance fully validated, as documented in [13]. Figure 9 shows a schematic of the configuration described.
Fig.9: Schematic of Direct Integrated Cooling Design
The coolant flows through the first stage (starter/generator) via axial ducts located along the inner and outer diameter of the lamination, as shown in Fig. 4; the oil then enters the inductor chamber and flows through 1.5 mm wide in-slot ducts, created in between the concentrated windings, before leaving the system. Due to the gap created inside the slot, the copper losses had to be updated to take into account the lower fill factor; this increased the power loss to 2,135 W. Figures 10 and 11 show the two designs of the starter/generator and the inductor that were analysed.
In order to enhance the conductive heat transfer within the inductor, a sheet of ceramic material was located in the tooth openings; aluminium nitride with a thermal conductivity of 100 W/mK was assumed. The thermal analyses carried out did not include any fluid domain; therefore, appropriate boundary conditions based on previous work [14] were implemented to account for the convective heat transfer. The heat transfer coefficients used are listed in Table V. The convective heat transfer in the axial ducts was additionally verified by analytical predictions based on an empirical correlation for the Nusselt number, such as (13) [14]. This correlation takes into account the fact that the fluid flow inside the duct is not fully developed over most of the length of the duct.
To determine the length of the entrance region within the duct the following expression (15) can be used.
where dh is the duct hydraulic diameter. The inlet temperature of the coolant at the first stage (S/G) was set to 40°C, whilst 46°C was used for the inductor, taking into account the temperature rise due to heat absorption. It was also assumed that all the losses were dissipated into the oil, with no external convection included, which provides a more conservative approach. Figures 12 and 13 show the temperature contour plots of the starter/generator and the inductor respectively, taken at the middle section, showing the highest temperatures achieved.
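The entrance-region argument can be illustrated with the standard laminar entrance-length estimates, L_h ≈ 0.05·Re·d_h and L_t ≈ 0.05·Re·Pr·d_h (not necessarily the paper's expression (15)). The 1.5 mm duct width is from the text; the oil properties, velocity and duct depth are assumptions.

```python
# Sketch: is the flow in a 1.5 mm in-slot oil duct thermally developed?
# Standard laminar entrance-length estimates; oil data and geometry assumed
# except the 1.5 mm duct width, which is from the text.
rho, mu, Pr = 870.0, 0.03, 400.0   # assumed cooling oil near 46 C
v = 0.5                            # assumed mean duct velocity [m/s]
w, s = 1.5e-3, 20e-3               # duct width (text) and assumed depth [m]

d_h = 2 * w * s / (w + s)          # hydraulic diameter of rectangular duct
Re = rho * v * d_h / mu            # Reynolds number (laminar here)
L_h = 0.05 * Re * d_h              # hydrodynamic entrance length [m]
L_t = L_h * Pr                     # thermal entrance length [m]
print(f"Re = {Re:.0f}, hydrodynamic entry {L_h * 1e3:.1f} mm, thermal entry {L_t:.2f} m")
```

With a high-Prandtl-number oil, the thermal entrance length far exceeds the ~22.5 mm stack, which is why a developing-flow Nusselt correlation is needed rather than a fully developed one.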
As the results show, the proposed shared semi-flooded configuration maintains the operating temperatures well below the maximum allowable limits, leaving a significant safety margin in case of over-power or short-circuit failure conditions. Table VI summarises the temperatures predicted using the two cooling approaches discussed. As can be noticed, a significant temperature reduction is achieved when using the direct cooling option; this can have a considerable impact on the achievable power density, on the efficiency and on the lifetime of the components. Last but not least, a further reduction of weight and volume can also be achieved, which is essential for aerospace applications.
VI. CONCLUSION
This paper presented a high-current-density design of a passive filter inductor integrated within the common housing of a starter-generator. A series inductor was required to smooth out the switching ripple component of the starter-generator current waveforms. The inductor was sized using the area product approach, followed by finite element analysis, which validated the electromagnetic performance of the integrated inductor. The total power losses at 8, 20, and 32 kRPM were evaluated to realise the thermal design of the combined starter-generator and integrated inductor system, while achieving full integration from a thermal management point of view. A shared cooling configuration can guarantee safe operating conditions even at the most demanding operating points. The proposed design can significantly reduce the overall volume and weight of the system due to the lower number of hydraulic connections required. Low operating temperatures can also help to enhance the overall efficiency of the system.
Laparoscopic adjustable gastric banding, the past, the present and the future
The laparoscopic implantation of an adjustable gastric banding (LAGB) was first described in 1993. Thereafter, the LAGB underwent many modifications, revisions and refinements to become the procedure as it is currently defined. It quickly became one of the most common bariatric surgical operations in the world during the first decade of the 2000s but, over the last few years, it has fallen to the fourth most common procedure. A series of reasons, some clearer than others, led to this decline in LAGB use. Knowledge of the history of the LAGB, of its evolution over the years and of its limitations is key to recognizing the reasons behind its decline. The adjustability and the complete reversibility of the LAGB make this surgical procedure a "bridge treatment" toward the specific goal of eradicating obesity.
Introduction
The laparoscopic implantation of an adjustable gastric banding (LAGB) was first described by Belachew et al. in 1993 (1). Thereafter, the LAGB underwent many modifications, revisions and refinements to become the procedure as it is currently defined. These changes affected both the technology and the surgical technique, but above all the pre- and post-operative management.
Technological improvement
Lubomyr Kuzmak is considered a pioneer of bariatric surgery and the inventor of the adjustable gastric banding. His great merit is that he was the first to recognize the potential of adjustability in gastric banding. In 1983, Kuzmak designed an adjustable band, introducing the advantage of an inflatable section of the silicone band connected by a tube to a reservoir. In June 1986, he obtained the US patent for the inflatable device (2) and performed his first operation the same month. Subsequently, he described his results, showing the superiority of the adjustable gastric band compared with the non-adjustable one he had been using since 1983.
The laparoscopic gastric banding was first described in the literature ten years later, in 1993, when Broadbent et al. in Australia and Catona et al. in Italy reported their initial experience. On September 1st, 1993, at the Centre Hospitalier Hutois, Belgium, the first laparoscopic adjustable silicone gastric band was implanted by Belachew et al. (1). Due to the success of the procedure, industry started the mass production and marketing of adjustable silicone bands in different sizes, ranging from 9.5 to 13 cm in length. Following the initial experience, only two sizes (9.75 and 10 cm) were maintained, as they could cover all requirements.
In July 1994, the Lap-Band® System became available on the market for use by "trained surgeons". In fact, at first BioEnterics, the manufacturer of the Lap-Band® (BioEnterics, Carpinteria, CA, USA), did not sell the device to surgeons who had not participated in a training program at approved centers. In June 1995, clinical trials in selected US centers were approved by the Food and Drug Administration (FDA). Finally, in 2001, the US FDA approved the clinical application of the Lap-Band® system for the treatment of morbid obesity (3).
This Lap-Band® system consisted of a 13 cm silicone ring with an inner circumference of 9.75 or 10 cm. The connection tube was 50 cm long, with a fill volume of 4 mL. The subsequent version, the Lap-Band VG® (Vanguard), approved in 2003, was larger than the previous system, with an inner circumference of 11 cm and a fill volume of 10 mL. Furthermore, this modified version was improved with soft, pre-curved individual sections of the inflatable balloon in order to reduce the risk of leakage (Omniform technology).
The current device is the result of a final refinement in 2006, when the Lap-Band AP® (Advanced Platform) version was introduced. It was designed in two different sizes: standard (APS), with a fill volume of 10 mL, and large (APL), with a fill volume of 14 mL. This variety offered a wide range of adjustability to accommodate all patients' anatomical differences. Moreover, the Lap-Band AP® version included other modifications, such as an easy unlocking mechanism and a 360-degree Omniform inflating balloon component, which improved the reopening procedure and the pressure distribution, respectively (4).
In parallel, in the United States, the Realize band, manufactured by Ethicon Endosurgery Inc. (Cincinnati, OH, USA), was approved by the FDA in 2007 (5). The updated version, called the "Realize-C band", was approved in 2009. The latter stressed the importance of low inflation pressure, with an 11 mL fill volume. It is also equipped with an unlocking mechanism as well as a pre-lock position to ease gastric band placement.
Evolution of the surgical technique
The original technique was described by Belachew and by Favretti (1,6). The key points of the LAGB technique were: (I) a pouch volume of 25 cc; (II) the perigastric technique; (III) posterior dissection through the lesser sac, leaving the posterior gastric wall free to move up and down; and (IV) two gastro-gastric stitches on the fundus.
Knowledge of the weak points and complications of the LAGB led to the progressive evolution of the surgical technique. As a consequence, the above key points were modified as follows. The volume of the pouch, initially set at 25 cc, has been gradually reduced, and today we talk about a "virtual pouch" (<15 cc); moreover, the band-to-esophagogastric junction distance, initially set at 3 cm, has been shortened to 1 cm. The "pars flaccida" positioning technique has become the most widespread technique because of (I) the better handling of the instruments and the band; (II) the lower complexity of the dissection maneuvers; and (III) the lower complication rate. The posterior dissection above the bursa omentalis is minimized to reduce the risk of posterior slippage; some authors even suggest avoiding any stitches for a better result. Furthermore, other improvements have been proposed. The gastrostenometer, which was employed in the past to calibrate the pressure at the band level, is no longer used. To prevent post-operative edema and acute obstructive complications, the connecting tube is cut outside the abdomen, so that the natural spilling of the liquid guarantees a self-adjusting mechanism, unanimously identified as "point zero". Some authors support port positioning using mesh fixation on the port, which can have several advantages: avoiding a deep incision during placement, reducing postoperative pain, and facilitating port removal (7).
Finally, shortening of the connection tube was introduced to reduce the risk of complications related to the tube path among ileal loops, as well as diaphragmatic irritation and shoulder tip pain.
The close relation between bariatric surgery, hiatal hernia and reflux disease, frequently present at the same time in obese patients, dictated specific rules and indications to be followed.
All bariatric surgical procedures can worsen or cause reflux and reflux disease. As a result, the identification of patients affected by such comorbidities is important to correctly select the right bariatric surgical procedure and possibly resolve any concomitant hiatal dysfunction.
As for the LAGB, in order to treat a hiatal hernia when present, the diaphragmatic esophageal hiatus is routinely dissected, the hiatal hernia reduced, and the crura approximated.
In our center, a routine gastroscopy is performed prior to the LAGB in order to investigate the presence of direct or indirect signs of hiatal hernia and to exclude other diseases.
To confirm the diagnosis of a significant hiatal hernia, two laparoscopic parameters are also evaluated: a hernia sac deeper than 2 cm, measured by a clinch, and/or the easy passage of the 20 cc inflated balloon of the calibrating tube (gently withdrawn by the anesthesiologist) through the diaphragmatic hiatus.
Pre and post-operative management
The LAGB was born as a surgical procedure supported by a surgical team. The importance of dedicated surgical skills, as well as the presence of an interdisciplinary team (IDT) to assist the patient before and after the surgical procedure, was rapidly recognized. In 2002, Favretti et al. (8) reported on the importance of "healthy food choices", activity and exercise, and behavioral changes. That study emphasized the relevance of monitoring comorbidities and metabolic/nutritional changes during the scheduled post-operative checks, and the power of communication and collaboration with the patient's primary care provider in supporting the patient's weight control.
Any bariatric surgery is rationally performed within a multidisciplinary team, which is the key element for the success of the treatment itself. The IDT professionals are predominantly psychologists, dieticians and motor rehabilitation professionals; the bariatric surgical team is trained and skilled in laparoscopic and endoscopic surgery, and is integrated with dedicated anesthesiologists and aesthetic/reconstructive surgeons.
In the pre-operative period, the IDT prepares the patient in order to cut down risks and failures: organic and behavioral pathologies are highlighted and treated, and, at the same time, the patient is motivated to lose weight. Furthermore, the IDT clearly describes to the patients the complexity of the pathway required to reach the desired weight loss, which includes the surgical treatment but also changes in diet and lifestyle behaviors. Finally, the IDT explains the meaning of the informed consent regarding the procedures (9)(10)(11).
Postoperative follow-up gained more and more importance over the years and the timing of scheduled exams, although variable from one center to another, has been established. Follow-up adjustments of the band, initially based only on band filling, are nowadays performed according to numerous variables, such as the radiopaque bolus swallow appearance, the clinical interdisciplinary evaluation of weight loss, the appetite or symptoms, and the patient motivation (12,13).
Results of LAGB
Obesity is a chronic behavioral disease which should be treated by ensuring a long-term benefit. As a result, the efficacy of all bariatric procedures should be assessed over a long-term follow-up period. Accordingly, only studies with a minimum follow-up of 3 years were included in the analysis. Every surgical procedure, including the LAGB and its peri-operative management, requires a variable learning curve. To eliminate the learning phase and to obtain more homogeneous results in our analysis, only series with more than 250 cases were taken into account.
Tables 1 and 2 summarize the results of the included studies on weight loss (estimated at 3- and 5-year follow-up) and on erosion and dilatation/herniation of the gastric pouch, which are considered the main long-term complications.
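The weight-loss outcomes in these series are usually expressed as the percentage of excess weight loss (%EWL), the standard bariatric metric relating the weight actually lost to the excess over ideal body weight. A minimal illustrative sketch (the weights below are hypothetical, not taken from the included studies):

```python
def percent_ewl(initial_kg: float, current_kg: float, ideal_kg: float) -> float:
    """Percentage of excess weight loss: the fraction of the excess
    weight (above ideal body weight) that the patient has lost."""
    return (initial_kg - current_kg) / (initial_kg - ideal_kg) * 100.0

# A patient starting at 130 kg with an ideal weight of 70 kg who now
# weighs 100 kg has lost half of the 60 kg excess:
print(percent_ewl(130, 100, 70))  # 50.0
```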
Discussion
The LAGB, the first operation proposed for a laparoscopic approach to the obese patient, spread worldwide thanks to its technical appeal and encouraging results. Since its first placement in 1993, the LAGB has undergone a series of technical, technological and management evolutions (27) (Table 3).
Nowadays, LAGB is a standardized operation and a reproducible technique, which makes it possible to evaluate, compare and improve results in long-term follow-up. Technological improvements, especially the Omniform technology and the different gastric outlet sizes, led to a reduction in some complications and causes of re-operation. Although some authors documented this decrease in complications, it remains difficult to prove because of the coexistence of several biases, such as the different experience and techniques across studies. As for the technique, the reduction of the initial volume of the gastric pouch (17) was one of the first technical refinements, and it greatly reduced the incidence of gastric pouch enlargement/dilation. The impact of the pars flaccida technique on results is still debated in the literature: some authors reported a reduction in the incidence of erosion [from 8.5% to 2.2% (12)], slippage and dilatation of the gastric pouch (24), whereas others did not obtain any improvement (22). However, both techniques, the "pars flaccida" and the "perigastric", maintain an important complementary role in specific anatomic circumstances. Another strong point of the surgical technique evolution is the peri-operative diagnosis of GERD and hiatal hernia and its surgical treatment at the same time as the LAGB. Worsening or de novo GERD is a well-known problem every bariatric procedure has to deal with. The LAGB's adjustability allows reflux to be relieved: the gradual modification of the pouch outlet is performed in parallel with the patient's behavioral improvements in eating attitude. Non-compliant patients can subsequently benefit from different solutions, thanks to the complete and easy reversibility of the LAGB. The features of this less invasive, adjustable and completely reversible technique require a well-structured interdisciplinary team.
The pre-operative program is fundamental to select, educate, inform and motivate patients before they undergo the bariatric surgical operation (9,11). Postoperative management in bariatric surgery is always of primary importance, and in the LAGB, which is characterized by adjustability and reversibility, the follow-up is an integral part of the treatment to obtain the best long-term result.
The LAGB is chosen by surgeons to start the learning curve on bariatric procedures. The related, puzzling results contributed to jeopardizing its standing and evolution. This procedure, when properly performed by skilled surgeons and supported by an IDT, shows effective results, with percentages of excess weight loss ranging from 40% to 65% at 3-year and from 35% to 68% at 5-year follow-up. Furthermore, papers with longer follow-up (more than 15 years) confirm these results, with percentages of excess weight loss ranging from 47.9% to 52.6% (13,25).
The long-term results obtained in high-volume centers (13,25), both in terms of efficacy and complications, should lead to think, as said by O'Brien, that "the band must not be abandoned" (28).
Future goals of the LAGB are strictly related to its mechanism of action: to reinforce the essential interdisciplinary bariatric treatment and to empower the surgical operation through adjustability. Adjustability can counterbalance the failures mainly due to the long-term anatomic and behavioral modifications of any operation. An adjustable band allows a partition of the gastric cavity in order to modify the functional pouch outlet, to improve restriction or to prepare for revision surgery, in particular the adjustable by-pass (29). Moreover, the current limits related to erosion could be lowered by new devices and new connections to the port.
The present and potential future advantages of the LAGB are counterbalanced by the organizational difficulty in scheduling and performing follow-up for all the patients enrolled. As a consequence, to overcome these post-surgical management limits, as well as to respond to a growing treatment demand, many centers propose non-adjustable surgical approaches, which do not require a close follow-up to reach the targeted results.
Our institution provides a section dedicated to obesity, which facilitates patient recruitment, cooperation among professionals, and economic acceptance. In our opinion, these premises justify investing in the LAGB in the future; in fact, if correctly managed in the peri-operative period, it is an effective technique to achieve the targeted weight loss and food re-education.
In conclusion, knowledge of the history of the LAGB, of its evolution over the years, and of its results and limits, strictly related to a well-structured IDT, can be the key to recognizing the reasons that are leading to its decline in favor of "easy", technological surgeries for a growing treatment demand. The adjustability and the absolute reversibility of the LAGB make this surgical procedure a "bridge treatment" toward the specific goal of removing obesity. In fact, the LAGB, like the other bariatric surgeries, does not treat the cause of obesity, which has a complex pathogenesis, and the medium- to long-term surgical results highlight the distance between a feasible and a truly effective procedure. Thus, the proper treatment of obesity can be reached only by restoring a healthy lifestyle and by alimentary re-education. Efforts will be directed to further technological developments, mainly aimed at reducing the impact of complications. A slimmer band, alternative adjustment methods, the possibility of simple daily adjustments, and new materials, including biological ones, are just a few ideas to work with.
In addition, the long-term results (15-20 years) of the other bariatric procedures, which are not yet available given their more recent history, will be important for drawing definitive conclusions and making a more objective comparison.
Footnote
Provenance and Peer Review: This article was commissioned by the Guest Editor (Muhammed Ashraf Memon) for the focused issue "Bariatric Surgery" published in Annals of Translational Medicine. The article was sent for external peer review organized by the Guest Editor and the editorial office.
Conflicts of Interest: The focused issue "Bariatric Surgery" was commissioned by the editorial office without any funding or sponsorship. The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International
Neurocognitive Values of Evolvulus alsinoides and Centella asiatica on Scopolamine Induced Amnesia in Mice
Aim of the study: To study the comparative neuroprotective activity of ethanolic extracts of E. alsinoides and C. asiatica. Method: The ethanolic extracts of E. alsinoides and C. asiatica were administered orally every day along with scopolamine for a period of 14 days, after which behavioral tests, i.e., the elevated plus maze and Morris water maze tests, were performed to assess learning and memory. The animals were divided into nine different groups. In-vivo antioxidant enzyme activities, inflammatory marker levels, and acetylcholinesterase (AChE) activity in the brains of the mice were also measured at the end of the study. Results: The study demonstrates that scopolamine induction resulted in learning and memory deficits which were partially and significantly ameliorated by the ethanolic extracts of E. alsinoides and C. asiatica. The extracts also counteracted scopolamine-induced decreases in acetylcholine levels, increases in AChE activity, and decreases in the activities of the antioxidant enzymes. Conclusion: The study demonstrates the ability of the ethanolic extracts of E. alsinoides and C. asiatica to reverse scopolamine-induced learning and memory deficits in mice, which may at least partially be explained by the extracts' reversal of scopolamine-induced changes in brain antioxidant enzyme activities, inflammatory marker levels, and AChE activity.
Introduction
Alzheimer's disease is a progressive neurodegenerative disease characterized by the loss of learning and memory abilities with aging. This impairment of memory is correlated with the loss of cholinergic neurons. Scopolamine-induced amnesic animal models are used to screen for drugs that potentially have anti-dementia activities through stimulation of the cholinergic system, making them candidates for the treatment of Alzheimer's disease. Ayurveda, the Indian system of medicine, describes a group of medicinal plants under the category of 'Medhya Rasayana', which possess memory-enhancing effects and facilitate learning acquisition. Medicinal plants are rich sources of important metabolites, which are potential sources of antioxidant, antimicrobial, anti-inflammatory, and anticancer activities [1][2][3]. The use of herbal medicine in treating infectious diseases has been practiced for thousands of years and will continue to provide mankind with new remedies [4].
Evolvulus alsinoides L., an important medicinal plant, is traditionally employed for different ailments in India. It grows in open, grassy places almost throughout India and in the region from India to west Cameroon, and is widely dispersed elsewhere in tropical Africa and worldwide. Some vernacular names of the plant in India are vishnukranta and vishnugandhiy (Sanskrit) and shankapushpi (Hindi). Evolvulus alsinoides L. is used in Ayurveda as a brain tonic in the treatment of neurodegenerative diseases, asthma and amnesia [5].
Centella asiatica is a very important medicinal herb, also known as Mandukparni, which has been used as a medicine in the Ayurvedic tradition of India for thousands of years and is mentioned in many classical texts of Ayurveda. Centella asiatica (CA) is a rejuvenative nervine recommended for nervous disorders, epilepsy, senility, premature aging, and a number of other medical conditions. Extracts of Centella asiatica, especially from the roots and leaves, show high antioxidant activity, comparable to that of tocopherol, a natural antioxidant, and have been reported to play a significant role in wound healing [6]. Centella asiatica extract contains four principal bioactive compounds: asiatic acid, madecassic acid, asiaticoside and madecassoside [7], of which asiaticoside was identified as the main active constituent responsible for wound healing.
We are working on the development of anti-dementia natural drug using an in vivo screening of herbal materials due to their safety and cost effectiveness [8].
Plant material
Whole plant material of Centella asiatica and Evolvulus alsinoides were collected from village Ramnapur, Varanasi, Uttar Pradesh, India in October 2015 and authentication was done by Department of Botany, Banaras Hindu University, India and also herbarium of Centella asiatica (voucher specimen no. Apia/02/2015) and Evolvulus alsinoides (voucher specimen no. Convolvul./03/2015) of plant was deposited in the Department of Botany, Banaras Hindu University, India.
Preparation of extracts
The extraction of both plants was performed by the Soxhlet method in ethanol at 72-82°C for 72 hours. Soxhlet extraction has been widely used for extracting valuable bioactive compounds from various natural sources and serves as a benchmark for comparing new extraction alternatives.
Preliminary phytochemical analysis
For preliminary phytochemical analysis, the freshly prepared crude ethanolic extracts of the whole plant were tested for the presence or absence of phyto-constituents such as alkaloids, tannins, flavonoids, saponins by using standard phytochemical procedures [9].
Animals
The experiments used adult female Swiss albino mice, 7-8 weeks old and weighing 25-30 g, obtained from the animal house of the Institute of Medical Sciences, Banaras Hindu University, Varanasi, Uttar Pradesh. The animals were divided into experimental groups, housed in plastic cages, and maintained on a 12-hour light and 12-hour dark cycle. They were given standard food and water ad libitum. The Central Animal Ethical Committee of Banaras Hindu University approved all experimental procedures (CAEC/196).
Experimental Design and Drug Administration
All the solutions were freshly prepared prior to use. Scopolamine was purchased from Sigma-Aldrich Chemical Co. India. A solution of scopolamine (1.0 mg/kg; dissolved in distilled water) was administered to the experimental animals through the intraperitoneal (IP) route. The ethanolic extracts of Evolvulus alsinoides (EEA) and Centella asiatica (ECA) in the following doses: EEA 250 mg/kg/day and 500 mg/kg/day, ECA 250 mg/kg/day and 500 mg/kg/day and combination of EEA+ECA 250 mg/kg/day and 500 mg/kg/day, dissolved in 0.3% CMC and was administered orally (PO). Nine groups, each consisting of 6 animals, were included in the study. Group I (normal) was treated with vehicle daily (0.3% CMC; PO). Group II (Sco control) was treated with scopolamine (1.0 mg/kg/day; IP). Group III (Sco+Doz) was treated with a donepezil (1.5 mg/kg/day; IP) and scopolamine (1.0 mg/ kg/day; IP). Group IV (Sco+EEA) was treated with low dose EEA (250 mg/kg/day; PO) and scopolamine (1.0 mg/kg/day; IP). Group V (Sco+EEA) was treated with high dose of EEA (500 mg/kg/day; PO) and scopolamine (1.0 mg/kg/day; IP). Group VI (Sco+ECA) was treated with low dose ECA (250 mg/kg/day; PO) and scopolamine (1.0 mg/kg/day; IP). Group VII (Sco+ECA) was treated with high dose of ECA (500 mg/kg/day; PO) and scopolamine (1.0 mg/kg/ day; IP). Group VIII (Sco+EEA+ECA) was treated with low dose of EEA (250 mg/kg/day; PO) and ECA (250 mg/kg/day; PO) and scopolamine (1.0 mg/kg/day; IP). Group IX (Sco+EEA+ECA) was treated with high dose of EEA (500 mg/kg/day; PO) and ECA (500 mg/kg/day; PO) and scopolamine (1.0 mg/kg/day; IP).
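For clarity, the nine-group design above can be encoded as plain data, together with a helper converting a mg/kg dose into the absolute amount given to one animal. This is an illustrative sketch, not code from the study; the 28 g body weight is a hypothetical value within the reported 25-30 g range.

```python
# Treatment groups (doses in mg/kg/day); None marks vehicle only.
GROUPS = {
    "I":    {"vehicle": None},
    "II":   {"scopolamine": 1.0},
    "III":  {"scopolamine": 1.0, "donepezil": 1.5},
    "IV":   {"scopolamine": 1.0, "EEA": 250},
    "V":    {"scopolamine": 1.0, "EEA": 500},
    "VI":   {"scopolamine": 1.0, "ECA": 250},
    "VII":  {"scopolamine": 1.0, "ECA": 500},
    "VIII": {"scopolamine": 1.0, "EEA": 250, "ECA": 250},
    "IX":   {"scopolamine": 1.0, "EEA": 500, "ECA": 500},
}

def dose_mg(dose_mg_per_kg: float, body_weight_kg: float) -> float:
    """Absolute daily dose (mg) for one animal of the given weight."""
    return dose_mg_per_kg * body_weight_kg

# A hypothetical 28 g mouse in group V (EEA 500 mg/kg/day) receives
# roughly 14 mg of extract per day:
daily_eea = dose_mg(GROUPS["V"]["EEA"], 0.028)
```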
Behavioral study
Elevated plus maze: The elevated plus maze (EPM) is designed to study behavioral patterns of experimental animals such as sensitivity to external stimuli (exteroceptive behavior), anxiety, exploration, and learning and memory [10,11]. The EPM consists of four arms (two open and two closed), each 49 cm × 10 cm, with 40 cm high walls on the closed arms and an open roof. The whole structure is elevated 50 cm above the ground. On the 10th day, 60 min after drug treatment, each mouse was placed at the end of an open arm, facing away from the central platform. The time taken by the mouse to enter any of the closed arms was recorded as the transfer latency (TL) and served as a parameter for acquisition/learning. If the mouse did not enter any of the closed arms within 180 sec, it was gently pushed into one of the two closed arms and the TL was assigned as 180 sec. For the next 15 sec, the mouse was allowed to explore the maze before being returned to its home cage. On the 14th day, TL was recorded again and served as a parameter for retention of memory. Between sessions, the maze was carefully cleaned with a 30% ethanol tissue to remove any olfactory cues.
Morris water maze: The apparatus consists of a circular pool (45 cm in height × 100 cm in diameter) with a featureless inner surface. The pool was filled with opaque water, maintained at a temperature of 22 ± 2 °C, to a height of 30 cm, and was divided into four quadrants of equal area, marked by different visual cues. A platform (29 cm × 6 cm) was placed one centimeter below the water level at the center of one of the four quadrants, which was considered the target quadrant. The position of the platform was kept unaltered throughout the experiment. The MWM test was performed starting on the 10th day after drug administration began. On the first experimental day, the mice were allowed to acclimatize in the pool and swim for 120 sec without the platform. During the next four consecutive days, each animal received four learning trials of 120 sec with an inter-trial interval of 60 sec.
For each learning trial, the mouse was placed in the water facing the pool wall diagonally opposite the quadrant in which the platform was kept. The time taken by the animal to locate the submerged platform was recorded as the escape latency time (ELT) for each trial. If the animal was unable to locate the platform within 120 sec, it was directed to the platform and allowed to rest there for 60 sec, and in this case the ELT was recorded as 120 sec. These sessions were recorded as hidden-platform trials for the acquisition test. On the 14th day, after the learning trial sessions, the platform was removed from the pool and the mice were subjected to a probe trial session to assess memory retention. Each mouse was placed into the water diagonally opposite the target quadrant and was allowed to swim for 60 sec to find the quadrant in which the platform had previously been placed. The swimming time of the animal to reach the target quadrant was recorded as the probe trial memory retention test.
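Both maze protocols above use the same censoring rule: a trial that exceeds the cutoff (180 sec for EPM transfer latency, 120 sec for MWM escape latency) is scored at the cutoff value itself. A small sketch of that scoring rule and of averaging a day's learning trials (the latency values are hypothetical):

```python
def capped_latency(observed_s: float, cutoff_s: float) -> float:
    """Score a maze trial: a failure to reach the goal within the
    cutoff is assigned the cutoff value itself."""
    return min(observed_s, cutoff_s)

def mean_elt(trials_s, cutoff_s=120):
    """Mean escape latency over a day's trials, censoring applied."""
    scored = [capped_latency(t, cutoff_s) for t in trials_s]
    return sum(scored) / len(scored)

# Four hypothetical MWM trials, one of which timed out at 140 s:
print(mean_elt([60, 85, 140, 95]))  # (60 + 85 + 120 + 95) / 4 = 90.0
```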
Preparation of brain homogenate
On the terminal experimental day, the animals were euthanized and, out of the 6 animals per group, the whole brains of 3 mice were isolated after cardiac perfusion with normal saline for biochemical estimations. The brains were then rinsed in ice-cold isotonic saline and homogenized with ice-cold 0.1 M phosphate buffer saline (pH 7.4) to form 10% w/v homogenates. These homogenates were then centrifuged at -4°C (10,000 rpm; cooling centrifuge, Model no. C-24, Remi, India) for 15 min, and the supernatant was used for estimation of biochemical parameters [14].
In-vivo antioxidants assessment
Assessment of superoxide dismutase activity: Every 3 ml of reaction mixture contained 2.8 ml of potassium phosphate buffer (0.1 M, pH 7.4), 0.1 ml of brain homogenate and 0.1 ml of pyrogallol solution (2.6 mM in 10 mM HCl). The change in absorbance was recorded at 325 nm for 5 min at 30 sec intervals. One unit of SOD is equivalent to the amount of enzyme required to cause 50% inhibition of pyrogallol autoxidation per 3 ml of the assay mixture (Li X, 2012).
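From the unit definition above (one unit of SOD = 50% inhibition of pyrogallol autoxidation), activity follows directly from the autoxidation rates measured with and without homogenate. A sketch, with hypothetical ΔA325/min values:

```python
def sod_units(delta_a_blank: float, delta_a_sample: float) -> float:
    """SOD activity from pyrogallol autoxidation rates (ΔA325/min).
    Percent inhibition = (blank rate - sample rate) / blank rate * 100;
    one unit of SOD corresponds to 50% inhibition."""
    inhibition_pct = (delta_a_blank - delta_a_sample) / delta_a_blank * 100.0
    return inhibition_pct / 50.0

# Autoxidation slowed from 0.020 to 0.012 dA/min -> 40% inhibition,
# i.e. about 0.8 units of SOD in the assayed volume:
activity = sod_units(0.020, 0.012)
```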
Assessment of catalase activity:
The reaction mixture contained 2.0 ml of diluted homogenate in 0.1 M phosphate buffer (enzyme extract). The reaction was started by adding 1.0 ml of 200 mM H 2 O 2 . The decrease in OD per min was recorded against the blank (all the reagents except enzyme extract) for 3 min at 240 nm at intervals of 15 sec. CAT activity was expressed as U/mg protein [15].
Assessment of lipid peroxidation activity:
To 100 μl of tissue homogenate, 1.5 ml of 10% TCA solution was added. After 10 min, the mixture was centrifuged at 5000 rpm for 10 min. The supernatant was separated and mixed with 1.5 ml of TBA. The tubes were kept in a boiling water bath for 30 min to complete the reaction and were then cooled under tap water. The absorbance of the sample was measured at 535 nm against distilled water [16].
Assessment of reduced glutathione (GSH) activity:
The GSH assays were performed as described by Smith I K et al. [17]. In GSH assay, 3 ml of reaction mixture consisted of 2.9 ml of 5, 5-dithiobis (2-nitrobenzoate) (DTNB) prepared in potassium phosphate buffer (0.1 M, pH 7.4) and 0.1 ml of tissue homogenate. The reaction mixture was incubated at 37°C for 15 min and the absorbance was recorded at 412 nm and the results were expressed as GSH/mg protein.
Analysis of IL-1β, IL-6 and TNF-α in the brain: The levels of IL-1β, IL-6, and TNF-α in the brain homogenates were determined using commercial (Elabscience) ELISA kits according to the manufacturer's instructions. The levels of these cytokines in the brain tissues were normalized to the protein content.
AChE estimation: AChE activity was estimated in the whole brain homogenates. Briefly, the brain homogenate was incubated for 5 min with 2.7 ml of phosphate buffer and 0.1 ml of 5,5-dithiobis(2-nitrobenzoate) (DTNB). Then, 0.1 ml of freshly prepared acetylthiocholine iodide (pH 8) was added and the change in absorbance was recorded at 412 nm [18].
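In this Ellman-type readout, the measured ΔA412/min can be converted into a reaction rate with the Beer-Lambert law. The sketch below assumes the commonly used extinction coefficient for the TNB chromophore (about 13,600 M⁻¹cm⁻¹) and a 1 cm light path; it is an illustration, not the exact calculation used in the study.

```python
def ache_rate_umol_min_ml(delta_a412_per_min: float,
                          epsilon_m1cm1: float = 13600.0,
                          path_cm: float = 1.0) -> float:
    """Convert dA412/min into umol of thiocholine produced per minute
    per mL of reaction mixture, via c = A / (epsilon * l); the factor
    1e3 rescales mol/L to umol/mL."""
    return delta_a412_per_min / (epsilon_m1cm1 * path_cm) * 1e3

# A dA of 0.0272/min corresponds to 2 uM of product formed per minute:
rate = ache_rate_umol_min_ml(0.0272)
```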
Statistical Analysis
The hematological and biochemical data were expressed as the mean ± standard error of the mean (SEM) and were submitted to analysis of variance (one-way ANOVA) followed by Dunnett's multiple comparison test. GraphPad Prism 6.0 (GraphPad Software, USA) was used for the statistical analysis; P<0.05 was considered statistically significant.
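The one-way ANOVA mentioned above can be sketched in a few lines of plain Python (the study itself used GraphPad Prism 6.0; Dunnett's post hoc test is omitted here because it requires dedicated critical-value tables, and the data below are made up for illustration):

```python
def one_way_anova_f(groups):
    """F statistic = MS_between / MS_within for a list of samples."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Well-separated hypothetical groups give a large F:
f_stat = one_way_anova_f([[10, 12, 11], [20, 22, 21], [30, 31, 29]])  # ~271
```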
Behavioral study
Elevated plus maze: The elevated plus maze (EPM) was used to study behavioral patterns of the mice such as anxiety, exploration, and learning and memory, and the results are shown in Figure 1.

Morris water maze: The escape latency time (ELT) was measured to assess spatial memory in mice, and the results are shown in Figure 2. On the 1st and 6th days, there was no significant difference in ELT among the groups. On the 10th and 14th days, all plant-extract-treated groups were found to improve spatial memory in a dose-dependent manner, with significance at P<0.01.
Discussion
In neurodegeneration, oxidative stress plays an important role in the ageing process, and the brain is highly susceptible to oxidative imbalance due to its high energy demand and high oxygen consumption [19]. In this study, scopolamine treatment significantly decreased the activities of reactive oxygen species-scavenging enzymes such as superoxide dismutase and catalase, as well as reduced glutathione levels, in brain tissue. Scopolamine treatment also significantly increased lipid peroxidation as compared to the normal group. Treatment with both plant extracts significantly elevated the SOD and catalase activities and reduced glutathione levels, and counteracted lipid peroxidation, in comparison with the scopolamine group. The expression of cytokine receptors is temporally and spatially regulated in the central nervous system, and they are closely involved in cell proliferation, gliogenesis, neurogenesis, cell migration, apoptosis, and synaptic release of neurotransmitters [20,21]. The levels of the cytokines IL-1β, IL-6 and TNF-α, which were increased by scopolamine administration, were significantly decreased on treatment with the plant extracts. Elevated AChE activity leads to increased degradation of the neurotransmitter acetylcholine (ACh), which in turn depletes the ACh pool in the brain that is essential for learning and memory [22]. Scopolamine administration amplifies AChE activity, which is one of the major causes of the cholinergic deficit occurring after its administration. In this study, we found that treatment with the ethanolic extracts of both plants significantly reduced AChE activity as compared to the scopolamine-treated mice. This reveals that inhibition of AChE activity by the plant extracts has a protective role against acetylcholine degradation and improves cholinergic neurotransmission. Thus, the plant extracts reduced the cholinergic deficits produced by scopolamine administration, resulting in an enhanced neuroprotective effect [23,24].
Conclusion
E. alsinoides and C. asiatica are already used in Indian traditional medicine as neuroprotective agents and have also shown promising effects in inflammatory diseases, wound healing, and immunomodulation. The neuroprotective effect of both plant extracts is attributed to inhibition of AChE activity and improvement in spatial memory formation, and can further be ascribed to the extracts' strong antioxidant potential. The findings of this study point to the ethanolic extracts of both plants, i.e., EEA, ECA, and EEA+ECA, as promising therapeutic candidates for neurodegenerative diseases.
Preparation of High-Precision Dimension Seamless Thick-Walled Pipe by New Cold Rolling Process
In this study, a cold rolling test on quenched-and-tempered hot working die steel pipe with an outer diameter/thickness ratio of no greater than 3 was performed. The evolution of the microstructure was examined by a combination of optical microscopy, SEM, and EBSD tests. The effect of feed rate on the inner wall roughness of the rolled pipe was analyzed by means of white light interferometry. According to the experimental results, the maximum normal pressure per unit area increases from 1046.7 MPa to 1113.2 MPa as the feed rate rises from 1 mm/stroke to 6 mm/stroke. Meanwhile, the inner wall roughness of the pipe declines from 0.285 µm to 0.146 µm after rolling. When the feed rate reaches 2 mm/stroke, the maximum normal pressure per unit area is 1058.4 MPa, which causes significant plastic deformation of the inner wall of the pipe and brings the average roughness below 0.2 µm. The microstructure of the pipe is dominated by tempered sorbite both before and after rolling, and the grain size before rolling is 16.22 µm on average. After cold rolling, the longitudinal structure is deformed along the rolling direction, with an average grain size of 24.31 µm. Owing to increased work-hardening during the rolling process, the tensile strength improves from 1134 MPa to 1178 MPa, the yield strength increases from 985 MPa to 1125 MPa, and the room temperature impact energy diminishes from 58 J to 52.5 J. After vacuum tempering at 600 °C, it is difficult to completely eliminate the deformed band microstructure along the rolling direction. However, the grain size is reduced after cold rolling, no coarsening occurs, and the impact toughness increases from 52.5 J to 60.5 J. With the recovery of the original microstructure, the mechanical properties are restored to the before-rolling level.
Introduction
At present, high-precision thick-walled small-hole seamless steel pipes are widely used in the production of various pneumatic and hydraulic components, such as air cylinders and oil cylinders. High precision and low surface roughness are required for their inner and outer diameters (ID and OD) [1]. The processing of thick-walled seamless pipes has a direct effect on the surface quality of the inner hole, and thus affects the service life, accuracy, and reliability [2]. The core of their manufacture comprises the formation of the hole and the processing of the inner surface. However, the traditional processing technology involves about 30 procedures, such as deep hole drilling, deep hole reaming, electrolytic polishing, honing, and inner hole processing. There are many process quality control points, and the product processing accuracy urgently needs to be improved [2].
To improve production efficiency as well as the dimensional accuracy and surface quality of seamless pipes, periodic cold rolling (pilger) mills are often used [3]. The traditional cold rolling process is as follows: steel making → round steel rolling → piercing → pickling → cold rolling → annealing → polishing of the inner and outer surfaces. The main advantages of cold rolling are as follows. First, the metal in the deformation zone is subjected to compressive stress, which facilitates plastic deformation, so the metal can be rolled with significant deformation: the rolling elongation coefficient (µ) can reach 4~7, the rate of diameter reduction can reach 75~85%, and the deviation in wall thickness of the pipe after cold rolling is reduced. Cold rolling also produces a dense structure and fine grain size, and the mechanical and physical properties of the pipe are improved simultaneously; for example, AISI 321 austenitic stainless steel pipes were rolled with Q = 1.15 (Q = ln(t1/t0)/ln(d1/d0), where t0 and t1 are the wall thicknesses before and after pilgering, and d0 and d1 are the average diameters before and after pilgering), and the tensile strength increased from 581 MPa to 1096 MPa [4]. However, most existing cold-rolled seamless pipes are thin-walled [5]. Specially made hydraulic seamless pipes have high hardness and strength, a small inner diameter, and a large wall thickness; since the ratio of outer diameter to wall thickness is no greater than 3, they are difficult to cold roll, and the mechanical properties of thick-walled, small-hole seamless pipes can be significantly degraded by hydrogen embrittlement and intergranular corrosion during the pickling of waste pipes. Therefore, it is necessary to phase out pickling in favor of a new way of producing high-precision small-hole thick-walled seamless pipes by cold rolling.
Therefore, the emphasis of this work is on determining the optimum feed rate for industrial rolling of thick-walled pipe, so as to improve the preparation efficiency and surface quality of small-diameter thick-walled seamless pipe. The influence of feed rate on inner-surface roughness, microstructure, mechanical properties, and rolling formability is evaluated; the relationships between normal pressure per unit area, feed rate and surface roughness, and between microstructure and mechanical properties, are discussed. Finally, the advantages and disadvantages of traditional machining, traditional cold rolling, and the new cold rolling method are compared. Table 1 lists the chemical composition of the experimental material. Before cold working, it is necessary to remove the iron oxide scale from the surface: residual scale degrades the subsequent lubrication, which compromises the surface quality of the steel pipe.
Pilger Rolling and Heat Treatment
The production process of the high-precision seamless steel pipe is as follows: quenching and tempering → mechanical honing → cold rolling → vacuum tempering → finished product. Table 2 lists the cold rolling parameters; the dimensions and inner-wall roughness of the mother pipe and finished pipe are measured values, the feed rate is a set parameter, the turn angle is taken from ref. [6], and the working length of the wall-thickness reduction zone and the roll die diameter are measured values of the rolling tool. Figure 1 shows the heat treatment process diagrams for the pipe before and after rolling. Vacuum tempering was carried out at 600 °C for 4 h, followed by cooling to room temperature in room-temperature N2 with a purity of no less than 99.99%, to avoid oxidation affecting the surface quality of the pipe.
Experimental Method
A tensile test was performed at room temperature on a CMT4103 computer-controlled electronic universal testing machine, with the strain rate set to 2 mm/min. The impact test was carried out on a JBS-300B impact testing machine using standard Charpy U-notch specimens. For a Charpy U-notch sample, most of the impact energy is consumed in crack formation during the impact test, while for a Charpy V-notch sample, most of the impact energy is consumed in crack propagation. For thick-walled pipes under high-pressure conditions, the pipe is discarded once a crack forms, so Charpy U-notch samples were selected in this paper. Three tensile and three impact samples were tested for each state, and the average value was taken. After quenching and tempering, rolling, and vacuum tempering, the samples were ground, polished, and etched with 4% nitric acid alcohol solution, and their transverse and longitudinal microstructures were observed under an optical microscope and a ZEISS SUPRA 55 scanning electron microscope. The roughness of the inner surface of the pipes was measured by white light interferometry (MicroXAM-3D) to describe the roughening quantitatively and qualitatively.

Quality of Pipe after Rolling

Table 3 shows the evolution of the dimensions of the finished product after rolling. The inner and outer diameters of the finished products meet the requirement of a tolerance smaller than 0.05 mm. When the feed rate ranges from 1 to 6 mm/stroke, the size of the finished products rolled by the same pair of rolls and mandrels shows no significant correlation with feed rate. In this study, the surface roughness of the pipe was characterized by determining a number of statistical parameters as follows [7]:
(1) Maximum surface height parameter (Rz). The Rz parameter refers to the sum of the absolute values of the maximum surface peak height, Rp, and the maximum surface valley depth, Rv, as defined in Equation (1):

Rz = |Rp| + |Rv| (1)
(2) Mean surface height parameter (Ra). The Ra parameter refers to the arithmetic mean of the absolute value of the height within an observation area, as defined in Equation (2). It is used to describe how surface heights fluctuate around the mean plane:

Ra = (1/n) Σ_{i=1}^{n} |η_i| (2)

where n represents the total number of data points. The surface height η_i represents the height of each point from the mean plane, which is positive above the mean plane and negative below it.
(3) Root mean square value of surface heights (Rq). Rq is the square root of the mean of the squares of the observed surface heights, as defined in Equation (3). Ra and Rq are closely correlated with each other; Rq is a quadratic average of the asperities and may be applicable for identifying significant variations in surface characteristics:

Rq = sqrt((1/n) Σ_{i=1}^{n} η_i²) (3)

Figure 2 shows the topography of the inner surface of the mother pipe, with Ra 0.789 µm, Rq 0.769 µm, and Rz 1.59 µm. Figure 3 shows that when the feed rate reaches 6 mm/stroke, Ra is 0.146 µm, Rq 0.182 µm, and Rz 0.46 µm.
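The three statistics defined in Equations (1)–(3) can be sketched directly from a profile of measured surface heights. The snippet below is illustrative only: the sample heights are invented, not measured data from this study.

```python
# Sketch of the roughness statistics in Equations (1)-(3), computed from a
# profile of surface heights. Heights are referenced to the mean plane (η_i).

def roughness_params(heights):
    """Return (Ra, Rq, Rz) for a list of surface heights (µm)."""
    n = len(heights)
    mean = sum(heights) / n
    eta = [h - mean for h in heights]          # heights relative to mean plane
    ra = sum(abs(e) for e in eta) / n          # Eq. (2): arithmetic mean height
    rq = (sum(e * e for e in eta) / n) ** 0.5  # Eq. (3): RMS height
    rz = max(eta) + abs(min(eta))              # Eq. (1): |Rp| + |Rv|
    return ra, rq, rz

# Illustrative profile (µm), already centred on the mean plane.
profile = [0.10, -0.05, 0.20, -0.15, 0.05, -0.10, 0.15, -0.20]
ra, rq, rz = roughness_params(profile)
```

Note that Rq is always at least as large as Ra for the same profile, which is why the two track each other closely in the measurements above.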
Surface roughness parameters (Ra, Rq and Rz) were extracted to quantify the change in roughening. On this basis, the relationship between feed rate and surface roughness was established, as shown in Figure 4. As the feed rate increases, the roughness (Ra, Rq, and Rz) decreases gradually. When the feed rate ranges from 1 to 3 mm/stroke, the inner-surface Ra of the pipe decreases from 0.285 µm to 0.165 µm, Rz from 0.746 µm to 0.485 µm, and Rq from 0.318 µm to 0.221 µm; the roughness of the inner surface decreases significantly. However, as the feed rate increases further to 6 mm/stroke, Ra decreases from 0.165 µm to 0.146 µm, Rz from 0.487 µm to 0.485 µm, and Rq from 0.221 µm to 0.181 µm, and the reduction in roughness slows down. The reason may be that, with the increase in feed rate, the oil film thickness between the inner surface of the pipe and the mandrel increases; when the oil film is too thick, the reduction of the inner wall is affected [8-10], and the downward trend of the roughness slows.

The normal pressure per unit area (pc) under different feed rates in the rolling process is given by Equations (4)~(8) [11]. The parameters and variables of the equations are presented in Tables 4 and 5. Figure 5 shows the curves of normal pressure per unit area for different feed rates during rolling.
It can be seen from the figure that the normal pressure per unit area increases rapidly during the forward rolling stroke, and that it changes as the feed rate varies from 1 mm/stroke to 6 mm/stroke: the higher the feed rate, the greater the normal pressure per unit area. When the feed rate rises, the volume of metal compressed in a single pass increases, resulting in significant metal deformation and a sharp increase in normal pressure per unit area. The maximum normal pressure per unit area increases from 1046.7 MPa to 1113.2 MPa. When the maximum normal pressure per unit area reaches 1058.4 MPa (feed rate of 2 mm/stroke), the inner wall of the pipe undergoes plastic deformation, and the roughness of the inner wall falls below Ra 0.2 µm.

Figure 5. The curves of normal pressure per unit area in the wall-reduction zone at different feed rates.
Table 4. Parameters and variables of Equations (4)~(8):
- pc: normal pressure per unit area of the roll (mandrel) on the pipe
- σb: strength of the mother pipe under different deformations (Figure 6)
- nσ: coefficient of principal stress, 1.02~1.08 [11]
- f: coefficient of friction between roller (mandrel) and pipe, 0.1 [11]
- S0: wall thickness of the mother pipe, 14 mm
- Sx: pipe wall thickness at position x (Table 5)
- ΔSx: instantaneous reduction in wall thickness at position x
- roll groove radius at position x (Table 5)
- w: radius of the drive gear pitch, 140 mm
- µx: rolling elongation coefficient of the pipe at x
- γx: taper between any x and x + 1 positions at the ridge of the roll groove (Table 5)
- radius at any x position of the mandrel (Table 5)
- D0: outer diameter of the mother pipe, 46 mm
- Dx: pipe outer diameter at position x (Table 5)
- Lx: distance between any x and x + 1 positions at the ridge of the roll groove, 10 mm
- lx: distance between any x and x + 1 positions at the mandrel, 20 mm

Figure 7 shows the microstructure observed in the quenched-and-tempered state, the as-rolled state, and the vacuum-tempered state, respectively. The room-temperature microstructure of the quenched-and-tempered samples is dominated by tempered sorbite, and the grain size is 16.22 µm on average. Compared with the transverse microstructure, the longitudinal microstructure of the cold-rolled specimens shows deformation: the equiaxed grains were deformed along the rolling direction, and the grain size after rolling is 24.31 µm. After vacuum tempering at 600 °C, the orientation of the lath-shaped microstructure in the metallograph of the tempered material still tends toward the rolling direction.
It can be observed that the obvious deformation bands cannot be eliminated completely by vacuum tempering.

Figure 8 shows the SEM morphology of the thick-walled pipe in the three processing states. The carbides in the quenched-and-tempered state take the form of long rods and needles, along with a small number of large spherical and ellipsoidal carbides, and some carbides are distributed along the grain boundaries. Although the shape of the carbides shows no obvious change after cold rolling, the carbides distributed along the grain boundaries are reduced, and the carbides in the longitudinal direction are distributed in bands along the rolling direction. After tempering, the carbides are coarsened, no aggregation occurs along the grain boundaries, and the distribution of carbides is relatively uniform.
Figure 9 shows the morphology and orientation distribution under different extents of wall-thickness reduction, characterized by SEM and EBSD. As shown in Figure 9a, the grain morphology resembles a fine lath, and its orientation distribution before rolling is random (Figure 9c). As shown in Figure 9b,d, after rolling the grain orientation gradually concentrates on <101>, and the distribution of grain orientation is highly consistent.

Table 6 shows the mechanical properties of the material in the three states. As deformation becomes more significant, strength and hardness increase continuously during cold deformation, while plasticity and toughness decline. After rolling, the grains are elongated along the rolling direction, and a fibrous structure forms owing to the significant cold deformation. The dislocation density also rises rapidly with increasing deformation, which is one of the main reasons for work hardening [12]. Because the dislocation cells that form hinder dislocation slip, the yield strength increases from 985 MPa to 1125 MPa and the tensile strength from 1134 MPa to 1178 MPa [13]. After tempering, the yield strength decreases from 1125 MPa to 912 MPa; it is suspected that many of the dislocations generated at grain boundaries during cold rolling recover during tempering, so the dislocation density decreases.
Inner Surface Roughness of Pipe after Rolling
In the rolling process, as the feed rate increases, the deformation rate of the metal rises at the same rolling speed, because the volume of deformed metal between the roller and the mandrel increases. When the volume of deformed metal is too small, e.g., at a feed rate of 1 mm/stroke, the normal pressure per unit area in the wall-reduction zone is 1046.7 MPa, and the wrinkles generated by the compression of the inner wall cannot be flattened by the mandrel, resulting in an inner Ra ≥ 0.2 µm. As the normal pressure per unit area increases, plastic deformation occurs at the inner surface of the pipe, and the height of the inner folds is reduced. When the normal pressure per unit area in the wall-reduction zone reached 1058.4 MPa, the roughness of the inner wall fell below Ra 0.2 µm.
Microstructure and Mechanical Properties
The microstructure of the parent tube is tempered sorbite with a grain size of 16.22 µm. After cold rolling, the grains are elongated to 24.31 µm along the axial direction. During cold rolling, the tube is deformation-strengthened, and Rp0.2 increases from 985 MPa to 1125 MPa. The resistance to dislocation motion increases as the wall reduction increases; this resistance comes from the long-range elastic interaction between dislocations or the short-range resistance of the jogs formed when dislocations intersect.
Cold rolling is effectively an extrusion process involving axisymmetric deformation, a combination of one-way stretching and two-way compression. However, residual stress can arise from the plastic deformation of cold rolling. To address this problem, it is necessary to carry out vacuum tempering; tempering in vacuum or high-purity N2 gas maintains the low roughness of the inner wall of the seamless pipe. In practice, the change in grain size is an important index for measuring the degree of deformation. The grain sizes in the cross-section and in the longitudinal direction were found to be closer and more uniform after cold rolling.
After vacuum tempering, the strength is slightly lower than that of the quenched-and-tempered material, but it is higher than the mechanical performance requirements for the finished pipe (tensile strength of 1030 MPa and yield strength of 900 MPa). The impact toughness increases in the tempered state, reaching 60.5 J. The tensile elongation (A%) and the percentage reduction in area (Z%) are higher than 10% and 40%, respectively, in all three states, which meets the practical requirements. The hardness after tempering at 600 °C for 4 h is 330~350 HV.
Comparison of Three Preparation Process
The flow charts of the three preparation processes are shown in Figure 10. In the traditional machining process, the cutting chips scratch the machined surface, increasing the roughness of the inner wall and ultimately hindering the improvement of the fatigue performance of the finished product. Moreover, because the pipe is long, the length of the matching tool holder must increase, which may lead to vibration and distortion during processing, so that the centerline of the inner hole becomes skewed. It is difficult to guarantee the deviation of the hole axis, which makes corrective measures necessary to ensure machining accuracy [14]. To ensure high dimensional accuracy and surface quality of the inner hole, multiple passes of reaming and polishing are required subsequently, which incurs substantial labor and time costs for processing the slender pipe. About 30 procedures are conducted from the start of deep-hole drilling on the round steel to final production. For example, during carbon-steel deep-hole processing, the drilling rate was only 10~14 mm/min, and the deep-hole drilling took 28~40 min [15] (pipe length 400 mm). However, the inner-hole roughness reached merely Ra 0.4~0.8 µm after multiple passes of reaming and polishing; only electropolishing could achieve an inner-surface roughness of no greater than Ra 0.2 µm [16].
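The quoted drilling times follow directly from the pipe length and the drilling rate; a quick arithmetic check:

```python
# Deep-hole drilling time for the 400 mm pipe quoted from ref. [15]:
# at 10~14 mm/min, the drilling alone takes 28~40 min.
length_mm = 400.0
t_fast = length_mm / 14.0   # fastest rate → ≈28.6 min
t_slow = length_mm / 10.0   # slowest rate → 40 min
```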
In the traditional rolling process, the pipe can be affected by hydrogen embrittlement and intergranular corrosion during pickling, and most existing cold-rolled seamless pipes are thin-walled. The traditional cold rolling process cannot produce pipes with a diameter-to-thickness ratio ≤ 3 after tempering; moreover, it requires repeated heat treatment and polishing, so its preparation efficiency is lower than that of the new rolling process.

When the thick-walled pipe is prepared by the new cold rolling process, the average roughness of the inner wall of the finished product can be reduced to below Ra 0.2 µm. The rolling process time can be calculated by Equation (9); with the feed rate increasing from 2 mm/stroke to 6 mm/stroke, the processing time decreases from 1.35 min to 0.45 min, which is much better than traditional drilling in terms of preparation efficiency and the surface quality of the finished product:

t = 400/(µ·m·Vs) (9)
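The form of Equation (9) implies that the rolling time scales inversely with feed rate. The elongation coefficient µ and the stroke rate Vs are not given individually in the text, so the values in the sketch below are assumptions, chosen only so that their product reproduces the reported 1.35 min and 0.45 min:

```python
# Rolling-time estimate per Equation (9): t = 400 / (mu * m * Vs), where
# 400 mm is the pipe length, mu the rolling elongation coefficient,
# m the feed rate (mm/stroke) and Vs the stroke rate (strokes/min).
# MU and VS below are ASSUMED values, not figures from the paper.

MU = 2.47   # assumed elongation coefficient
VS = 60.0   # assumed strokes per minute

def rolling_time(feed_mm_per_stroke, length_mm=400.0, mu=MU, vs=VS):
    """Processing time in minutes for one pipe at the given feed rate."""
    return length_mm / (mu * feed_mm_per_stroke * vs)

# Reported: ~1.35 min at 2 mm/stroke and ~0.45 min at 6 mm/stroke.
```

Whatever the true µ and Vs, the 3× reduction in time from 2 to 6 mm/stroke follows from the 1/m dependence alone.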
Conclusions
The effect of feed rate on the dimensional accuracy and inner wall roughness of cold rolled pipe was studied, and the effect of rolling on the microstructure and properties of the pipe was analyzed. The inner wall roughness of the mother pipe and rolled pipe was measured by white light interferometry. The characteristics of the pipes before and after rolling and after tempering, such as microstructure, tensile properties, impact toughness, etc., were investigated by SEM and EBSD.
(1) Pilger cold rolling was performed to produce high-precision thick-walled and small-hole seamless steel pipes with a diameter/thickness ratio smaller than 3 and inner and outer diameter tolerances less than 0.05 mm;
(2) The relationship between feed rate, normal pressure per unit area and inner-wall roughness was analyzed. The results show that the feed rate is positively correlated with the normal pressure per unit area, and the increase in normal pressure per unit area is beneficial for reducing the inner-wall roughness. When the normal pressure per unit area reaches 1058.4 MPa, the inner-wall roughness Ra ≤ 0.2 µm;
(3) The microstructure of the mother pipe before rolling is tempered sorbite, and the grain orientation is randomly distributed. After pilger rolling, the grain orientation gradually concentrates on <101>, and the dislocation density increases continuously, resulting in work-hardening behavior and a decrease in plasticity and toughness. After vacuum tempering, the dislocation density decreases, the strength decreases, and the plasticity and toughness return to the pre-rolling level;
(4) The preparation efficiency and surface quality of the thick-walled pipe produced by the new cold rolling process are much better than those of the traditional machining and traditional rolling processes.
Predicting 2-year survival in stage I-III non-small cell lung cancer: the development and validation of a scoring system from an Australian cohort
Background There are limited data on survival prediction models in contemporary inoperable non-small cell lung cancer (NSCLC) patients. The objective of this study was to develop and validate a survival prediction model in a cohort of inoperable stage I-III NSCLC patients treated with radiotherapy. Methods Data from inoperable stage I-III NSCLC patients diagnosed from 1/1/2016 to 31/12/2017 were collected from three radiation oncology clinics. Patient, tumour and treatment-related variables were selected for model inclusion using univariate and multivariate analysis. Cox proportional hazards regression was used to develop a 2-year overall survival prediction model, the South West Sydney Model (SWSM) in one clinic (n = 117) and validated in the other clinics (n = 144). Model performance, assessed internally and on one independent dataset, was expressed as Harrell’s concordance index (c-index). Results The SWSM contained five variables: Eastern Cooperative Oncology Group performance status, diffusing capacity of the lung for carbon monoxide, histological diagnosis, tumour lobe and equivalent dose in 2 Gy fractions. The SWSM yielded a c-index of 0.70 on internal validation and 0.72 on external validation. Survival probability could be stratified into three groups using a risk score derived from the model. Conclusions A 2-year survival model with good discrimination was developed. The model included tumour lobe as a novel variable and has the potential to guide treatment decisions. Further validation is needed in a larger patient cohort.
multitude of reasons including patient comorbidity and clinician bias. One strategy for addressing such variation is the development of a survival prediction model that integrates individual, medical and environmental factors unaccounted for by guidelines that commonly influence treatment decisions. This would have the potential to objectively evaluate treatment benefits in individual patients to facilitate shared decision-making, tailor patient management and optimise outcomes [6].
At present, the tumour, node, and metastasis (TNM) classification is considered the gold standard for NSCLC prognostication. However, stage alone is a poor predictor of overall survival, accounting for less than half of prognostic variance [7]. NSCLC patients within the same anatomic stratification are inherently heterogeneous, with actual prognosis depending on a complex interplay of patient, tumour and treatment characteristics [8]. To accurately predict NSCLC survival beyond TNM stage and clinical judgement alone [9], quantitative survival prediction models that can be applied to specific patient profiles must account for a range of predictive factors, reflect current practice and demonstrate higher concordance than existing prognostication methods.
While several models have been published, none have demonstrated superior performance, applicability or global utility [11]. There is considerable discordance in the factors included in prognostic tools, with a systematic review discussing incomplete coverage of established predictors and the incorporation of variables that are difficult to measure as key shortcomings of published models [10]. Additionally, the discriminatory accuracies of existing tools have generally been insufficient to justify deviation from conventional staging systems [11,12]. Furthermore, with the development of newer radiotherapy protocols, targeted therapies and immunotherapy, earlier studies fail to capture contemporary approaches to NSCLC and are no longer clinically relevant [13]. There is a need for prediction models that incorporate comprehensive data from cohorts treated with modern radiotherapy techniques, and that encompass emerging factors such as mutation and programmed cell death-ligand-1 (PD-L1) status [14].
The primary aim of this study was to develop and validate a 2-year survival prediction model in a contemporary cohort of stage I-III NSCLC patients treated with radiotherapy. Secondary aims were to compare model survival predictions to those predicted by TNM stage and validate published survival prediction models in a comparable cohort of patients.
Population
This retrospective cohort study included patients diagnosed with inoperable stage I-III NSCLC between January 2016 and December 2017 at three Australian radiotherapy treatment institutions. Tumour staging was performed based on multidisciplinary team recommendations. All patients treated radically were staged using positron emission tomography-computed tomography (PET-CT), with confirmation using endobronchial ultrasound and biopsy in instances of uncertainty or where there was the potential to influence management. Patients who received radiotherapy alone or chemoradiotherapy (concurrent or sequential) were eligible for inclusion. Patients treated surgically or for recurrent disease were excluded. The development cohort comprised patients from South Western Sydney Local Health District (SWSLHD). The validation cohort included patients from Blacktown Cancer and Haematology Centre and Illawarra Cancer Care Centre (BICC).
Data
Retrospective data were retrieved through automated and manual extraction methods using the electronic medical record systems MOSAIQ (Elekta AB, Stockholm, Sweden), ARIA (Varian Medical Systems, Palo Alto, CA) and Cerner Powerchart (Cerner Corp, North Kansas City, MO). Gross tumour volume (GTV) data were obtained from radiotherapy planning systems for patients who received radiotherapy.
Data were collected for all available predictive variables for survival as identified from a prior literature review. Patient-related variables included: age at diagnosis, sex, current smoking status, pack years smoked, weight loss, pre-treatment pulmonary function (percent predicted values for forced expiratory volume in 1 s (FEV1) and diffusing capacity of the lung for carbon monoxide (DLCO)), Eastern Cooperative Oncology Group (ECOG) performance status [15] and comorbidities as defined by the Simplified Comorbidity Score (SCS) [16].
Tumour-related variables were: TNM stage according to the International Association for the Study of Lung Cancer (IASLC) 8th edition [17], histology, tumour grade, GTV, tumour location, mutation status (epidermal growth factor receptor (EGFR), anaplastic lymphoma kinase (ALK) and V-raf murine sarcoma viral oncogene homolog B (BRAF)) and PD-L1 status [18]. GTV was defined as the sum of the GTV primary and GTV lymph nodes.
Treatment-related variables included radiotherapy technique, radiotherapy treatment duration, equivalent dose in 2 Gy fractions (EQD2) and use of chemotherapy.
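EQD2 is defined by the linear-quadratic model as EQD2 = D · (d + α/β) / (2 + α/β), where D is the total dose, d the dose per fraction and α/β a tissue-specific parameter. A minimal sketch of the conversion (the function name and the default α/β = 10 Gy, a common assumption for tumour tissue, are not taken from the paper):

```python
def eqd2(dose_per_fraction_gy, n_fractions, alpha_beta_gy=10.0):
    """Equivalent dose in 2 Gy fractions under the linear-quadratic model.

    alpha_beta_gy = 10 Gy is a commonly assumed value for tumour tissue,
    not a value stated in this study.
    """
    total_dose = dose_per_fraction_gy * n_fractions
    return total_dose * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

# A conventional 60 Gy in 30 fractions of 2 Gy is, by definition, 60 Gy EQD2;
# a SABR-style 54 Gy in 3 fractions of 18 Gy corresponds to a much higher EQD2.
print(eqd2(2.0, 30))   # 60.0
print(eqd2(18.0, 3))   # 126.0
```

This conversion is what allows conventionally fractionated and SABR regimens to be compared on a single dose scale within one model.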
Radiotherapy across the three cohorts included both conventional and stereotactic ablative body radiotherapy (SABR), with radiotherapy technique classified as conformal, intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT).
The primary outcome of overall survival was recorded as patient status at 31/12/2019 with all follow-up data obtained before this study end date. Survival time was defined as the period from the start of radiation therapy until date of death or until 31/12/2019 for living patients.
Statistical analysis
The Kaplan-Meier method was used to predict survival in the study population. Variables missing > 50% of data were excluded from univariate and multivariate analysis. In the development cohort, univariate Cox proportional hazards regression was used to evaluate the predictive value of variables, with those demonstrating an association with overall survival (p < 0.20) considered for inclusion in multivariate analysis. Backward stepwise regression was applied to select the variables retained in the final multivariate model [19]. Model fit was evaluated using the Hosmer-Lemeshow goodness of fit test.
A scoring system for the South West Sydney Model (SWSM) was generated using the logarithm of the odds ratio (OR) to allocate points for each variable. Risk groups were defined according to total score quartiles in the development cohort. Kaplan-Meier curves were generated and log-rank test used to evaluate significant survival differences between subgroups. The model was applied to the SWSLHD cohort to assess internal validity, with discrimination estimated using Harrell's concordance index (c-index) [20]. To assess the impact of missing DLCO data on model performance, validation was performed twice, initially with missing data excluded and again with simple mean imputation. Model calibration was assessed graphically by plotting observed survival probabilities against predicted probabilities and calculating the calibration slope and intercept for the development and validation cohorts [21]. External validation was performed by applying the SWSM to the BICC cohort.
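Harrell's c-index measures the fraction of usable patient pairs in which the model assigns the higher risk score to the patient who experiences the event earlier. A minimal sketch of the statistic (function name and tie handling are illustrative, not the implementation used in the study):

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is usable when the patient with the shorter follow-up
    time experienced the event (events[i] == 1); the pair is concordant
    when that patient also has the higher predicted risk. Tied risk
    scores count as 0.5. Assumes at least one usable pair exists.
    """
    concordant = 0.0
    usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

# Perfect ranking: shorter survival <-> higher risk -> c-index = 1.0
print(harrell_c_index([1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1]))  # 1.0
```

A value of 0.5 corresponds to random prediction, which is why c-indexes of 0.70-0.72 indicate useful discrimination.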
The performance of the multivariate model was compared with predictions based on TNM staging alone. In addition, an existing prediction model was externally validated in the development and validation cohorts and compared to the SWSM. The discrimination of the model was also assessed using Harrell's c-index.
Statistical analyses were performed using IBM SPSS Statistics, version 26.0 (IBM Corp, Armonk, NY), Matlab, version 9.3 (MathWorks, Natick, MA) and Python, version 3.6 (Python Software Foundation, Wilmington, DE). Ethics approval was obtained from the SWSLHD Human Research Ethics Committee.
Results
The patient, tumour and treatment characteristics of each study site are summarised in Table 1. There were a total of 261 patients included in the study. At the study end date, 47.5% of patients were alive. In the development cohort, mutation status was unknown in 65.0% and PD-L1 status in 85.5% of patients and these variables were excluded from analysis.
By univariate analysis, the variables predictive of survival in the study population were ECOG performance status, DLCO, overall stage, histological diagnosis, tumour lobe, radiotherapy technique, radiotherapy treatment duration and EQD2 (Table 2). On multivariate analysis, the variables predictive of survival were ECOG performance status, DLCO, histological diagnosis, tumour lobe and EQD2 (Table 3). The Hosmer-Lemeshow test demonstrated good model fit using the selected predictors (p = 0.65). The scoring system for the SWSM is presented in Table 4.
Internal validation
The formation of four risk groups in the SWSLHD cohort according to SWSM quartiles resulted in no differences between groups 2 and 3. These groups were subsequently combined, producing a total of three risk groups. Kaplan-Meier survival curves by SWSM risk group in the development and validation cohorts are presented in Fig. 2. Log-rank testing identified significant differences (p < 0.05) between risk groups in both the development and validation cohorts. Using the SWSM, 2-year survival probability in the development cohort was 63.3% in group 1, 55.0% in group 2 and 20.0% in group 3.
The c-index of TNM staging alone for survival prediction was 0.
Discussion
To our knowledge, this study is the first to develop and validate a survival prediction model in a cohort of inoperable but potentially curable stage I-III NSCLC patients irrespective of radiotherapy intent. Developing a survival prediction model in this cohort may impact on management decisions: although in theory all patients should be treated with curative radiotherapy, this does not happen in real-world practice. The SWSM incorporates a combination of well-established and novel predictors using routinely collected data to predict survival in a radiotherapy cohort. Notably, our incorporation of EQD2 enables the SWSM to be applied as a predictive model with the potential to facilitate treatment decisions and radiotherapy planning.

When evaluating the predictive performance of a model, a c-index greater than 0.6 generally reflects helpful discrimination [22]. In the development cohort, the SWSM demonstrated good predictive performance, and achieved similar results on external validation in one independent cohort. There was minimal difference between the c-indexes obtained on external validation when missing data were excluded or imputed. Importantly, the SWSM maintained discrimination superior to survival predictions based on TNM staging alone. This is in accordance with published models whereby the addition of demographic and clinical covariates translated into more robust survival predictions [14,23,24].

In prediction models, calibration illustrates the association between predicted and observed outcomes. A slope of one and an intercept of zero indicate perfect calibration [25]. While our model calibration plot showed good agreement between expected and actual survival probabilities, the slope and positive intercept are suggestive of survival underestimation. Validation in a larger population is required for the generation of a more precise calibration curve.
Several models predicting survival in inoperable NSCLC cohorts have been published [11,14,15], however none have examined the entire inoperable stage I-III radiotherapy cohort in a contemporary setting. The MAASTRO model is a prognostic tool for 2-year survival of stage I-III NSCLC patients treated with curative chemoradiotherapy. However, since its 2009 publication, staging classifications and radiotherapy techniques have been updated. The c-indexes obtained during external validation of the SWSM were higher than those obtained on validation of the MAASTRO model [13]. In addition, the advantages of our model include its reflection of current practice and potential to be applied to patients receiving curative or palliative therapy.
Similarly, models published more recently have been limited to patients receiving curative radiotherapy or with early or localised disease. The STEPS (sex, T stage, staging EBUS, performance status, N stage) score was developed in the UK to predict the 2-year risk of death in stage I-III NSCLC and also contained five variables [26]. However, only patient and tumour-related factors were retained in this multivariate model, precluding its use in treatment decision-making. Another model developed in Japan included only patient-related variables (age, performance status, body mass index and Charlson comorbidity index) to predict non-lung cancer death in an elderly cohort receiving definitive SABR [27]. In contrast to these models, we deliberately chose to include a potentially curable population of patients with stage I-III NSCLC who were managed heterogeneously in order to develop a prediction model that can support decision-making in the real world.
During the development of the SWSM, the predictive variables considered for model inclusion encompassed patient, tumour and treatment factors as supported by current evidence. The patient-related determinants most commonly included in NSCLC models are age, sex and performance status. While a poorer prognosis has been associated with older age [27] and male sex [8], overall findings have been inconclusive. In line with previous models, our study found no significant associations between age and sex and the survival outcomes of the study population [28,29]. In contrast, performance status has been associated with NSCLC survival in patients receiving curative radiotherapy [23,26,28,30] and was retained in our final model. However, a recognised limitation of performance status as a predictive variable is its subjectivity and inter-observer variability [27].

The only other patient-related variable identified as a survival predictor was pre-treatment DLCO. This has been demonstrated in a model involving NSCLC patients treated with SABR [30]. Insufficient lung function is a common reason for medical inoperability and frequently determines the suitability of treatment, with one study identifying pre-treatment DLCO as the pulmonary function measure most strongly associated with overall survival [31]. While DLCO has been reported to influence NSCLC survival in the surgical literature [32], few studies have analysed its influence on inoperable cohorts. The results of this study could be used to support further research exploring this association.

The tumour-related characteristics included in the SWSM were histological diagnosis and tumour lobe. Consistent with prior studies, non-squamous cell tumours demonstrated better survival probability than squamous cell carcinomas [17,33]. The inclusion of tumour lobe in the SWSM is novel.
A recent systematic review concluded that tumours located in the upper lobe conferred improved survival compared to those in the middle or lower lobes [34], consistent with our findings. The increased treatment toxicities from higher cardiac dose for middle lobe tumours and higher lung dose for lower lobe tumours may explain this result. Furthermore, the lower lobe has been associated with an increased proportion of non-adenocarcinoma tumours and a lower frequency of EGFR mutations, both of which are unfavourable survival characteristics [35]. However, at present, evidence supporting the significance of tumour location as an independent predictor of survival is less established.
The inclusion of the treatment variable EQD2 in the SWSM allows its application as a predictive rather than a prognostic model [36]. By including EQD2 in the prediction model, using the scoring in Table 4, one can calculate survival depending on the dosage regimen in an individual patient and counsel patients accordingly about the risks and benefits of treatment. Some patients who would derive only a small survival benefit from more intensive radiotherapy may not wish to risk the toxicities of treatment [37]. Others may choose to undergo higher-dose radiotherapy for any survival benefit, no matter how small. The model attempts to provide some objectivity to aid decision-making rather than relying on clinician judgement alone. Few studies have considered EQD2 as a variable [23], as most have been developed in a specific population receiving only curative treatment [11,37]. Furthermore, while the survival benefit of increasing radiotherapy dose has been demonstrated, its ability to improve quality of life is yet to be established [38].
There are limitations to this study. The SWSM was developed using single-institution data with a relatively small sample size, although the sample size is similar to that of other studies [37,39]. The model was developed on the population demographic of South West Sydney, which has a higher proportion of overseas-born individuals than Australia as a whole, and this may limit generalisability. This was a retrospective study relying on information documented in medical records, resulting in inevitable missing data. Information not routinely collected within oncology information systems, such as blood parameters, was not analysed. Furthermore, data on the staging procedures used for individual patients were not collected, although this is reflective of a clinic cohort of patients. Survival may also have been impacted by treatment at relapse, which was not accounted for in this study. However, the greatest impact on survival is initial treatment, and our methodology is similar to that of other survival prediction modelling studies [13,39].
The findings of this study have implications for further research. External validation studies should be conducted by applying the SWSM to larger datasets to confirm findings and assess model generalisability. The current model may potentially be improved by the addition of mutation and PD-L1 status. Unfortunately, the influence of these predictors could not be evaluated, as data on these markers were not routinely collected in stage I-III NSCLC patients during the study period. Likewise, global advances in laboratory biomarkers and genomic parameters [40,41] have recently been highlighted and may transform future NSCLC prognostication, but at present these lack systematic investigation. In addition, data were not collected on cardiac radiation dose, which has been identified as an independent risk factor for all-cause mortality after radiotherapy in locally advanced NSCLC [42,43]. Finally, impact analysis studies evaluating the acceptability, cost-effectiveness and practicality of the SWSM are required prior to clinical implementation [44].
We plan to develop and validate a survival prediction model in patients with Stage I-III NSCLC patients undergoing radiotherapy in a larger cohort of patients with distributed learning across multiple centres using the AusCAT network [45]. The factors found to be significant in this work will be considered alongside newer variables. The ultimate aim is to develop a tool to support radiotherapy decision-making in NSCLC using objective parameters rather than subjective clinical judgment. This will facilitate shared decision-making between patients and clinicians and reduce variability in treatment recommendations between clinicians and between institutions.
Conclusions
In conclusion, our study developed a survival prediction model in a real-world contemporary cohort of inoperable stage I-III NSCLC patients treated with radiotherapy. The SWSM utilises readily obtainable data and is convenient and simple to use by clinicians. The model exhibited good discrimination on both internal and external validation, and has the potential to guide treatment decisions. Further validation of this model is needed in a larger cohort of patients.
The Efficacy of Simethicone With Polyethylene Glycol for Bowel Preparation
Background: Simethicone (SIM) is a commonly used antifoaming agent in the clinic. However, it has not been clarified whether SIM can improve the quality of bowel preparation and the detection rates of adenomas (ADR) and polyps (PDR). This systematic review and meta-analysis was carried out mainly to evaluate the effect of SIM in bowel preparation for colonoscopy. Materials and Methods: An electronic and a manual search of the literature was conducted in PubMed, EMBASE, and Web of Science for all data published before February 1, 2020. The primary outcomes were the quality of bowel preparation and the ADR and PDR. All data were calculated as pooled estimates of risk ratios with 95% confidence intervals, using a random-effects model. Results: Eighteen randomized controlled trials with 7187 patients were included in this meta-analysis. Polyethylene glycol (PEG) with SIM improved colon cleansing (P<0.00001), the PDR (P=0.006) and the detection rate of lesions in the right colon (P<0.00001) when compared with PEG alone. There was no difference in the ADR (P=0.68), withdrawal time (P=0.06), cecal intubation rate (P=0.98), or cecal intubation time (P=0.65) between the 2 groups. The rate of abdominal bloating was higher in the PEG group, but there was no significant difference in vomiting (P=0.65) or abdominal pain (P=0.25). Conclusions: SIM improves the quality of bowel cleanliness and the PDR but not the ADR. In addition, SIM improves the detection rate of lesions in the right colon and decreases abdominal bloating, but does not affect vomiting or abdominal pain or cramping.
Colorectal cancer (CRC) is the most frequent malignant neoplasm in most countries. In the United States, CRC is the second leading cause of death from cancer. 1 Colonoscopy can decrease the incidence and mortality of CRC significantly through the detection and removal of adenomatous polyps and other precancerous lesions.
Efficacy of bowel cleansing is an important determinant of the outcomes of colonoscopy. 2 Inadequate bowel preparation leaves residual fecal matter or even fecal masses in the intestinal cavity and bubbles over the colonic mucosa, resulting in longer procedure times and the need for early repetition of colonoscopy. 3 To improve efficacy and patient compliance, antifoaming drugs have been used as adjuvants to standard colonic preparation products. 4 Simethicone (SIM) is a commonly used antifoaming agent in the clinic. 5 By reducing the surface tension of bubbles in the lumen of the digestive tract, it can remove bubbles and improve the clarity of the examination. Furthermore, it can reduce abdominal distention, resulting in a significant reduction in the number of patients with gastrointestinal discomfort symptoms.
There is no consensus on the routine use of simethicone in intestinal preparation. One meta-analysis showed that oral SIM improved bowel cleanliness and mucosal visibility but not the overall adenoma detection rate (ADR) or polyp detection rate (PDR). 6 However, another meta-analysis showed that polyethylene glycol (PEG) with SIM improved colon cleansing and the ADR when compared with PEG alone. 7 Thus, to date, whether SIM has a beneficial effect on the ADR or PDR has yet to be confirmed. This study, aiming to include all relevant randomized controlled trials (RCTs), is the first to evaluate the role of SIM in intestinal preparation in terms of its effects on intestinal cleanliness and the ADR and PDR when combined with a laxative.
The objective of our systematic review was to identify, assess, and meta-analyze data from RCTs evaluating the effects of SIM on bowel preparation quality and the ADR and PDR for colonoscopy. In addition, we compared adverse events, withdrawal time, cecal intubation time and cecal intubation rate between the SIM and non-SIM arms.
MATERIALS AND METHODS
All analyses were based on previous published studies, thus no ethical approval and patient consent are required.
Search Strategy
Online databases (PubMed, EMBAS, and Web of Science) were searched for eligible studies published from January 1988 to January 2020. Citation selection utilized a highly sensitive search strategy to identify randomized trials with MeSH headings related to (1)
Selection Criteria
Studies that met all the following inclusion criteria were considered eligible: (a) RCTs; (b) adult patients (age 18 y and above) receiving colonoscopy; (c) articles in English; (d) studies comparing a bowel preparation with SIM to a bowel preparation without SIM; (e) studies using outcome measures to evaluate the effectiveness of the bowel preparation were included.
Exclusion criteria were: (a) trials comprising only animals, pediatric or inflammatory bowel disease patient populations; (b) non-English articles; (c) computed tomography colonography or small bowel enteroscopy or capsule endoscopy; (d) studies only published as abstracts were excluded.
Finally, 18 eligible articles were included in this systematic review and meta-analysis (Fig. 1).
Data Collection
Two reviewers (M.Y. and Z.L.) extracted data using a standardized form independently. The following data were extracted from each article: name of the first author, year of publication, country of study origin, patient characteristics (sample size; mean age; sex), use of cathartics and dosage of oral SIM, scale used to evaluate colon cleansing, degree of colon cleansing, mucosal bubble score, withdrawal time of colonoscopy, cecal intubation rate and cecal intubation time, the preparations to colonoscopy intervals, and overall ADR or PDR. In addition, the location and number of adenomas or polyps per patient were obtained as data presented. Data were extracted as originally stated or following appropriate calculations as necessary. If data were missing or unavailable from a study, the authors were contacted to provide the missing data, if possible.
Outcome Assessment
The primary outcomes of these studies were: (a) bowel preparation quality in the whole colon; (b) ADR and PDR in the whole colon and right colon.
The secondary outcomes included cecal intubation rate and cecal intubation time, withdrawal time of colonoscopy and side effects such as abdominal bloating, vomiting, and abdominal pain or cramping. The studies scored the quality of bowel preparations using validated scales either the Boston Bowel Preparation Scale 8 (BBPS), the Ottawa Bowel Preparation Quality Scale 9 (OBPS), Aronchick Scale, 10 or their nonvalidated scales.
Definitions for successful and unsuccessful bowel preparations were established a priori using existing validated scales or author's definitions of successful bowel preparations where validated scales were not used. In the included studies, the authors defined high quality bowel preparation as a BBPS score of ≥ 6, 11-17 an OBPS of <5, 17-20 and an Aronchick Scale score between 1 and 3. 15,17,[21][22][23] For studies not using a validated scale, their scale's determination of adequate and inadequate was used.
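The per-scale adequacy thresholds above can be collected into a single predicate. This is a sketch of the definitions applied in the included studies; the function and scale names are chosen for illustration:

```python
def prep_adequate(scale, score):
    """Classify a bowel preparation as adequate, using the thresholds
    applied in the included studies: BBPS >= 6, OBPS < 5, Aronchick 1-3."""
    if scale == "BBPS":
        return score >= 6
    if scale == "OBPS":
        return score < 5
    if scale == "Aronchick":
        return 1 <= score <= 3
    raise ValueError(f"no validated threshold for scale: {scale}")

print(prep_adequate("BBPS", 7))       # True
print(prep_adequate("OBPS", 5))       # False
print(prep_adequate("Aronchick", 4))  # False
```

Note the direction of each scale differs: higher BBPS scores mean cleaner bowel, whereas higher OBPS and Aronchick scores mean worse preparation.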
Quality Assessment
Trial quality was graded using the Cochrane risk of bias tool for RCTs. 24 Two reviewers assessed quality measurements for the included studies, and discrepancies were adjudicated by collegial discussion. The tool comprises 7 items: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective outcome reporting, and other bias. For each item, the risk of bias was assessed as "low risk," "unclear risk," or "high risk" (Fig. 2). All data abstraction and entries were validated independently by 2 authors.
Statistical Analysis
All statistical analyses were performed with Review Manager software (RevMan, version 5.3.5, Copenhagen). Weighted mean differences with 95% confidence intervals (95% CI) were used as the effect estimate for continuous data, and the risk ratio (RR) with 95% CI for dichotomous data. Differences were considered statistically significant at P < 0.05. A forest plot was constructed to test the heterogeneity between RCTs. I² < 25% was regarded as low heterogeneity, I² between 25% and 75% as medium heterogeneity, and I² ≥ 75% as high heterogeneity. 25 A fixed-effect or random-effects model was chosen based on the forest plot and the degree of heterogeneity. Sensitivity analysis was performed by excluding the included studies one by one. Publication bias was assessed with funnel plots.
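The random-effects pooling described above can be sketched with the DerSimonian-Laird estimator: study-level log risk ratios are weighted by the inverse of (within-study variance + tau²), with tau² estimated from Cochran's Q. In this sketch the standard errors are back-calculated from each study's reported 95% CI; function names are illustrative, and RevMan's exact implementation may differ in details:

```python
import math

def pooled_rr_dersimonian_laird(rrs, ci_lows, ci_highs):
    """Pool risk ratios with a DerSimonian-Laird random-effects model.

    Standard errors of log(RR) are recovered from the reported 95% CIs.
    Returns (pooled RR, 95% CI low, 95% CI high, I^2 in percent).
    """
    log_rr = [math.log(r) for r in rrs]
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)
          for lo, hi in zip(ci_lows, ci_highs)]
    w = [1.0 / s**2 for s in se]                                 # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))   # Cochran's Q
    k = len(rrs)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                           # between-study variance
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    w_re = [1.0 / (s**2 + tau2) for s in se]                     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled),
            i2)

# Two identical hypothetical studies pool to their common RR with I^2 = 0.
rr, lo, hi, i2 = pooled_rr_dersimonian_laird(
    [1.05, 1.05], [1.00, 1.00], [1.1025, 1.1025])
print(round(rr, 4), round(i2, 1))  # 1.05 0.0
```

When tau² = 0 the random-effects weights reduce to the fixed-effect weights, which is why the choice of model matters only when heterogeneity is present.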
Study Selection
As shown in Figure 1, a total of 99 records were initially identified, including 34 from PubMed, 36 from Embase, and 29 from Web of Science. After duplicates were excluded, 63 records were identified through online database searching. After reviewing the titles and abstracts, 22 articles were retrieved as full texts. Four articles with insufficient data were further excluded. Finally, 18 articles 11-23,26-30 fulfilled the inclusion criteria and were included in the meta-analysis.
Risk of Bias Assessment and Sensitivity Analysis
Of the 18 trials, 13 trials were single-blinded, 4 trials were double-blinded, and 1 trial did not describe a method to ensure that the endoscopist remained blinded to the intervention. The blind method was not considered an impairment because the outcomes were objective and assessed by blinded observers. Whether other biases existed was unclear. Publication bias testing could not be completed because of the low number of included trials in the analysis.
We performed sensitivity analysis on the results with significant statistical heterogeneity to assess the stability of our results. Whether a single study substantially altered the heterogeneity of the summary estimate was assessed by excluding a single study. Sensitivity analysis was performed by repeating the meta-analysis with the exclusion of 1 study at a time to assess the overall effect of the exclusion on the pooled RRs.
Study Characteristics
Eighteen RCTs with 7187 patients conducted between 1988 and 2019 were included in the final meta-analysis. The main characteristics of the 18 studies are shown in Table 1. Of these, 7 were multicenter studies. The indications for colonoscopy were similar between studies, with most patients receiving colonoscopy for CRC screening. Among these studies, 9 were from Europe, 5 from Asia, and 4 from the United States. The sample size ranged from 90 to 2802. All studies had at least 1 treatment arm adding SIM to the oral bowel preparation regimen and at least 1 treatment arm without SIM, allowing the effect of SIM on bowel cleanliness to be assessed. The amount of SIM added varied across the included articles. Except for 2 studies using sodium phosphate for bowel cleansing, the studies used 2 or 4 L of PEG.
Quality of Bowel Cleansing
Seven RCTs used the BBPS to evaluate the quality of bowel cleansing, 4 used the OBPS, and 4 used the ABPS. The RCT conducted by Valiante and colleagues reported the Harefield cleansing scale, 31 and the RCT conducted by Matro and colleagues used their own nonvalidated scale.
Compared with the non-SIM group, the quality of bowel cleansing in the SIM group was statistically significantly higher across studies (95% CI, 1.04-1.08; I² = 68%; P < 0.00001; Fig. 3), demonstrating that the quality of bowel preparation for colonoscopy was higher with SIM. Heterogeneity was high, and a random-effect model was used to summarize the effect size.
A subgroup analysis of bowel preparations comparing the use of SIM in single-dosing and split-dosing regimens was performed. In the single-dosing analysis, the PEG+SIM arm had 1.15 greater odds of a successful bowel preparation than the PEG arm (4 trials; 95% CI, 1.09-1.21; I² = 23%; P < 0.00001; Fig. 4). Heterogeneity was moderate and statistically significant across studies. However, in the split-dosing subgroup, the PEG+SIM arm had only 1.03 greater odds of a successful bowel preparation than the PEG arm (9 trials; 95% CI, 1.01-1.05; I² = 57%; P < 0.0009; Fig. 4), indicating that the effect of mixing SIM into a split-dosing regimen was much smaller.
Overall ADR and PDR
ADRs were available in 9 studies, and PDRs were recorded in 11 studies. The pooled RR using a random-effect model for PDR (RR = 1.13; 95% CI, 1.04-1.23; I² = 28%; P = 0.006; Fig. 5) showed a statistically significant difference between the SIM and control groups. However, the pooled RR using a random-effect model for ADR (RR = 1.02; 95% CI, 0.93-1.11; P = 0.68; I² = 41%; Fig. 6) showed no statistically significant difference between the groups. Sensitivity and bias analysis identified 1 study, in which the control group was given either a divided dose or a single dose, as an important factor affecting the heterogeneity and stability of the results. When the analysis was repeated without this study, heterogeneity was lower than before and the results tended to be stable.
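The per-trial statistic behind these detection-rate comparisons is an RR with a log-normal 95% CI; a CI that excludes 1 corresponds to a statistically significant difference. A minimal sketch with hypothetical counts (not from any included study):

```python
import math

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl):
    """RR with 95% CI from a single trial's 2x2 counts (log-normal CI)."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical example: 120/400 polyps detected with SIM vs 100/400 without.
rr, lo, hi = risk_ratio(120, 400, 100, 400)
significant = not (lo <= 1 <= hi)  # CI excluding 1 => significant difference
print(f"RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}), significant = {significant}")
```

This illustrates why a modest point estimate such as the pooled ADR of 1.02 can be nonsignificant when its CI straddles 1, while the PDR estimate of 1.13 with CI 1.04-1.23 is significant.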
ADR and PDR in the Right Colon
Five studies reported the detection rates of lesions in the right colon, which showed a statistically significant difference (RR = 1.57; 95% CI, 1.33-1.86; P < 0.00001; I² = 74%; Fig. 7). Sensitivity analysis showed that the results remained significant after removing Bai et al, 11 but I² dropped from 75% to 44%, identifying this study as the main source of heterogeneity, probably because its withdrawal time was shorter than in the other studies. No publication bias was detected.
Adverse Events
Compared with the control group, the abdominal bloating rates in the SIM group were statistically significantly lower across studies (RR = 0.73; 95% CI, 0.66-0.80; P < 0.00001; I² = 93%; Fig. 11). The high heterogeneity might result from unquantified evaluation criteria for abdominal distension, which was evaluated subjectively by patients according to their own perception.
DISCUSSION
Compared with traditional examination methods, colonoscopy has clear advantages in the diagnosis and treatment of intestinal diseases, and a clear inspection field of vision is the prerequisite for accurate diagnosis of lesions. Poor intestinal preparation leads to bubbles, mucus, and fecal contamination in the intestinal cavity, which reduce the clarity of the visual field, and bowel preparation itself can cause adverse events such as abdominal bloating, vomiting, and abdominal pain or cramping. As our results showed, adding SIM to the bowel preparation regimen improved the quality of bowel cleanliness and the polyp detection rate but not the ADR. The withdrawal time, cecal intubation time, and cecal intubation rate were not statistically significantly different between the SIM and control groups. In addition, we found that SIM could decrease abdominal bloating but had no effect on vomiting and abdominal pain or cramping.
The underlying mechanism of SIM in improving bowel cleansing is still unknown. Apart from reducing the surface tension of the intestinal contents, SIM may potentially decrease the resistance from bubbles, thereby promoting intestinal peristalsis. 17 In our study, compared with the non-SIM group, the quality of bowel cleansing in the SIM group was statistically significantly higher across studies. Furthermore, the subgroup analysis revealed that the effect of adding SIM to a single-dosing regimen was more obvious than to a split-dosing regimen. This is likely because a single-dose intestinal preparation plan often fails to achieve satisfactory results, so SIM can provide a more obvious improvement in intestinal preparation. The preparation-to-colonoscopy interval has also been significantly associated with bowel cleansing. Three articles compared this interval and showed that the optimal interval required to achieve adequate bowel cleansing was between 2 and 7 hours, whereas the risk of inadequate cleansing significantly increased if the colonoscopy was carried out after 7 hours. We hope that more future studies will incorporate this indicator in their analysis.
Although SIM did not improve the overall ADR, it did improve the detection rate of lesions in the right colon. Colonoscopy is the most direct way to diagnose and treat colorectal diseases, but it has a certain rate of missed diagnosis of lesions, especially in the right colon. 33 Because of the deep folds of the right colon, the lesions there are often flat, resulting in a higher rate of missed diagnosis. Therefore, it is of great clinical significance to reduce the missed diagnosis of polypoid lesions in the right colon. In this study, compared with the non-SIM group, the detection rate of lesions in the right colon in the SIM group was statistically significantly higher. Zhang et al 17 reported that ADR in the right colon was significantly higher for the SIM group than for the conventional group. As demonstrated in our meta-analysis, SIM could reduce mucus and bubbles, produce a clearer field of vision, and increase the detection rate of right colonic polyps, which would likely increase the effectiveness of colonoscopy.
Regarding adverse events, we found that SIM could significantly decrease the odds of abdominal bloating. Better tolerance of patients can improve the quality and compliance of intestinal preparation and reduce the fear of endoscopic examination.
Recently, the study by Ofstead et al 34 pointed out that SIM solutions usually contain sugars and thickeners, which may be left in the endoscope channel during endoscopic use and promote microbial growth and biofilm development. However, the available data to date have shown association but not causation. Therefore, in agreement with the American Society for Gastrointestinal Endoscopy, the Canadian Association of Gastroenterology recommends attaching importance to high-level reprocessing protocols and performing regular microbiological surveillance. 35 In brief, these findings may limit the use of SIM. After assessing the benefits and risks, we concluded that continued use of SIM during gastrointestinal endoscopy is important to inhibit bubble formation and optimize mucosal examination. Therefore, it is important to preclean the endoscope immediately at the bedside, including postoperative rinsing and the prompt initiation of manual or machine cleaning. Moreover, more studies are needed to explore the optimal antifoaming dose of SIM to avoid its excessive use.
Strengths of this review include a comprehensive literature search and the inclusion of multiple types of polyps at different sites, as well as adverse events, as outcomes. However, there are several limitations in our study. First, the impact of technical factors and the experience of endoscopists were not taken into account. Second, endoscopists used several different scale schemes and criteria to define the quality of colon cleansing. However, all these assessment scales emphasize similar aspects, including the removable volume of clear liquid or fecal residue and the impact of the surplus on mucosal visibility, which greatly reduces the risk of bias. Third, with the exception of 1 article that did not mention the type of blinding, the remaining trials were single-blinded for outcome assessment. Although it is unlikely that the blinding of outcome assessment influenced the outcome of our analysis, we still recommend that double-blinded RCTs be conducted to compare the SIM group with the non-SIM group.
More large double-blinded multicenter RCTs are necessary to evaluate the potential effect of SIM on colonoscopy.
CONCLUSIONS
In conclusion, adding SIM to the bowel preparation regimen improved the quality of bowel cleanliness and polyp detection rate but not ADR. No statistically significant differences were found in withdrawal time, cecal intubation time, and cecal intubation rate. Besides, we found that SIM improved the detection rate of lesions in the right colon and decreased abdominal bloating but had no effect on vomiting and abdominal pain or cramping.
|
2021-05-05T00:09:34.898Z
|
2021-03-12T00:00:00.000
|
{
"year": 2021,
"sha1": "0871b2e569c167e8c58870c16d6c9dbacab8cbb9",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/jcge/Fulltext/2021/07000/The_Efficacy_of_Simethicone_With_Polyethylene.2.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ec7727695135427ccbfe9cd4d4303f539033d46",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
229227135
|
pes2o/s2orc
|
v3-fos-license
|
Ultrasound induced fragmentation of primary Al3Zr crystals
Ultrasonic cavitation melt treatment (UST) of aluminium alloys has received considerable attention in the metal industry due to its simple and effective processing response. The refined primary intermetallic phases formed in the treated alloys during controlled solidification, govern alloy structural and mechanical properties for applications in the automotive and aerospace industries. Since the UST is performed close to the liquidus temperatures of the alloys, understanding the refinement mechanism of the primary intermetallic phases has been beset by difficulties in imaging and handling of liquid metals. In this paper, the sonofragmentation behaviour of primary intermetallic Al3Zr crystals extracted from the matrix of an Al-3 wt% Zr alloy and fixed on a solid substrate was investigated. The intermetallics were exposed to cavitation action in deionized water at 24 kHz of ultrasound frequency. The fragmentation mechanism from the nearby collapsing cavitation bubbles was studied with in-situ high speed imaging. Results revealed that the main fragmentation mechanism is associated with the propagation of shock wave emissions from the collapsing bubble clouds in the vicinity of the crystal. The mechanical properties of the Al3Zr phase determined previously were used for the fracture analysis. It was found that an Al3Zr intermetallic undergoes low cycle fatigue fracture due to the continuous interaction with the shock wave pressure. The magnitude of the resulting shear stress that leads to intermetallic fragmentation was found to be in the range of 0.6 – 1 MPa.
Introduction
Ultrasound induced cavitation and its possible benefits in liquid metal processing have received considerable attention from both the academic and industrial communities since the 1950s. Ultrasonic melt treatment (UST), being an eco-friendly, sustainable and economical processing route, offers several advantages in terms of degassing, enhanced heterogeneous nucleation and structural refinement of the as-cast product, resulting in improved quality of the material [1,2].
Primary intermetallics of finer size and shape formed in the Al alloys are highly desirable to augment heterogeneous nucleation in the alloy melt during solidification and so obtain microstructural refinement leading to enhancements in mechanical properties. Although induced ultrasonic cavitation in metallic melts has been proven to be the cause of structural refinement of various Al alloys [3], the fundamental understanding of the mechanism by which UST promotes fragmentation and the corresponding nucleating effects to obtain finer grain structures is still deficient. Previously the effects of UST in light alloy melts have only been studied using ex-situ (i.e. after the treatment) characterization techniques to analyse the ultrasonically treated materials. In recent years, in-situ characterization methods have received much attention for UST performance visualization in real time conditions [4][5][6][7][8].
The two most common in-situ experimental techniques for characterizing materials processed using UST are: (i) high-speed optical imaging of transparent organic liquids/melts, and (ii) X-ray synchrotron radiography of liquid metals. The former technique is much more widely established due to the low temperature processing and the transparent nature of the treated samples. The latter technique allows real time observation of the cavitation bubbles and the corresponding phenomena, specifically their growth rate, average radius and distribution. However, due to the handling and processing difficulties of analysing real metallic melts and the limited field of view for capturing the dynamic effects of multiphase interactions, a common approach is to use optically transparent liquids such as water to replicate the cavitation conditions and monitor the interaction with the solid phases during treatment. In-situ optical imaging studies of solidifying organic transparent alloys under the presence of ultrasound have proven to be effective for analysing grain nucleation by fragmentation of evolving dendritic structures, accelerated by the oscillation of stable and transient cavitation bubbles [4,6]. Specifically, Shu et al. [5] found that the rate of fragmentation of growing dendrites in transparent organic alloy systems can be either slow or violent and depends primarily on the type of cavitation bubbles. Lately, in-situ synchrotron X-ray imaging has also been applied to studying real liquid metals under the influence of different external fields [9][10][11]. The growth rate, average radius and size distribution of cavitation bubbles in an Al-10 wt% Cu alloy were studied by Xu et al. [9] and Mi et al. [12]. Tzanakis et al. [13] provided the first direct evidence of instantaneous re-filling of a micro-capillary channel with Al-10 wt% Cu alloy melt, confirming the previously postulated ultrasonic capillary effect (UCE).
Dynamic collapse of a cavitation bubble in multiphase liquid flow in a Bi-8 wt% alloy melt has been observed by Tan et al. [14]. Although observations of cavitation bubbles and their dynamic behaviour under the influence of ultrasound in real and transparent organic melts have been conclusive to a certain extent, understanding of the direct interaction of ultrasound with the dispersed or agglomerated solid phase is still lacking. Wagterveld et al. [15] imaged the influence of acoustic cavitation on suspended calcite crystals in saturated CaCO3 solution and demonstrated that fracture of single calcite crystals is induced by the inception and collapse of cavitation clusters and by acoustic streaming. Wang et al. [16] noticed that the fracture of intermetallics by the action of nearby cavitation bubbles is not an instantaneous process and requires substantial time to occur. Moreover, as noted above, the X-ray radiography method offers a very limited field of view for capturing the dynamic effects of multi-phase interactions.
In this paper, following an approach used by Wang et al. [16], the cavitation-induced fragmentation of extracted primary Al3Zr crystals under the influence of a 24 kHz ultrasonic excitation signal has been investigated in deionized water by high-speed imaging. The fracture mechanism of a single intermetallic crystal has been elucidated using the recorded images and stress-deflection theory. The induced stress has also been compared with the crack propagation studies conducted earlier.
Sample preparation
Pure Al and a master alloy (Al-5wt% Zr) were smelted to produce about 350 grams of an Al-3wt% Zr alloy. The cast alloy was then re-melted using an electric arc furnace and slowly cooled in a cylindrical graphite crucible of 50 mm diameter following a thermal cycle as discussed in [17]. Al-3wt.% Zr alloy cubes of dimension 5 x 5 x 5 mm were cut from the solidified ingot using a rotating silicon carbide blade.
Extraction of the primary Al3Zr crystals was done by immersing the alloy in a 15% NaOH water solution for 24 hrs. Subsequently, the Al matrix was completely dissolved by reaction with the NaOH solution, leaving only the primary crystals. The intermetallic crystals were then filtered out from the solution, carefully rinsed with ethanol and left to dry prior to the sonofragmentation studies. Optical micrographs of the Al-3wt% Zr alloy and the extracted Al3Zr intermetallic particles are displayed in Fig. 1.
Experimental setup
The chemically extracted intermetallic particles were fixed on a steel base with a superglue adhesive and placed in a glass container of dimensions 7.5 cm x 7.5 cm x 10 cm. The intermetallic was strategically positioned 2-3 mm below the piezoelectric transducer, which was fitted with a titanium sonotrode tip of 3 mm diameter and a power density of 460 W/cm² (Hielscher UP200S processor) operating at a frequency of 24 kHz. The UP200S ultrasonic processor handbook lists the detailed configuration of the system [18]. The ultrasound excitation was applied at a selected amplitude of 210 μm and the experiments were conducted in de-ionized water at room temperature.
Before capturing the fragmentation phenomenon, the cavitation field just below the sonotrode tip, within the 2 mm region where the intermetallic crystals were later mounted, was monitored using a Hyper Vision HPV X2 (Shimadzu, Japan) high-speed video camera. Images were recorded at 1 Mfps in order to capture the fast shock wave propagation from the collapsing bubbles under synchronous 10 ns laser pulse illumination, as in [19]. In-situ interaction of the intermetallic particles and ultrasound-induced cavitation was filmed using a high-speed camera (Photron SA-Z) operating at 100,000 fps, adequate to capture the fragmentation sequence of the intermetallics. The camera lens was placed at a distance of 165 mm from the ultrasound source to give a fully focussed observation of the interaction plane. For imaging with maximum illumination of the interaction plane, a multi-LED flash light (GS Vitec) was used, illuminating both the front and rear of the tank. The schematic of the experimental setup is illustrated in Fig. 2.

Results and discussion

Figure 3 shows recorded images of the ultrasound-induced cavitation cloud and the propagating shock wave fronts from the imploding bubbles. From the sequence of images, it is evident that introduction of an ultrasonic wave in a liquid medium leads to the development of an acoustic cavitation cloud and the emission of periodic high-energy shock waves, reaching pressures of several GPa and shock velocities up to 4000 m/s [20,21]. However, most of the shock wave energy is released within the first few hundred micrometres from the bubble rim [21]. Fig. 3(a-c) shows the movement of the shock waves marked as S1 and S2 (indicated with blue curves and arrows) at definite intervals. It can be seen from the images that as the shock wave S1 propagates further away, another shock wave emerges from a thick cavitation cloud near the ultrasonic horn, and the progression continues.
Using frame-by-frame images, the velocity of S1 was calculated at a radial distance of approximately 3 mm and was found to be almost 1650 m/s. It has been frequently observed that these emitted shock waves are responsible for micro-damage on any solid surface present in their vicinity [22]. Other effects such as micro-streaming and turbulence have also been found to attack solid interfaces aggressively, causing fragmentation and erosion of the material [23].
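The frame-by-frame velocity estimate amounts to converting the shock front's pixel displacement per frame into metres per second. A minimal sketch: the frame rate matches the 1 Mfps recording described above, while the pixel calibration and the front positions are assumed illustrative values, not measurements from the paper.

```python
# Estimating shock front velocity from frame-by-frame high-speed images.
frame_rate = 1_000_000   # frames per second (1 Mfps recording)
pixel_size = 20e-6       # metres per pixel (assumed calibration)

# Assumed pixel positions of the shock front S1 in consecutive frames.
front_px = [0, 82, 165, 248]

dt = 1 / frame_rate
displacements = [(b - a) * pixel_size for a, b in zip(front_px, front_px[1:])]
velocities = [d / dt for d in displacements]
v_mean = sum(velocities) / len(velocities)
print(f"mean shock velocity = {v_mean:.0f} m/s")
```

With these assumed positions the estimate lands near the ~1650 m/s quoted in the text, but the point of the sketch is the conversion, not the specific numbers.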
Fragmentation of primary Al 3 Zr particles
To better understand and observe the effect of the shock wave fronts on the fragmentation of the intermetallic crystals, the imaging was carried out at a comparatively lower frame rate, i.e. 100,000 fps. Figure 4(a-l) shows the fragmentation sequence of two Al3Zr crystals, one placed in a plane perpendicular to the other. The displayed fragmentation sequences are representative of at least 10 similar and reproducible observations; only carefully chosen images have been included for brevity. The first frame, at t = 0 μs, shows the two crystals positioned at right angles to each other. From here onwards, the crystal on the left will be referred to as the side-facing (SF) crystal and the crystal on the right as the front-facing (FF) crystal, to avoid any ambiguity for the reader. Figure 4a shows the two well-developed and illuminated tabular-plated crystals with similar dimensions, roughly 3 mm x 2.5 mm, and a thickness of 60-100 μm. The detailed morphology of these primary Al3Zr crystals can be found elsewhere [24]. The first frame also shows the SF crystal having a small notch on the edge (marked in red). It should be noted that the visibility and details of the notch/crack are limited by the camera resolution of the present high-speed images. The ultrasonic device was subsequently switched on, generating a cluster of cavitating bubbles across the sonotrode tip, as depicted in Fig. 4b. As soon as the bubble cloud starts to propagate towards the crystal, a slight deflection of the tip of the SF crystal is observed (marked in yellow). Since this sequence of images was recorded using white light illumination, it was difficult to capture the propagation of shock waves due to wavelength restrictions. After about 4 ms, the part of the SF crystal above the small notch (indicated with the arrow mark) starts to deflect vigorously due to the continuous emission of shock waves from the oscillating and imploding bubble cloud.
At the same time, a fine crack also formed on the FF crystal (marked in red). With the continuous oscillation of the ultrasonic horn tip, the cavitation cloud became bigger, simultaneously moving towards the crystal and causing both the notch and the crack to enlarge and grow in size [Fig. 4(e-f)] until the affected part completely separated from the parent crystal (Fig. 4g). Note that up to this point the cavitation cloud had not yet reached the locations of the crystal imperfections from which the notch/crack began to grow, indicating that shock waves (reaching ahead of the cloud) have the potential to fragment the intermetallic crystals rapidly and violently. It is also important to understand that the notches/cracks in the crystal result from geometrical irregularities, structural defects and micro-cracks arising from residual stresses in the intermetallic. It is also interesting to note that once the crack initiates and reaches its critical length, the crystal fails in just 270 μs, confirming the extremely brittle nature of the intermetallic, as observed in [25]. It was also revealed that the shock waves emitted from the collapse of a single cavitation bubble are primarily responsible for the fragmentation of a solid interface present nearby. At around t = 10 ms, the cavitation field grows further and encapsulates half of each crystal (SF and FF). Figure 4i shows the real-time snapshot of the maximum deflection induced in the SF crystal (marked in yellow). Under the continued oscillating cavitation field, the SF crystal experienced cyclic fatigue owing to the developed shear stresses before completely fragmenting at t = 12.42 ms, as shown in Fig. 4j. At the same instant, a crack can be seen on the FF crystal (marked with a red arrow), which also eventually propagates and breaks off (Fig. 4k). Overall, this process of intermetallic failure can be attributed to the combined effect of cavitation bubble collapses and emitted shock waves.
From the sequence of high-speed images, it can also be deduced that the first fracture happens in just 86 cycles of ultrasonic vibration, representative of low cycle fatigue failure. The crystal disintegrates into micron-sized particles within a few acoustic cycles upon sonofragmentation.
Application of stress-deflection theory
It has been previously observed that for an intermetallic crystal with a pre-existing notch/crack to fail completely, the required tensile stress should be in the range of 20-30 MPa, while the shock pressure amplitude generated from a single (laser-induced) bubble is around 30-40 MPa at a distance of 2-3 mm [25]. In the case of ultrasound, however, the pressure amplitude of the induced cavitation field is expected to be strongly reduced owing to the decrease in acoustic radiation resistance (the real part of the acoustic radiation impedance) [26]. The acoustic pressure was measured at a distance of 2 mm (the position of the intermetallic crystal from the sonotrode surface) using a calibrated fibre optic hydrophone system and was found to be around 1 ± 0.2 MPa.
The reason for this relatively low value of pressure compared to the pressure obtained from the single bubble collapse [25] can be attributed to the cavitation shielding [27] and the decrease in the speed of sound and the density of the surrounding medium due to the presence of bubble clouds under the sonotrode as observed by Yasui et al. [26].
In order to determine the magnitude of the shear stress acting on the tip of the SF crystal (Fig. 4i), the crystal was considered to be a rectangular-plate cantilever of length L, width b and thickness d, as illustrated in Fig. 5, for the sake of geometrical simplicity. The maximum shear stress was evaluated from the deflection observed in the frame-by-frame high-speed images and the corresponding pixel size using the following equation [28]:

(1) τ_max = FQ / (Ib),

where F is the transverse shear force obtained from the maximum deflection (δ_max) of the cantilever, Q is the first moment of area, I is the moment of inertia and b is the width of the crystal. For the stress calculation, the elastic modulus (E) of the Al3Zr crystal was taken as 200 GPa from previously conducted nanoindentation measurements [25]. Using Eq. 1, and assuming that the maximum shear force acts exactly at the tip of the crystal, the corresponding maximum shear stress produced in the intermetallic is estimated to be about 0.77 MPa, while the measured cyclic acoustic pressure at that location was slightly higher, inducing low cycle fatigue within the intermetallic and leading to fragmentation. The shear stress developed at the tip of the crystal was confirmed over 10 such observations, and the average stress was found to be 0.8 ± 0.2 MPa. Nevertheless, this approximation was established for a constant load, which does not capture the influence of the continuous pressure pulses generated by the shock front on the crystal. Moreover, the acoustic pressure generated by the ultrasound-induced cavitation bubbles cannot explain the fragmentation alone; the additional effects of pulsating cavitation bubbles on the crystal surface, of the shock waves released and of the liquid jet upon collapse also need to be considered, as they accelerate the fragmentation.
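The order of magnitude of this estimate can be checked with a short calculation under stated assumptions: an end-loaded rectangular cantilever (F = 3EIδ_max/L³) and the standard transverse shear relation, which at the neutral axis of a rectangular section reduces to τ_max = 3F/(2bd). The dimensions follow the crystal sizes quoted in the text; the tip deflection δ_max is an assumed value, since the paper reads it off the high-speed images.

```python
# Order-of-magnitude check of the cantilever shear-stress estimate.
E = 200e9          # Pa, elastic modulus of Al3Zr from nanoindentation [25]
L = 3.0e-3         # m, crystal length
b = 2.5e-3         # m, crystal width
d = 80e-6          # m, crystal thickness (within the 60-100 um range)
delta_max = 43e-6  # m, ASSUMED maximum tip deflection from image analysis

I = b * d ** 3 / 12                  # second moment of area, rectangular section
F = 3 * E * I * delta_max / L ** 3   # end load producing this tip deflection
tau_max = 3 * F / (2 * b * d)        # max shear stress (= F*Q/(I*b) at neutral axis)

print(f"F = {F * 1e3:.1f} mN, tau_max = {tau_max / 1e6:.2f} MPa")
```

With a tip deflection of a few tens of micrometres, the stress lands in the sub-MPa range, consistent with the 0.6-1 MPa window reported in the paper.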
Conclusions
Sonofragmentation experiments on primary Al3Zr intermetallic crystals extracted from an Al-3wt.% Zr alloy were conducted in water. The fracture mechanism was elucidated using in-situ high-speed imaging, and the shock-wave-induced stress on the crystal was quantified using a deflection fracture mechanics approach. It was confirmed that the high-energy shock waves resulting from cavitation bubble collapse are mainly responsible for the fragmentation of the crystal. The Al3Zr intermetallic undergoes cyclic deflection and eventually fails in a typically brittle manner upon interaction with the propagating shock front and cavitation bubble clouds. Crystal failure occurs within 80-100 acoustic cycles, implying a low cycle fatigue fracture mechanism. The acoustic pressure amplitude at a distance of approximately 3 mm was found to be approximately 1 MPa, which is sufficient to fragment an intermetallic present nearby.
|
2020-11-12T09:08:17.082Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "2c93662c8dc924c41be232b45c808d3b217658d7",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2020/22/matecconf_icaa172020_04002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e604f720d66f449ce287b7a9c64e20b7c4556452",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
119707583
|
pes2o/s2orc
|
v3-fos-license
|
Analyticity of extremisers to the Airy Strichartz inequality
We prove that there exists an extremal function to the Airy Strichartz inequality, $e^{-t\partial_x^3}: L^2(\mathbb{R})\to L^8_{t,x}(\mathbb{R}^2)$ by using the linear profile decomposition. Furthermore we show that, if $f$ is an extremiser, then $f$ is extremely fast decaying in Fourier space and so $f$ can be extended to be an entire function on the whole complex domain. The rapid decay of the Fourier transform of extremisers is established with a bootstrap argument which relies on a refined bilinear Airy Strichartz estimate and a weighted Strichartz inequality.
The linear Strichartz inequality for (2) asserts that $\|D_x^{\alpha} e^{-t\partial_x^3} f\|_{L^q_t L^r_x} \le C \|f\|_{L^2}$ for $-\alpha + 3/q + 1/r = 1/2$ and $-1/2 < \alpha \le 1/q$; see [18, Theorem 2.1]. When $\alpha = 1/q$, the inequality above is called "endpoint", while it is "non-endpoint" for $\alpha < 1/q$. It plays an important role in establishing the local and global wellposedness theory for the Cauchy problem of (1); see for instance [18,32]. In this paper, we study the following symmetric Strichartz inequality
(5) $\|e^{-t\partial_x^3} f\|_{L^8_{t,x}(\mathbb{R}\times\mathbb{R})} \le C \|f\|_{L^2(\mathbb{R})}$,
and consider "extremisers" for (5): the existence of extremisers and the characterization of some of their properties. To begin with, we denote the optimal constant for (5) by $A$:
(6) $A := \sup\{ \|e^{-t\partial_x^3} f\|_{L^8_{t,x}} : \|f\|_2 = 1\}$.
A simple argument, together with (4), shows that $A < \infty$; see the proof of Theorem 2.4. Definition 1.1. A function $f \in L^2$ is said to be an extremiser for (5) if $f$ is not equal to the zero function a.e. and
(7) $\|e^{-t\partial_x^3} f\|_{L^8_{t,x}} = A \|f\|_{L^2}$.
The first result is the following theorem.
This theorem is proven in Section 3. The proof makes use of the linear profile decomposition for the Airy evolution operator $e^{-t\partial_x^3}$ acting on a bounded sequence $\{f_n\} \subset L^2$, which we develop in Section 2 based on the previous result in [28]. In [29], the profile decomposition for the Schrödinger equation developed in [2] was used to prove the existence of extremisers to the Strichartz inequality for the Schrödinger equation in higher dimensions. The profile decomposition can be viewed as a manifestation of the idea of "concentration-compactness"; see P.-L. Lions [21,22,23,24].

Remark 1.3. Theorem 1.2 is different from the result in [28], where a dichotomy result is obtained on the existence of extremisers to the Strichartz inequality $\|e^{-t\partial_x^3} D^{1/6} f\|_{L^6_{t,x}} \le C \|f\|_{L^2}$, which is the symmetric "endpoint" Strichartz inequality; in other words, for this Strichartz inequality, either an extremiser exists or a sequence of modulated Gaussians approximates an extremiser. The dichotomy is due to the presence of highly oscillatory terms in the refined profile decomposition; see Theorem 2.3. Another instance of a dichotomy result on extremisers to a Strichartz-type inequality is in [17]. The presence of highly oscillatory terms in the profile decomposition is not a problem for the existence of extremisers if the equation is invariant under boosts, i.e., shifts in momentum (or Fourier) space, which is the case for the Schrödinger and wave equations. The Airy equation (2) is, however, not invariant under shifts in momentum space. Hence, to get the existence of maximizers for (5), we need a profile decomposition which avoids highly oscillatory terms, which is done in Theorem 2.4.
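The admissibility condition $-\alpha + 3/q + 1/r = 1/2$ quoted above can be recovered by a standard scaling argument; the following sketch (not taken verbatim from the paper) uses the rescaling $f_\lambda(x) := f(\lambda x)$.

```latex
% Scaling sketch for the admissibility condition (standard argument).
% The Airy flow commutes with rescaling: with f_\lambda(x) := f(\lambda x),
% one has (e^{-t\partial_x^3} f_\lambda)(x) = (e^{-\lambda^3 t\,\partial_x^3} f)(\lambda x).
\[
  \| D_x^{\alpha} e^{-t\partial_x^3} f_\lambda \|_{L^q_t L^r_x}
    = \lambda^{\,\alpha - 3/q - 1/r}\,
      \| D_x^{\alpha} e^{-t\partial_x^3} f \|_{L^q_t L^r_x},
  \qquad
  \| f_\lambda \|_{L^2} = \lambda^{-1/2} \| f \|_{L^2},
\]
\[
  \text{so the estimate can hold uniformly in } \lambda
  \text{ only if } \alpha - \tfrac{3}{q} - \tfrac{1}{r} = -\tfrac12,
  \quad\text{i.e.}\quad -\alpha + \tfrac{3}{q} + \tfrac{1}{r} = \tfrac12 .
\]
```

For $q = r = 8$ this forces $\alpha = 0$, which is inequality (5); for $q = r = 6$ it gives $\alpha = 1/6$, the endpoint case discussed in Remark 1.3.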
Extremisers to the Strichartz inequalities for the Schrödinger equation and the wave equation have been studied intensively in recent years. For the Strichartz inequality for the Schrödinger equation, Kunze [20] proved the existence of extremisers to the one-dimensional Strichartz inequality by establishing that any nonnegative extremising sequence converges strongly to an extremiser in $L^2$, up to the natural symmetries of the inequality. In the low-dimensional cases, the existence of extremisers was shown by Foschi [14] and Hundertmark–Zharnitsky [16]: Gaussians are extremisers, and they are unique up to the natural symmetries of the inequality. Later works devoted to the study of the Strichartz inequality for the Schrödinger equation with different emphases include [3,6,9]. To the best of our knowledge, none of the previously known methods seems to adapt directly to finding the explicit form of extremisers to (5) in our setting. For extremisers to the Strichartz inequality for the wave equation, see [14,4].
Closely related to the Strichartz inequality for the Schrödinger equation, Christ and Shao [10,11] studied extremisers to an adjoint Fourier restriction inequality for the sphere, namely the Tomas–Stein inequality $L^2(\mathbb{S}^2) \to L^4_x(\mathbb{R}^3)$ for the two-dimensional sphere $\mathbb{S}^2$. Although the Strichartz inequality for the Schrödinger equation can be viewed as an adjoint Fourier restriction inequality for the paraboloid, the situation for the sphere differs from the paraboloid case due to the nonlocal property and the lack of scaling symmetry of the adjoint Fourier restriction operator $L^2(\mathbb{S}^2) \to L^4_x(\mathbb{R}^3)$. Nevertheless, among other things, they were able to show that an extremiser exists by proving that any extremising sequence of nonnegative functions in $L^2(\mathbb{S}^2)$ has a strongly convergent subsequence. For the existence of quasiextremals and extremisers to the convolution inequality with the surface measure on the paraboloid or the sphere, see [8,7,31].
Next we turn to the characterization of the extremisers to (5) through the corresponding generalized Euler–Lagrange equation,
$$\omega f = \big(e^{-t\partial_x^3}\big)^{*}\Big(\big|e^{-t\partial_x^3} f\big|^{6}\, e^{-t\partial_x^3} f\Big), \tag{8}$$
where $\omega$ is a Lagrange multiplier, which for extremisers $f$ is given by $\omega = A^8 \|f\|_2^6$, with $A$ the optimal constant defined in (6). The Euler–Lagrange equation (8) can be established by a standard variational argument. Traditionally, once the existence of an extremiser has been shown, its properties are deduced from studying the associated Euler–Lagrange equation. Note that in our case (8) is a highly nonlinear and nonlocal equation, which makes this a rather nontrivial task. Nevertheless, the following strong regularity result for extremisers holds.
where $\hat{f}$ is the Fourier transform of $f$. In particular, $f$ can be extended to an entire function on the complex plane.
The proof of this theorem is based on a bootstrap argument, which relies on a refined bilinear Strichartz inequality for the Airy operator $e^{-t\partial_x^3}$ and a weighted Strichartz inequality. The argument uses ideas similar to Erdogan, Hundertmark and Lee [13], which in turn is based in part on [15]. In [13], it is shown that solutions to the dispersion-managed nonlinear Schrödinger equation in the case of zero residual dispersion decay exponentially fast not only in Fourier space but also in physical space. The fact that [13] also establishes decay in physical space is essentially due to an identity enjoyed by the linear Schrödinger operator $e^{it\Delta}$, which enables one to obtain decay in physical space from decay on the Fourier side. There is no such identity for the Airy operator, and thus our Theorem 1.4 gives decay only in Fourier space. On the other hand, the decay given by Theorem 1.4 is much more rapid than even Gaussian decay.
The organization of the paper is as follows. In Section 2, we establish the linear profile decomposition. In Section 3, we show the existence of extremisers to the Airy Strichartz inequality $L^2 \to L^8_{t,x}$. In Section 4, we show that any solution to the generalized Euler–Lagrange equation, which includes the extremisers as a special case, obeys a bound of the form (9) and can be extended to an analytic function on the complex plane. This is proven by assuming an important bootstrap lemma, which we establish in Section 5.
The linear profile decomposition
Recall from the introduction that we will use the linear profile decomposition for the Airy evolution operator $e^{-t\partial_x^3}$ for $L^2$ initial data to prove the existence of extremisers for (5). Roughly speaking, the linear profile decomposition investigates the general structure of the solutions $\{e^{-t\partial_x^3} f_n\}$ for bounded $\{f_n\} \subset L^2$, and aims to compensate for the loss of compactness of the solution operator caused by the symmetries of the equation [21]. A sequence $\{e^{-t\partial_x^3} f_n\}$ is expected to be written as a superposition of concentrating waves, the "profiles", plus a negligible remainder term, where the interaction between the profiles is small; see the precise statements in Theorem 2.3 and Theorem 2.4. Profile decompositions for the nonlinear wave and Schrödinger equations, and for the gKdV equation, have been developed in [1,2,5,19,25,28]. To prepare for the linear profile decomposition theorem for the Airy evolution operator in the Strichartz norm $\|u\|_{L^8_{t,x}}$ needed in this paper, we recall two definitions from [28].

Definition 2.1. For any phase $\theta \in \mathbb{R}/2\pi\mathbb{Z}$, position $x_0 \in \mathbb{R}$ and scaling parameter $h_0 > 0$, we define the unitary transform $g_{\theta,x_0,h_0}: L^2 \to L^2$ by an explicit formula combining a phase modulation, a translation by $x_0$, and an $L^2$-normalized rescaling by $h_0$. We let $G$ be the collection of such transformations. It is easy to see that $G$ is a group which preserves the $L^2$ norm.
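For orientation, a transform of this type standardly acts by phase, translation, and $L^2$-normalized dilation; the precise normalization below is our assumption (see [28] for the convention used there):

```latex
\[
  \big(g_{\theta,x_0,h_0} f\big)(x) \;=\; e^{i\theta}\, h_0^{-1/2}\,
  f\!\Big(\frac{x-x_0}{h_0}\Big),
  \qquad
  \|g_{\theta,x_0,h_0} f\|_{L^2} = \|f\|_{L^2},
\]
```

where the $L^2$-isometry follows from the change of variables $x \mapsto h_0 x + x_0$.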
Let $D^{\alpha}$, $\alpha \in \mathbb{R}$, be the fractional derivative operator defined in terms of the Fourier multiplier, $\widehat{D^{\alpha} f} = |\xi|^{\alpha} \hat{f}$. We state the following linear profile decomposition, in the endpoint Strichartz norm, from [28].

Theorem 2.3. Let $(f_n)_{n \ge 1}$, $f_n: \mathbb{R} \to \mathbb{C}$, be a sequence of functions satisfying $\|f_n\|_{L^2} \le 1$. Then, up to a subsequence, there exist a sequence of $L^2$ functions $(\phi^j)_{j \ge 1}: \mathbb{R} \to \mathbb{C}$ and a family of pairwise orthogonal sequences $\Gamma_n^j = (h_n^j, \xi_n^j, x_n^j, t_n^j) \in (0,\infty) \times \mathbb{R}^3$ such that, for any $l \ge 1$, there exists an $L^2$ function $w_n^l: \mathbb{R} \to \mathbb{C}$ satisfying the decomposition (13) with remainder vanishing in the sense of (14). Moreover, for every $l \ge 1$, the almost-orthogonality (15) holds.

As a consequence of this theorem, we can develop a linear profile decomposition in the Airy–Strichartz norm $\|\cdot\|_{L^8_{t,x}}$, in which the highly oscillatory terms $e^{i x h_n^j \xi_n^j}$ are absorbed into the remainder: the remainder satisfies (16), and for $j \neq k$ the profiles satisfy the asymptotic orthogonality (17). Moreover, we have two orthogonality results, (18) and (19), valid for every $l \ge 1$.

Proof. The argument consists of three steps. We first see that the error term $w_n^l$ still converges to zero in the new Strichartz norm $\|\cdot\|_{L^8_{t,x}}$. Indeed, by the Sobolev embedding, $\|e^{-t\partial_x^3} u_0\|_{L^8_{t,x}} \le C \|D^{1/6} e^{-t\partial_x^3} u_0\|_{L^6_{t,x}}$; so an application of (14) yields that
\[
\limsup_{l \to \infty} \limsup_{n \to \infty} \|e^{-t\partial_x^3} w_n^l\|_{L^8_{t,x}} = 0.
\]
Secondly, we claim that, for $1 \le j \le l$, when $\lim_{n\to\infty} h_n^j \xi_n^j = \infty$, the corresponding profile vanishes in the $L^8_{t,x}$ norm, (20). This shows that the highly oscillatory terms can be reorganized into the error term. To show (20), by using the symmetries we reduce to proving the limit (21). We may assume $\phi \in \mathcal{S}$, the set of Schwartz functions, and that $\hat{\phi}$ has compact Fourier support in $(-1,1)$.
Setting $x' := x + 3tN^2$ and $t' := 3Nt$, we obtain the corresponding change-of-variables identity for some $c > 0$. Then the dominated convergence theorem yields the claimed limit. Here $e^{-it\partial_x^2}$ denotes the Schrödinger evolution operator. Indeed, by using [30, Corollary, p. 334] or integration by parts, the relevant bound holds for $n$ large enough, uniformly in $n$. It is easy to observe that $B \in L^8_{t',x'}$. Then (21) follows immediately.
Finally we claim that, for $j \neq k$, the cross terms vanish asymptotically. This is a consequence of the orthogonality condition (17), whose proof is a special case of Lemma 2.7 below. The remaining conclusions in Theorem 2.4 follow from Theorem 2.3 accordingly.
Remark 2.6. A linear profile decomposition for all non-endpoint Airy Strichartz inequalities can be established by using the first two observations in the previous lemma and Lemma 2.7. The statement is similar to Theorem 2.4 and so we omit the details.
provided that $\{(h_n^j, x_n^j, t_n^j)\}$ and $\{(h_n^k, x_n^k, t_n^k)\}$ satisfy the orthogonality condition in (17).
By using Hölder's inequality and the Strichartz inequality, followed by a change of variables, we may bound the cross term. The latter integral converges to zero as $R$ goes to infinity by the dominated convergence theorem, so we can choose a sufficiently large $R > 0$ to make it as small as we want. Likewise for $e^{-(t - t_n^j)\partial_x^3} D^{\alpha} g_n^j(\phi^j)$ restricted to $(\Omega_n^k)^c$. So, fixing a large $R$, we may restrict our attention to $\Omega_n^j \cap \Omega_n^k$. We aim to show that the integral on $\Omega_n^j \cap \Omega_n^k$ converges to zero as $n$ goes to infinity. Indeed, by using trivial $L^\infty_{t,x}$ bounds on $e^{-(t - t_n^j)\partial_x^3} D^{\alpha} g_n^j(\phi^j)$ and $e^{-(t - t_n^k)\partial_x^3} D^{\alpha} g_n^k(\phi^k)$, we see that the integral vanishes as $n$ goes to infinity. Note that $C > 0$ depends on $R$, $\|\phi^j\|_{L^1}$, and $\|\phi^k\|_{L^1}$. Thus (22) is obtained, which completes the proof of Case I.
Case II. We may now assume that $h_n^j = h_n^k$ for all $n$; we are left with the case in which the remaining parameters satisfy the lim sup condition in (17). We change variables as before. As in the proof of Case I, we may reduce to the domain $\Omega_n^k \cap \Omega_n^j$. For this case, we observe that, for any fixed large $R > 0$, the corresponding integral vanishes in the limit. This, together with the $L^\infty_{t,x}$ bounds, proves Case II. Therefore the proof of Lemma 2.7 is complete.
Remark 2.8. With Lemma 2.7 in hand, we have the following orthogonality result for $(\alpha, q, r)$ defined as in Lemma 2.7 and $l \ge 1$; see [29] for a similar proof.
Existence of extremisers
In this section we apply the linear profile decomposition Theorem 2.4 to prove the existence of extremisers for (5).
Proof. Choose an extremising sequence $(f_n)_{n \ge 1}$ such that $\|f_n\|_2 = 1$ and $\|e^{-t\partial_x^3} f_n\|_{L^8_{t,x}} \to A$. By applying the linear profile decomposition in Theorem 2.4, we see that there are a sequence of profiles $\phi^j$ and errors $w_n^l$ such that, for all $l \in \mathbb{N}$, up to a subsequence (in $n$),
\[
 f_n = \sum_{1 \le j \le l} e^{t_n^j \partial_x^3} g_n^j(\phi^j) + w_n^l.
\]
Moreover,
where the second equality follows from (16), the third equality from (19), the first inequality from the definition of $A$, and the last inequality from $\sum_j \|\phi^j\|_2^2 \le 1$; see Remark 2.5.
Thus the equalities at the beginning and at the end force all the relations in this chain to be equalities. Hence there is exactly one nonzero profile remaining: without loss of generality, we may assume that $\phi^j = 0$ for $j \ge 2$. Thus $\phi^1$ is an extremiser, as desired.
Remark 3.2. Combining this argument with the orthogonality in Remark 2.8, the existence of extremisers for any non-endpoint Strichartz inequality can be obtained similarly. We omit the details here.
Analyticity of extremisers
In this section, we establish that any extremiser f to (5) enjoys an exponential decay in the Fourier space, Theorem 1.4, from which the property of analyticity of extremisers follows easily. We begin with a bilinear Airy Strichartz estimate.
where the constant $C > 0$ is independent of $N_1$ and $N_2$.
for some $\omega > 0$. Here $\langle \cdot, \cdot \rangle$ is the inner product in $L^2$ defined by $\langle g, f \rangle = \int_{\mathbb{R}} \bar{g} f \, dx$.
Now we list some additional notation and observations that are used in the following sections. Set $Q$ as above, where $\delta$ denotes the Dirac mass. Then, using the Fourier transform to represent $e^{-t\partial_x^3} f$ and carrying out the $t$ and $x$ integrals in the definition of $Q$, using $(2\pi)^{-1} \int e^{isr}\, dr = \delta(s)$ as distributions, we may rewrite $Q$ on the Fourier side. It is then not hard to see the corresponding pointwise representation, where $f^{\vee}(x) := (2\pi)^{-1/2} \int_{\mathbb{R}} e^{ix\xi} f(\xi)\, d\xi$ is the inverse Fourier transform.
Now we define a weighted version of $M$: for any function $F: \mathbb{R} \to \mathbb{R}$, and for $\mu > 0$, $\varepsilon > 0$, we define $F_{\mu,\varepsilon}$ as in (40).

Proposition 4.5. For $F_{\mu,\varepsilon}$ defined as above, the weighted bound holds for all $\mu, \varepsilon \ge 0$.
Since $a(\eta) = 0$ imposes a constraint on $\eta^3$, the claim follows, where we have used the fact that $t \mapsto \frac{t}{1 + \varepsilon t}$ is increasing on $[0, \infty)$.

Remark 4.6. From the proof we can easily see that Proposition 4.5 remains true if $F_{\mu,\varepsilon}$ is replaced by any $\widetilde{F}$ with $\widetilde{F}(k) = F(|k|^3)$, where $F$ is increasing and $F(a + b) \le F(a) + F(b)$ for $a, b \ge 0$. Thus Proposition 4.5 holds for a much larger class of functions than the one given in (40). However, for our goal of proving Theorem 1.4, the class of functions in (40) is the one we need.
The bootstrap argument
In this section we prove Proposition 4.9, for which we have only the definition of weak solutions in (30) and the definition of $Q$ at our disposal. We set $F = F_{\mu,\varepsilon}$ for $F_{\mu,\varepsilon}$ defined in (40), and define $f_>$, $h$, and $h_>$ accordingly.

Proof of Proposition 4.9. We use $g = e^{2F(P)} f_>$ with $P = -i\partial_x$ in (30). Using that the operator $e^{2F(P)}$ simply multiplies by $e^{2F(k)}$ in Fourier space, the representation (35) of $Q$, and writing $h^{\vee}$ for the inverse Fourier transform of $h$, one obtains an identity for $\omega \|e^{F} f_>\|^2$. We estimate $A$ by using the bilinear Airy Strichartz estimate in Lemma 4.1. Recalling that $\|f\|_2 = 1$, and that
\[
 \|h_<\|_2 = \|e^{F_{\mu,\varepsilon}} f_<\|_2 \le \|e^{\mu|k|^3} f_<\|_2 \le e^{\mu s^6} \|f\|_2,\qquad
 \|h_{\ll}\|_2 = \|e^{F_{\mu,\varepsilon}} f_{\ll}\|_2 \le e^{\mu s^3} \|f\|_2,\qquad
 \|h_{\sim}\|_2 = \|e^{F_{\mu,\varepsilon}} f_{\sim}\|_2 \le e^{\mu s^6} \|f_{\sim}\|_2,
\]
we obtain
\[
 A \le C \|h_>\|_2 \left( s^{-1/4} e^{\mu s^3 - \mu s^6} + \|f_{\sim}\|_2\, e^{7\mu s^6} \right). \tag{62}
\]
Now we turn to estimate B.
Worldwide Increase of Obesity Is Related to the Reduced Opportunity for Natural Selection
The worldwide rise of obesity may be partly related to the relaxation of natural selection in the last few generations. Accumulation of mutations affecting metabolism towards excessive fat deposition is suggested to be a result of reduced purging selection. Using WHO and UN data for 159 countries, there is a significant correlation (r = 0.60, p < 0.01) between an index of the relaxed opportunity for selection (Biological State Index) and the prevalence of obesity (percentage of individuals with BMI > 30 kg/m²). This correlation remains significant (r = 0.32, p < 0.01) when caloric intake and the prevalence of insufficient physical activity are kept statistically constant (partial correlation analysis, N = 82). The correlation is still significant when gross domestic product per capita is also kept constant (r = 0.24, p < 0.05, N = 81). In recent decades, the prevalence of both obesity and underweight has increased in some countries despite no change in caloric intake or in physical inactivity prevalence. Relaxed selection against genes affecting energy balance and metabolism may contribute to the increase of fatness independently of the commonly considered positive energy balance. Diagnosis of individual predispositions to obesity at an early age and individual counselling on diet and behaviour may be appropriate strategies to limit further increases in body mass.
Introduction
Obesity prevalence has been increasing through the last several decades worldwide, while its causes are not precisely known [1]. This increase is most likely a result of complex interactions between genetic predispositions, environmental factors and human behaviour [2]. Although the main cause of obesity is a disturbed energy balance (too much food consumed relative to the low field metabolic rates of everyday existence [3]), there are increasingly noticeable differences in adiposity related to individual biological variation. People with larger gastrointestinal tracts accumulate more subcutaneous fat [4,5]. Healthy young adult males with elevated alanine transaminase activity have greater BMI values [6,7], diabetes is increasing among young people [8,9], and the increase in diabetes prevalence remains significant after adjustment for BMI, age and ethnicity [10]. Prevalence of type 1 diabetes correlates with the relaxation of the opportunity for natural selection [11].
Gene-environment interactions play a significant role in the regulation of adiposity. Phenotypic variation of body mass contains a substantial genetic component [12,13,14,15]. Recently, a number of genes upregulating metabolism towards excessive fat accumulation have been identified [16,17]. This genetic background to the regulation of body mass underlies hypotheses explaining the evolutionary origins of the current obesity problem: the 'thrifty gene' hypothesis [18,19] and the 'drifty gene' hypothesis [20]. The 'thrifty gene' hypothesis states that, due to periodic food shortages and famines in the past, natural selection increased the frequency of genes improving the ability to store excess energy as fat, while the 'drifty gene' hypothesis states that such genes accumulated when selective pressures for predation avoidance were relaxed in hominin evolution in response to technological developments and improvements in social relations. Both of these hypotheses consider significant time depth for the accumulation of genetic backgrounds to obesity, though the mechanisms they propose can still be active. Another hypothesis has been advanced [12]. It states that humans living in different geographic areas, subjected to different climatic conditions, underwent adaptations to those conditions that are related to energy balance and metabolism. As members of geographic populations recently migrated to other areas of the world, some of their climatic adaptations became disadvantageous in the new environments, producing obesity.
Here we propose a hypothesis explaining the rise of obesity by recent changes in the operation of natural selection. During the last century, the opportunity for natural selection through differential fertility and mortality has decreased very substantially [21,22], while it has been found that de novo mutations occur at a greater rate than previously thought [23] and the mutation load is substantial [24,25]. Purifying selection plays a role in controlling the mutation load [24,25]. It may be hypothesised that, with the decline in the opportunity for selection, there is an increasing quantity of heritable factors altering energy balance and metabolism. These may contribute to increasing numbers of obese individuals, as well as some increase in the numbers of overly lean individuals, even when environmental factors do not promote increasing adiposity in all members of a population. The aim of this paper is to investigate a possible coincidence of the relaxation of natural selection and the prevalence of obesity.
Materials and Methods
We have used data from United Nations files (fertility and gross domestic product per capita) and from the World Health Organisation (life tables, prevalence of adults with BMI > 30 kg/m², energy [caloric] consumption per capita, prevalence of insufficient physical activity ["physical inactivity" for short], and changes in obesity and underweight prevalence through time) for all nations for which these data were available [26,27] (http://who.int/research/en, http://www.who.int/gho/en/, http://unstats.un.org/unsd/snaama/, http://faostat.fao.org/). A spreadsheet of the data used is available as S1 Table.

The Biological State Index (I_bs) [20,21,28] was used as an index of the relaxed opportunity for natural selection in a given national population. Computation of this index requires age-specific fertility rates and age-specific survivorship from life tables. Details of the calculation of this index are described in [28,29,30,31]. The index is calculated by combining age-specific death frequency (the d_x variable of a life table) with age-specific reproductive loss (s_x): I_bs = 1 − Σ d_x s_x. Age-specific reproductive loss results from the death of a person at an age before the end of the reproductive life span. It is calculated by accumulating, up to a given age (x), annual age-specific fertility rates, expressing them as a fraction of the Total Fertility Rate, and subtracting the result from unity [28,29]: s_x = 1 − [(Σ f_x)/TFR], where Σ f_x is the sum of annual age-specific fertility rates up to age x, and TFR is the total fertility rate, the number of children born to a woman who has reached menopause. When five-year-interval age-specific fertility rates are used instead of annual ones, their sum is multiplied by five.
Values of s_x for sub-adults are by definition equal to 1, because they could not yet have produced any offspring; for people of post-reproductive age, the s_x value is by definition zero. For ages 15–49 years, s_x values are fractions. For instance, the s_x value for a person aged 30 years is the fraction of her fertility that would be lost had she died at that age: if a woman aged 30 years has already given 3 births, while her total number of births at age 50 is expected to be 6, her s_30 equals 0.50. Depending on the age-specific fertility distribution in a given population, this s_30 value may vary from 0.2 (where women limit their fertility after 30) to 0.7 (where women have children later in their lives). The value 1 − s_x gives the probability that a person will have an opportunity to pass her/his genes to the next generation. People dying young have less opportunity to pass their genes to the next generation, and thus lower reproductive potential, than people surviving to old age.
The Biological State Index, combining information about mortality by age and reproductive potential by age, gives the probability that an average individual born into a population is able to fully participate in the reproduction of the next generation, that is, to pass her/his genes on to the next generation. The lower this probability, the greater the opportunity for natural selection, since the variance in Darwinian fitness (w) is a ratio of individuals who are reproductively unsuccessful to those who are successful [32]. Based on this rule, James Crow introduced in 1958 an index of the opportunity for natural selection through differential mortality, I_m [32]. In Crow's notation, I_m = P_d/P_s, where P_d is the proportion of individuals dying before reaching reproductive age and P_s is the proportion surviving to reproductive age. In terms of I_bs, which takes into account partial reproductive success during the adult life span, the proportion of reproductively unsuccessful individuals is 1 − I_bs, while the proportion of reproductively successful individuals is I_bs. This is an improvement over Crow's index because I_bs takes into account the portion of adult mortality that truncates reproductive performance, rather than assuming that all adults survive through the whole reproductive period. Values of I_bs close to 1.00 indicate a loss of opportunity for natural selection through differential mortality because 1 − I_bs is close to zero. Thus the index is a convenient measure of the relaxation of natural selection: the higher the I_bs value, the less opportunity for selection there is. SPSS version 23.0 was used for statistical analyses.
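As a hedged illustration of the computation described above, the following sketch evaluates I_bs = 1 − Σ d_x s_x from five-year life-table and fertility inputs. The function name and all numbers are illustrative, not the paper's actual WHO/UN data.

```python
def biological_state_index(dx, asfr, ages):
    """I_bs = 1 - sum_x d_x * s_x for 5-year age intervals.

    dx:   age-specific death frequencies (a life table's d_x), summing to 1
    asfr: annual age-specific fertility rates, one value per 5-year interval
    ages: starting age of each interval, e.g. 0, 5, ..., 50
    """
    tfr = 5 * sum(asfr)  # five-year intervals: multiply the sum by 5
    loss = 0.0           # accumulated reproductive loss, sum of d_x * s_x
    cum = 0.0            # fertility accumulated before the current interval
    for d, f, age in zip(dx, asfr, ages):
        if age < 15:
            sx = 1.0                      # sub-adults: no offspring yet
        elif age >= 50:
            sx = 0.0                      # post-reproductive: no loss
        else:
            sx = 1.0 - (5 * cum) / tfr    # s_x = 1 - (cumulative fertility)/TFR
        loss += d * sx
        cum += f
    return 1.0 - loss

# Illustrative inputs: a cohort in which everyone survives past age 50
ages = list(range(0, 55, 5))
asfr = [0, 0, 0, 0.02, 0.08, 0.08, 0.05, 0.03, 0.01, 0.005, 0]
dx_all_old = [0.0] * 10 + [1.0]
# biological_state_index(dx_all_old, asfr, ages) -> 1.0 (no selection opportunity)
```

If instead all deaths occurred at age 0, every s_x would equal 1 and the index would be 0, reflecting maximal opportunity for selection.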
Results
I_bs values for individual countries indicate a very significant reduction of the opportunity for natural selection in the 21st century (Fig 1). The range of I_bs values for this century is from 0.635 (Burkina Faso) to 0.994 (Iceland and Cyprus); the arithmetic mean is 0.927 (sd = 0.080).
In 40 (25%) of the countries of the world, values of I_bs equal at least 0.985; in the next 23 countries (14%), they exceed 0.975. When these values are expressed as the variance of Darwinian fitness (w = [1 − I_bs]/I_bs), we obtain an average of 0.088. This is four times lower than 100 years ago (0.22, [21]). The regression of obesity prevalence by country on I_bs values is an exponential function with a correlation coefficient of 0.61 (Fig 2, p < 0.001). A relationship of similar strength is indicated by the non-parametric Spearman rho of 0.56 (p < 0.001).
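An exponential regression of the kind described above can be sketched by least squares on log-transformed prevalence. The country values below are made up for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical (illustrative) country-level values
ibs = np.array([0.70, 0.80, 0.88, 0.93, 0.96, 0.99])       # Biological State Index
obesity = np.array([2.0, 4.0, 8.0, 14.0, 20.0, 30.0])       # % with BMI > 30

# Fit obesity = a * exp(b * I_bs) by fitting a line in log space
b, log_a = np.polyfit(ibs, np.log(obesity), 1)
a = np.exp(log_a)

# Correlation between I_bs and log-prevalence, the analogue of the
# exponential-regression correlation coefficient reported in the text
r = np.corrcoef(ibs, np.log(obesity))[0, 1]
```

With a positive fitted slope b, the model predicts prevalence rising steeply as I_bs approaches 1.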
It is obvious that one can expect greater values of I_bs and a greater prevalence of obesity in more affluent countries, simply because their health services are better and energy-rich foods are more available. This may cause a spurious correlation of I_bs and obesity prevalence. Gross domestic product per capita, caloric intake and physical inactivity all correlate significantly with the prevalence of BMI > 30 kg/m² (Table 1). Since the Pearson product-moment correlation coefficients for these relationships, and for I_bs and obesity, are similar to the non-parametric Spearman rho coefficients (Table 1), it is possible to use Pearson coefficients to calculate partial correlations between I_bs and obesity when the three "confounding" variables are kept statistically constant in various combinations (Table 2). The partial correlation (r = 0.32) between I_bs and obesity prevalence remains clearly significant (p < 0.01) when both caloric intake and physical inactivity levels are kept constant. The same is true when caloric intake and GDP are stabilised (r = 0.24, p < 0.01), and even when all three confounders are kept statistically constant the correlation between the index of selection and obesity prevalence is of similar magnitude (r = 0.24) and remains significant, though at a lower level (p < 0.05) due to a smaller sample size caused by missing data on physical inactivity.
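A minimal sketch of the partial-correlation analysis, assuming the standard residual-based definition of partial correlation (the data here are synthetic, not the study's):

```python
import numpy as np

def partial_corr(x, y, controls):
    """Pearson partial correlation of x and y with the columns of
    `controls` held statistically constant: regress x and y on the
    controls (OLS with intercept) and correlate the residuals."""
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic demo: x and y both driven by a confounder z
rng = np.random.default_rng(0)
z = rng.normal(size=500)
x = z + rng.normal(size=500)
y = z + rng.normal(size=500)

r_raw = np.corrcoef(x, y)[0, 1]                  # inflated by z
r_part = partial_corr(x, y, z.reshape(-1, 1))    # z held constant: near zero
```

The raw correlation is substantial while the partial correlation collapses toward zero, which is exactly the spurious-correlation scenario the controls in Table 2 are designed to rule out.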
Stepwise multiple regression analysis (SPSS Statistics 23, probability of F to enter ≤ 0.05, to remove ≥ 0.10), using obesity prevalence as the dependent variable and gross domestic product, caloric intake, physical inactivity prevalence and I_bs as independent variables, selected GDP as the variable having the greatest influence on obesity, with R² = 0.421, while the opportunity for natural selection (I_bs) was placed second, increasing R² to 0.457. The other variables (caloric intake and physical inactivity) were removed by the analysis as having no statistically significant influence on the prevalence of obesity.
Discussion
Official statistics used by international bodies can only be as accurate as the information collected by the various national agencies that report them. Thus, due to some technical errors of reporting, ideal correlations cannot be expected. The presence of significant correlations in the large sample of countries studied here is indicative of possible interrelationships among the variables studied, even if the values of the correlation coefficients are not high. Parallel changes in the prevalence of obesity and the increase of I_bs values during the 20th century in the two countries for which such data were available seem to support our observations. In Australia, the adult female BMI average increased from 23 kg/m² in 1926 to 28 kg/m² in 2002, while I_bs values rose in the same period from 0.86 to 0.99 (Table 3). In an analysis of Polish conscript body mass [33], it was found that the prevalence of obesity, overweight and also underweight increased from 1965 to 2001, while caloric intake decreased slightly [33]. In the same period, however, I_bs values for the Polish population increased (own calculations from WHO data; Fig 3).
In a number of countries (examples in Fig 4), an increase in the prevalence of obesity, similar to the Polish situation, has been accompanied by the prevalence of underweight remaining constant or even increasing. With a constant percentage of cases in one extreme category, an increase in the frequency of the other extreme category must be a result of an increase in total variance, not just an upward shift of all values. In everyday terms, it is not that everybody is getting fatter, but that the number of fat people increases while many people remain thin. This supports our hypothesis that the number of metabolic faults is increasing. Since low body mass is now considered a desirable feature, increasing numbers of thin people are not as noticeable, or alarming, as those of obese individuals, though some of them may also be a result of metabolic faults. It is noteworthy that the increases in the prevalence of obesity, and sometimes underweight, may not be paralleled by increased caloric intake or rising levels of physical inactivity, as suggested by our correlation analysis, though this requires further study.
The finding of a significant correlation between the values of the Biological State Index and the prevalence of obesity, with caloric intake and physical inactivity statistically controlled, and despite possible inaccuracies of official statistics, indicates a possible increasing prevalence of genetically determined metabolic disorders as a consequence of natural selection relaxed by the increasing affluence of national populations. Metabolic disorders with a genetic background may produce, as one of their signs, increased adiposity. In this situation, public health campaigns aimed at reducing food consumption and increasing exercise cannot be fully effective in reducing the prevalence of obesity. It is necessary to improve the understanding of the changing metabolic causes of abnormal fat accumulation and to develop effective methods of treating increasing adiposity.
Anti-Biofilm Property of Bioactive Upconversion Nanocomposites Containing Chlorin e6 against Periodontal Pathogens
Photodynamic therapy (PDT) based treatment of periodontal disease has received extensive attention. However, the deep tissue location of periodontal plaque creates a bottleneck for conventional PDT. Herein, upconversion fluorescent nanomaterials with near-infrared excitation were introduced into the treatment of periodontal disease, overcoming the limited tissue penetration depth of the visible light used in PDT. Photosensitizer Ce6 molecules were combined with upconversion nanoparticles (UCNPs) NaYF4:Yb,Er using a novel strategy. The hydrophobic UCNPs were modified with an amphiphilic silane, whose hydrophobic chains bind to the hydrophobic groups of the UCNPs through hydrophobic-hydrophobic interactions, and the Ce6 molecules were loaded in this hydrophobic layer. This achieves both the conversion of the hydrophobic surface to a hydrophilic one and the loading of the oily photosensitizer molecules. Because the excitation band of the Ce6 molecule lies in the red region, Mn ions were doped to enhance the red emission and thus improve the PDT function. These Ce6-loaded UCNP composites with efficient red upconversion luminescence show a remarkable bactericidal effect on Porphyromonas gingivalis, Prevotella intermedia and Fusobacterium nucleatum and the corresponding biofilms under 980 nm irradiation, indicating high application prospects in the treatment of periodontal diseases.
Introduction
With the changes in human living habits, periodontal disease has become the "top chronic killer" among oral diseases [1]. Periodontal treatment includes surgical and non-surgical methods. Subgingival scaling and root planing can promote recovery of the treated area, but may also expose patients to additional risks of gum bleeding, pain, and infection during treatment, increasing patient discomfort [2]. In recent years, non-surgical methods have attracted more and more attention, with the aim of avoiding the above problems in controlling and treating periodontal disease. Photodynamic therapy is one of the most important non-surgical treatment methods, combining special photosensitizers with bio-optical techniques. This phototherapy method in dentistry, also known as antimicrobial PDT (aPDT), displays efficient bactericidal performance against oral pathogens [3]. With the emergence of drug-resistant strains due to the abuse of antibiotics and bacterial mutations, aPDT becomes especially important. In our design, the hydrophobic side chains of an amphiphilic silane bind to the hydrophobic groups on the surface of UCNPs [20]. In the NaYF4:Yb,Er/Ce6 composites, after hydrolysis, the silane forms a very thin hydrophilic coating, which means the hydrophobic UCNPs are converted to hydrophilic ones without increasing their size or affecting the luminescent centers on the surface of the UCNPs, since the hydrophobic side chain of the silane does not touch the UCNP surface. In addition, this design avoids the leakage problem of conventional photosensitizer carriers based on physical adsorption, such as photosensitizer loading in mesoporous UCNP@mSiO2 nanoparticles [19], which may lead to serious leakage and systemic toxic side effects. As the PDT function of the Ce6 molecule must be triggered by red-light excitation, Mn doping is employed in this work, which greatly increases the probability of the red emission transition [18,21].
As a result, the enhanced upconversion red emission greatly improved the PDT effect on periodontal disease. The Ce6-loaded UCNP composites with efficient red upconversion luminescence could potentially be employed for aPDT applications.
Preparation of NaYF4:Yb3+,Er3+-Mn UCNPs
NaYF4:Yb3+,Er3+ UCNPs were synthesized by a high-temperature thermal decomposition method. The specific experimental steps are as follows: 2.5 mmol NaOH and 4 mmol NH4F were dissolved in 5 mL of methanol. 0.78 mmol YCl3·6H2O, 0.20 mmol YbCl3·6H2O and 0.02 mmol ErCl3·6H2O were added to a three-necked flask, dissolved in 15 mL octadecene and 6 mL oleic acid, and stirred. After the contents of the flask had dissolved at 80 °C, the flask was cooled to 30 °C, the methanol solution was added dropwise, and the mixture was magnetically stirred for 30 min. The solution was then heated to 125 °C to evaporate the methanol. After the methanol had completely evaporated, the solution was heated to 320 °C under nitrogen for one hour. Once the solution had cooled to room temperature, the product was washed with a 1/1 mixture of ethanol and deionized water to give the final sample. Note that without protection the products are rapidly oxidized by oxygen in the air and the desired sample cannot be obtained; nitrogen is therefore necessary as an inert protective atmosphere. For the NaYF4:Yb3+,Er3+,Mn2+ UCNPs, the corresponding amount of MnCl2 was added together with the lanthanide precursors, at Mn doping ratios of 0%, 10%, 20% and 30%, respectively [21].
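As a sanity check on the precursor amounts, the chloride quantities for a given Mn doping fraction can be sketched in a few lines of Python. This is a minimal illustration only; it assumes Mn2+ substitutes for Y3+ so that the total cation amount stays fixed at 1 mmol, which the synthesis description does not state explicitly.

```python
def precursor_mmol(total_mmol=1.0, yb=0.20, er=0.02, mn=0.0):
    """Split a fixed total cation amount among YCl3, YbCl3, ErCl3 and MnCl2.

    yb, er, mn are molar doping fractions. Assumption (not stated in the
    paper): Mn2+ replaces Y3+, so Y takes whatever fraction remains.
    """
    y = total_mmol * (1.0 - yb - er - mn)  # remainder of the cation budget
    return {"YCl3": y,
            "YbCl3": yb * total_mmol,
            "ErCl3": er * total_mmol,
            "MnCl2": mn * total_mmol}

# Undoped recipe reproduces the paper's 0.78/0.20/0.02 mmol split;
# 30% Mn doping would then call for 0.48 mmol YCl3 and 0.30 mmol MnCl2.
base = precursor_mmol()
doped = precursor_mmol(mn=0.30)
```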
Preparation of NaYF4:Yb3+,Er3+@Ce6@Silane Composites with and without Mn Doping
The amphiphilic silane has a hydrophobic alkane chain 18 carbon atoms long. The silane was first dissolved in tetrahydrofuran at a silane/NaYF4 mass ratio of 1/2. Ce6 molecules were then added to the mixed solution of NaYF4:Yb3+,Er3+ and silane in a certain ratio and mixed under ultrasonication for half an hour. The mixed solution was then quickly injected into ammonia water at pH 9. After 3 h of hydrolysis, the product was transferred into a dialysis bag (molecular weight cutoff of 8000-14,000) for dialysis. The resulting upconversion composite not only provides upconversion fluorescent labeling, but also has a PDT function. Mn-doped UCNPs followed the same modification process with Ce6 and silane.
Characterization of NaYF4:Yb3+,Er3+@Ce6@Silane Composites with and without Mn Doping
XRD measurements of the prepared samples were carried out on a Rigaku D/max 2550 X-ray diffractometer over a test angle of 15 to 80 degrees. The morphology of the synthesized samples was examined with a Hitachi H-800 transmission electron microscope (TEM) (Hitachi, Ltd., Tokyo, Japan) at an operating voltage of 200 kV. Fourier transform infrared (FTIR) spectra were recorded on a Shimadzu DT-40 model 883 IR spectrophotometer (Shimadzu Co., Kyoto, Japan) to characterize the organic groups on the surface of the UCNPs. Specifically, 3 mL of each sample (UCNPs and UCNPs@silane at an approximate concentration of 4 µg/mL) was mixed with potassium bromide powder, placed in an oven at 60 °C, and dried for two days until the liquid had completely disappeared. The zeta potential of the UCNPs was determined on a zeta potential instrument (Zetasizer Nano-Z, Malvern Instruments Limited, UK). The absorption spectra of the samples were measured on a Shimadzu UV-1800 UV-visible spectrophotometer (Shimadzu Co., Kyoto, Japan). A SENS-9000 spectrometer (Zolix Instruments Co. Ltd., Beijing, China) was used to measure the excitation and emission spectra of the samples. An Andor Shamrock SR-750 spectrometer (Andor Technology Ltd., Tokyo, Japan) with a continuous 980 nm diode laser was used to record the upconversion spectra of the nanoparticles; the upconversion signals were collected by photomultiplier tubes and monochromators, with a detection range of 300 nm to 750 nm.
Dark Cytotoxicity of Mn-Doped UCNPs
L929 mouse fibroblasts were used to assess dark cytotoxicity. The cells were obtained from the Institute of Biochemistry and Cell Biology of the Chinese Academy of Sciences (Shanghai, China), and the study was approved by the Institutional Review Board of the Jilin University School of Dentistry. The fibroblasts were cultured in DMEM medium (HyClone, Logan, UT, USA) containing 1% antibiotics (100 U/mL penicillin and 100 µg/mL streptomycin) and 10% fetal bovine serum (Gibco, Carlsbad, CA, USA) at 37 °C in an atmosphere containing 5% CO2. For the dark cytotoxicity of NaYF4:Yb3+,Er3+@Ce6@silane, L929 cells were seeded in a 96-well plate at a density of 5000 cells per well for 24 h. The cells were then incubated with different concentrations of NaYF4:Yb3+,Er3+@Ce6@silane for another 24 h in darkness. Cell viability was determined by CCK-8 assay (7sea biotech Ltd., Shanghai, China) following the manufacturer's instructions. Wells without any nanoparticles served as calibrators. Percentage survival was calculated relative to the untreated control sample, which was set as 100%. All measurements were performed in three replicates.
aPDT of NaYF4@Ce6@Silane against Biofilm Formation on Dentin Squares
Caries-free human molars were extracted for the preparation of dentin samples, with approval from the Institutional Review Board of the Jilin University School of Dentistry (Ref. H20170062); these served as the substrates for biofilm formation.
Square dentin samples of 5 × 5 mm (thickness of about 1 mm) were prepared, ground with silicon carbide paper, and sterilized by autoclaving (134 °C, 15 min). Before every experiment, the samples were exposed to UV light for 30 min and then immersed in saliva at 37 °C for 2 h to pre-coat the salivary film. The saliva sample was pooled from fifteen healthy donors who had not taken any antibiotics for 3 months. Donors refrained from tooth brushing for 24 h and from food/beverages for 2 h before donating. Cell debris was removed from the saliva by centrifugation at 3000 rpm for 20 min. The supernatant was sterilized through a sterile 0.22 µm filter (VWR International, Radnor, PA, USA). The salivary pellicle on the dentin disks was formed by immersing the disks in sterile saliva at 37 °C [22].
Three single-strain biofilms were prepared, one for each bacterial species [22]. Each species was inoculated (10^8 CFU/mL) onto a salivary pellicle-coated dentin square in a 24-well plate. NaYF4@Ce6@silane and Mn-doped UCNPs were then used to treat the samples at a concentration of 100 µg/mL, followed by irradiation with a 980 nm laser at 750 J·cm−2 for 3 min. The medium containing NaYF4@Ce6@silane NPs was renewed every 24 h. The dentin square with the adhered biofilm was transferred to a new 24-well plate and treated with 980 nm light irradiation in the same way.
A mixture of SYTO 9 (2.5 µM) and propidium iodide (2.5 µM) (Molecular Probes, Invitrogen, Eugene, OR, USA) was used to stain each sample for live/dead bacterial analysis [23]. A confocal laser scanning microscope (CLSM) was employed for biofilm imaging. All experiments were conducted three times, and five images were randomly selected from each sample. For each of the three bacterial species, 15 images per group were obtained.
For the five groups and three bacterial species, a 5 × 3 full factorial design was used for CFU counting. Biofilms were collected from the samples by mechanical scraping, and the bacterial deposits and samples were transferred to vials with 2 mL cysteine peptone water (CPW) and dispersed by ultrasound. Diluted biofilm suspensions were streaked onto Columbia blood agar and cultured at 37 °C for 48 h under anaerobic conditions (80% N2, 10% H2 and 10% CO2), and the CFU were then counted, taking the dilution factor into account [24].
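The back-calculation from colony counts to viable counts per millilitre is simple arithmetic. A minimal sketch (the colony count, dilution factor and plated volume below are hypothetical, not values from this study):

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    """Viable count = colonies x dilution factor / plated volume (mL)."""
    return colonies * dilution_factor / plated_volume_ml

# e.g. 42 colonies counted on a plate streaked with 0.1 mL of a
# 10^4-fold dilution corresponds to 4.2e6 CFU/mL in the original suspension
count = cfu_per_ml(42, 1e4, 0.1)
```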
Polysaccharide Production of 4-day Biofilms
The water-insoluble polysaccharide of the biofilms of (A) P. gingivalis, (B) P. intermedia and (C) F. nucleatum on dentin squares was quantified via a phenol-sulfuric acid method [23]. The 4-day biofilms were immersed in a vial with 2 mL CPW and collected by sonication/vortexing [23]. A precipitate was obtained by centrifugation, rinsed twice with PBS, and resuspended in 1 mL of deionized water. Then, 1 mL of 6% phenol was added to the vial, followed by 5 mL of 95-97% sulfuric acid, and the mixture was incubated at 25 °C for half an hour. Next, 100 µL of this solution was transferred into a 96-well plate. The amount of polysaccharide in the biofilm was measured from the absorbance at 490 nm with a microplate reader (SpectraMax M5, Sunnyvale, CA, USA). A standard curve with six glucose concentrations of 0, 5, 10, 20, 50 and 100 mg/L was used to convert OD signals to polysaccharide concentrations [23].
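Converting OD490 readings to polysaccharide concentrations via the glucose standard curve amounts to a linear fit and its inversion. A minimal sketch in pure Python (the OD values are fabricated for illustration; only the glucose concentrations match the text):

```python
# Hypothetical standard-curve data: glucose standards (mg/L) vs. OD490
conc = [0.0, 5.0, 10.0, 20.0, 50.0, 100.0]
od = [0.02, 0.07, 0.12, 0.22, 0.52, 1.02]  # fabricated readings

# Ordinary least-squares line: OD = slope * conc + intercept
n = len(conc)
mean_c = sum(conc) / n
mean_o = sum(od) / n
slope = (sum((c - mean_c) * (o - mean_o) for c, o in zip(conc, od))
         / sum((c - mean_c) ** 2 for c in conc))
intercept = mean_o - slope * mean_c

def od_to_mg_per_l(sample_od):
    """Invert the standard curve: OD490 reading -> glucose-equivalent mg/L."""
    return (sample_od - intercept) / slope
```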
Statistical Analysis
The sample size was determined by a power analysis. A random sequence generated by the encoder was used for the random allocation in this study. All data were checked for normal distribution with the Kolmogorov-Smirnov test. Significant differences were assessed by two-way ANOVA with Tukey's post-hoc test for pairwise comparisons (p < 0.05). Statistical analyses were performed with SPSS 19.0 (SPSS, Chicago, IL, USA).
Results and Discussions
The schematic in Figure 1 illustrates the synthesis of the UCNPs@Ce6@silane composites. Ce6 molecules are hydrophobic in nature and difficult to introduce into the in vivo environment. Here, the Ce6 molecules were loaded and coated with a very thin silane layer with a thickness of 2-3 nm. When the upconversion nanocomposite is endocytosed by bacteria, the UCNPs emit green and red light under 980 nm NIR excitation, and the Ce6 molecules within the hydrophobic layer are excited by the upconversion red emission, performing the aPDT function. Singlet oxygen is highly cytotoxic and can efficiently damage a variety of biomolecules, such as proteins, nucleic acids and lipids; in this design, the Ce6 molecules are well encapsulated and triggered only in the infected area. As shown in Figure 2A,B, the synthesized UCNPs and the silane-modified UCNPs were characterized by TEM. The size distribution of both NPs is very uniform, about 25 nm for the UCNPs and 30 nm for the silane-modified UCNPs. After silane coating, a thin silane layer about 2-3 nm thick can be seen on the particle surface, as indicated by the black arrows in Figure 2B. When a single nanoparticle is examined by high-magnification TEM, the thin silane layer on the surface can be clearly resolved, as indicated by the white arrow. The FTIR spectra before and after silane coating are shown in Figure 2C. After modification with the 18-carbon-chain silane, new characteristic peaks of Si-O-Si and Si-OH appear compared to the unmodified UCNPs, indicating successful silane modification. Moreover, the intensity of the characteristic -CH2 peak increases relative to the unmodified UCNPs due to the long carbon chain, consistent with the experimental conditions.
The zeta potentials of NaYF4:Yb3+,Er3+@Ce6@silane and of the composites with Mn doping levels of 10%, 20% and 30% are −28.5 mV, −27.8 mV, −33.5 mV and −32.8 mV, respectively, indicating high water solubility and colloidal stability.
It should be noted that, in addition to the complexity of the oral structure, the main difficulty of this NIR-triggered PDT system lies in the design of the upconversion material. In some previous photodynamic systems, photosensitizers were mostly supported on nanocarriers by physical adsorption. For example, the photosensitizer zinc phthalocyanine can be loaded into the mesopores of UCNP@mSiO2 NPs [19]. Though efficient production of 1O2 was realized with such physically adsorbed photosensitizers, they still face serious leakage problems. The FRET efficiency therefore becomes low in the PDT process, and systemic toxic side effects can result. In the present coating strategy, aimed at improving photodynamic efficiency, the UCNPs were modified with silane, achieving water solubility and biocompatibility. In addition, the stable and very thin coating layer encapsulates the Ce6 molecules well via hydrophobic-hydrophobic interaction, and the subsequent hydrolysis of the silane further forms a compact composite, avoiding Ce6 leakage.
Silanes of different carbon chain lengths were also tested. Considering the higher loading amount of Ce6 molecules, the silane with an 18-carbon chain was employed as the coating layer. The maximum amount of Ce6 that can be loaded in the hydrophobic layer was investigated by fixing the amounts of UCNPs and silane and varying the Ce6 content to test the stability. The mass of UCNPs was fixed at 8 mg, and the mass of Ce6 was varied from 300 to 1000 µg. Under these conditions, the Ce6 content could not exceed 800 µg, otherwise a flocculent precipitate formed due to leakage of the hydrophobic molecules (data not shown).
The structures of the UCNPs with and without Mn doping were measured by XRD, as shown in Figure 2D. The results confirmed a pure hexagonal phase for NaYF4 without Mn doping. After Mn ions were introduced into the nanocrystals, the crystal structure of the UCNPs changed from the hexagonal to the cubic phase. In addition, as the Mn doping increased, the diffraction peak of the (111) plane shifted toward larger angles (shown in the inset), further confirming the successful Mn doping. Note that the Mn doping does not influence the morphology or size of the UCNPs.
The upconversion luminescence of the silane-coated UCNPs and Mn-doped UCNPs was investigated. Upconversion luminescence is obtained via an anti-Stokes mechanism. The spectra of the different Mn-doped samples were excited with a 980 nm continuous diode laser at a power of 1 W. In the upconversion emission spectra in Figure 2E, the green emission is located at 528 and 546 nm and the red emission at 660 nm, corresponding to transitions from the 2H11/2, 4S3/2 and 4F9/2 excited states to the ground state 4I15/2 of Er3+, respectively. Yb3+ ions serve as sensitizers, absorbing 980 nm photons efficiently and transferring the energy to the activator Er3+, thus completing the upconversion green and red emissions. The different Mn-doped samples showed no change in peak positions in the emission spectra, but the green/red emission ratio varied with the Mn doping amount. The presence of Mn2+ ions greatly influences the transition probabilities between the green and red emissions of Er3+. As the Mn doping content increases from 0 to 30%, the proportion of red light gradually increases. This fine-tuning of the red/green emissions can be attributed to nonradiative energy transfer from the 2H9/2 and 4S3/2 levels of Er3+ to the 4T1 level of Mn2+, followed by back-energy transfer to the 4F9/2 level of Er3+, as shown in Figure 2F, resulting in an enhanced red emission output under rational control of the Mn2+ doping level. The energy transfer from the 4S3/2 level to the Mn ion is supported by lifetime measurements, since such non-radiative transitions reduce the lifetime. Relative to the NaYF4:Yb3+,Er3+@Ce6@silane UCNPs, all of the Mn-doped UCNPs show a decreased lifetime of the green emission level.
In addition, as the Mn doping content increases, the lifetime decreases gradually, because the non-radiative transition is much faster than the radiative transition, as shown in the inset of Figure 2E, indicating that more efficient energy transfer to the Mn ions occurs and hence a further enhancement of the red emission. This luminescence property is beneficial for Ce6-based aPDT, because the excitation of the Ce6 molecule is located in the red region and, in addition, the Ce6 molecule sits on the surface of the UCNPs at a very short distance, facilitating the upconversion aPDT.
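The lifetime argument above can be made quantitative: for a non-radiative transfer channel that shortens the donor lifetime, the standard estimate of the transfer efficiency is eta = 1 − tau_doped/tau_undoped. A small sketch (the lifetime values are hypothetical, not measurements from this work):

```python
def transfer_efficiency(tau_doped, tau_undoped):
    """Energy-transfer efficiency inferred from donor lifetime shortening:
    eta = 1 - tau_d / tau_0 (lifetimes in the same units)."""
    return 1.0 - tau_doped / tau_undoped

# e.g. a green-level lifetime dropping from 100 us (undoped) to 70 us
# (Mn-doped) would imply ~30% transfer efficiency to the Mn channel
eta = transfer_efficiency(70.0, 100.0)
```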
The absorption spectrum of the Ce6 molecule inside the composites and the upconversion fluorescence spectrum of the 30% Mn-doped UCNPs are shown in Figure 3A. The absorption in the red region is the primary excitation band of the Ce6 molecule for singlet oxygen production, and this region overlaps completely with the upconversion red emission band of the UCNP carrier. Therefore, the enhanced upconversion red emission can further improve the aPDT effect. In addition, since the Ce6 molecule is located in the hydrophobic thin layer of the NPs, this energy transfer is very efficient for stimulating the PDT function. In the present study, up to 30% Mn was doped into the UCNPs to enhance the red emission; too much foreign-element doping would change the crystal lattice and thereby alter the function of the UCNPs. Within this doping range (10-30%), the red fluorescence intensity increases with Mn doping, while the overall fluorescence intensity decreases as the Mn content further increases. Previous studies have demonstrated that introducing sufficient Mn2+ ions into NaYF4:Yb/Er leads to a brilliant red emission, up to 15-fold brighter than that of the Mn-free sample [25]. Therefore, within this doping range, the highest red emission intensity was obtained, which is better for aPDT efficacy. In addition, at these doping levels the green emission is retained and can be further used for fluorescence imaging. The combined use of the upconversion green light as an imaging signal and the red light as the PDT treatment source needs to be investigated further.
The upconversion PDT function of the composite nanomaterials was tested using the singlet oxygen probe ABDA (9,10-anthracenediyl-bis(methylene)dimalonic acid) [26]. Figure 3B shows the absorption spectra of the nanocomposites under red light at 2 min intervals. As the irradiation time increases, the absorption of ABDA at 260 nm decreases, indicating the generation of singlet oxygen. Furthermore, the absorption band at 400 nm shows a similar tendency, because the Ce6 molecules are also consumed via photodegradation under red light. It should be noted that the detection efficiency of singlet oxygen by the probe absorption is not very high, because the 980 nm excitation area is usually small while the diffusion region of singlet oxygen in solution is relatively large. This test proves that the composite material can produce singlet oxygen, indicating successful material design and preparation. For periodontal disease treatments, the small 980 nm laser irradiation area is sufficient for sterilization. The dark cytotoxicity of NaYF4:Yb3+,Er3+@Ce6@silane NPs was investigated with L929 mouse fibroblast cells by CCK-8 assay, as shown in Figure 3C. The UCNPs show very good biocompatibility: the cells were still 90% viable at a concentration of 200 µg/mL, indicating very low cytotoxicity. This can be attributed to the silane modification and the negatively charged surface, which reduce cytotoxicity, showing great potential for new photosensitizer carriers in dental applications. Upconversion red-light-triggered PDT was first tested in biofilm experiments. The samples were irradiated for 3 min with a 980 nm continuous diode laser adjusted to 750 J·cm−2 by tuning the irradiation area. Representative results of the live/dead analysis are shown in Figure 4. P. gingivalis, P. intermedia and F. nucleatum were selected as the bacteriostatic models in this work, simulating the early, middle and late stages of biofilm development, respectively, as colonizers of plaque biofilms [27]. Live bacteria were stained green, appearing mainly in the control group, and dead bacteria were stained red. For all three bacteria, NaYF4@Ce6@silane plays an efficient aPDT role. More and more dead bacteria appear in the groups treated with NaYF4@Ce6@silane and Mn-doped NaYF4@Ce6@silane NPs under 980 nm irradiation. The red staining increases with Mn doping, due to the enhanced upconversion red emission; the correspondingly enhanced aPDT from Ce6 caused more and more dead bacteria. Note that the power density of the laser used in this study is strong enough for application at the cell level in vitro. In addition, although the targeted periodontal bacteria are located in the deepest periodontal pockets, usually 5-8 mm from the gingival margins, the power of the irradiation laser can still reach the threshold value. It has been reported that a 980 nm laser can penetrate more than 0.7 cm of pork tissue without an obvious reduction in power density [17,19,28].
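The ABDA bleaching described above is commonly summarized as a pseudo-first-order rate constant, taken from the slope of ln(A0/At) versus irradiation time. A minimal sketch with fabricated absorbance readings (not the measured values from Figure 3B):

```python
import math

# Hypothetical ABDA absorbance readings taken every 2 min under irradiation
t = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]        # minutes
A = [1.00, 0.82, 0.67, 0.55, 0.45, 0.37]   # fabricated decay, roughly exp(-0.1 t)

# Pseudo-first-order bleaching: ln(A0/At) = k * t.
# Fit the slope through the origin: k = sum(t*y) / sum(t^2).
y = [math.log(A[0] / a) for a in A]
k = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)  # min^-1
```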
Several kinds of UCNPs have been applied to convert NIR light to visible or UV light to trigger PDT in tumor therapy or antibacterial applications [29-31]. Gulzar et al. synthesized a nanocomposite based on nanographene oxide-UCNPs-Ce6 as a theranostic platform for upconversion luminescence imaging-guided PDT/PTT of cancer [29]. The tremendous surface area of graphene oxide allowed it to house Ce6 as well as UCNPs. Remarkably, both the imaging and the dual-mode treatments in this nanoplatform are stimulated by light, which provides outstanding gains in terms of augmenting cancer-killing specificity and decreasing side effects [29]. Numerous UCNP-based nanomaterials with a variety of structures for photodynamic cancer therapy were summarized in a recent study [32]. On the other hand, for antibacterial applications, Zhang et al. developed a photosensitizer (β-carboxyphthalocyanine zinc, CPZ) delivery system with UCNPs (LiYF4:Yb/Er) and polyvinylpyrrolidone (PVP) [30]. Such a near-infrared (NIR)-triggered UCNPs-CPZ-PVP system significantly reduced the aggregation of CPZ and presented high anti-infectious activity against multi-drug-resistant bacteria (methicillin-resistant Staphylococcus aureus by 4.7 log and multi-drug-resistant Escherichia coli by 2.1 log). Another study demonstrated dual antibacterial behavior, induced by the curcumin-UCNPs themselves and by photodynamic therapy [31]. The results showed that nearly 100% of methicillin-resistant Staphylococcus aureus was eradicated using curcumin-UCNPs under NIR irradiation. However, to the best of our knowledge, the present study is the first report on the application of near-infrared light to achieve photodynamic therapy for periodontitis treatment. We combine the upconversion luminescent material and the photosensitizer so that near-infrared light, with its high tissue penetration depth, can be utilized.
More importantly, traditional aPDT has had a minimal effect on the viability of microorganisms organized in a bacterial biofilm, probably due to the hydrophobic nature of most photosensitizer molecules, which reduces penetration of the photosensitizer into the biofilm matrix. The present study developed a silane coating as the shell of the UCNPs and embedded Ce6 inside the thin layer. This design improves the hydrophilicity of the nanoparticles and thus overcomes the drainage of gingival crevicular fluid and the high saliva turnover [24]. Furthermore, the energy transfer is more efficient for aPDT triggering because the Ce6 sits within the thin layer of the NPs. Therefore, this is a new exploration for the treatment of oral periodontitis, which is of great significance.
There are two possible aPDT mechanisms, which can be described as follows. The triplet-state photosensitizer Ce6 can undergo either a type I or a type II reaction. Type I: excited triplet Ce6 reacts directly with macromolecules (proteins, nucleic acids, lipids, etc.), generating free radicals or radical ions by electron transfer, which further react with oxygen molecules to form highly reactive oxides such as hydroxyl radicals and peroxides. Type II: triplet Ce6 reacts with surrounding ground-state oxygen molecules, generating singlet oxygen, which is highly cytotoxic through oxidation and peroxidation of cellular structures, microbial attack, destruction of the cell wall, and membrane system damage, thus affecting microbial metabolism and leading to cell death [33,34].
In this work, the NaYF4@Ce6@silane nanocrystals release singlet oxygen through a type II reaction, which can penetrate into plaque and subsequently kill bacteria. Note that nanosized materials can enter microorganisms by endocytosis, as proven in previous studies [35-37]. In the present study, the high permeability of the silane-modified UCNPs is responsible for the efficient reactive oxygen generation and PDT effect.
Regarding periodontitis treatment, aPDT that produces ROS would likely not exacerbate the inflammatory response, for the following reasons. First, the role of PDT in inflammation is a complex process. Indeed, in tumor therapy, PDT can produce a strong inflammatory response with infiltration of neutrophils, mast cells, lymphocytes, monocytes and macrophages [38]. However, PDT can also increase the stability of interleukin 10 (IL-10) RNA and/or increase the transcription efficiency of IL-10, an anti-inflammatory cytokine that inhibits cell-mediated immune responses [39]. Gollnick et al. [40] reported that PDT can change the activity of a gene promoter and increase the expression level of IL-10. Therefore, these two processes may co-exist in PDT therapy. Second, the 1O2 diffusion distance is in general only about 100 nm and its half-life is <0.04 µs [41], so the photosensitizer should be precisely delivered to the area of periodontal disease. Hence, the distance of 1O2 diffusion to the bacterial cells is of significant importance for the aPDT activity. Henderson et al. proposed that 1O2-induced photodamage from porphyrin activation is usually localized to within 0.1 µm of its site of release [42]. Therefore, when the sterilization process is completed, the energy disappears and the inflammation-promoting effect on normal tissues is limited. These should be the main reasons for the efficient antibacterial properties without obvious inflammation. Alternatively, incorporating anti-inflammatory agents (such as ceria, etc.) into the design of nanoparticles for aPDT is a promising approach to reduce potential risks in the future [43]. Considering the complicated oral structure and the fact that the infectious area is usually in deep tissue, these 980 nm laser-excited UCNPs with large penetration depth are highly suitable photosensitizer carriers and energy transfer donors for antibacterial applications.
In addition, the Ce6 molecules are located on the surface of the UCNPs at a very short distance because of the very thin silane-modified layer, giving a very short excitation distance. Therefore, together with the enhanced upconversion red emission from the NaYF4 UCNPs, efficient 1O2 production can be obtained for antibacterial action against periodontal pathogens. The CFU assay is essential for evaluating a new antimicrobial method. In this work, the CFU of 4-day biofilms of (A) P. gingivalis, (B) P. intermedia and (C) F. nucleatum after aPDT were measured and are shown in Figure 5. In the aPDT treatments, the biofilm matrix was readily disrupted thanks to the deeper-penetrating infrared light. The values for single-strain biofilms on dentin differ among samples treated with differently Mn-doped NaYF4@Ce6@silane NPs. The control group shows the highest CFU value. After treatment with the NPs and 980 nm irradiation, the CFU underwent significant reductions for all three species compared to the control groups (p < 0.05). Among the bacterial species, the NaYF4@Ce6@silane and NaYF4-Mn10%@Ce6@silane groups show similar CFU results, while the CFU values decrease further as the Mn doping increases. A similar trend was observed for all three bacterial species with the same aPDT procedure, indicating the universality of this upconversion aPDT agent. The CFU counts of the 4-day biofilms of all three bacteria showed a reduction of more than 2 log with the Mn30%-doped NPs. This high efficacy against periodontitis-related biofilms should be attributed to the highly hydrophilic surface after silane modification, as well as to the upconversion luminescence-triggered aPDT. It is known that, relative to planktonic bacteria, biofilms are much more difficult to kill because of the extracellular polymeric substance formed inside the biofilm, which greatly resists the entry of conventional antibacterial agents [44].
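The "more than 2 log" reduction quoted above is simply the base-10 logarithm of the ratio of control to treated viable counts. For example (with hypothetical counts, not values from Figure 5):

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """log10 reduction in viable counts relative to the untreated control."""
    return math.log10(cfu_control / cfu_treated)

# a drop from 1e8 to 1e6 CFU/mL corresponds to a 2-log (99%) reduction
r = log_reduction(1e8, 1e6)
```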
In such a situation, the singlet oxygen from PDT can play an important role thanks to its efficient diffusion, oxidation of amino acids, and DNA damage, provided the aPDT agent carrier is located at the site of disease. In this work, we further employed upconversion aPDT with deep tissue penetration, which can solve the problem of bacterial treatment in deep periodontal tissue.
The polysaccharide production of the biofilms was measured because polysaccharide is produced by live bacteria and thus relates to bacterial viability. Extracellular polymeric substance (EPS) protects pathogens from antibacterial agents and contributes to the virulence and pathogenicity of pathogens via small-molecule-mediated inter- and intra-species crosstalk. It is mainly composed of polysaccharides, proteins and extracellular DNA and accounts for about 90% of the total mass of the biofilm [45]. The polysaccharide in EPS can be regulated by external stimuli, since most glycoproteins are located on the outer membrane of Gram-negative bacteria. The polysaccharide production of the biofilms of (A) P. gingivalis, (B) P. intermedia and (C) F. nucleatum on dentin squares is plotted in Figure 6. For each species, the control groups without UCNPs show similar polysaccharide amounts (p > 0.1). The polysaccharide production of all three species was greatly reduced by the addition of NaYF4-Mn@Ce6@silane NPs under 980 nm irradiation compared to the control group (p < 0.05). This is probably correlated with the decreased number of bacteria after the aPDT treatments. Also, since increased Mn doping enhances the upconversion red emission and thus the aPDT function, the polysaccharide production decreased with increasing Mn doping concentration from 0% to 30%. Therefore, the reduction in EPS via aPDT could reduce or destroy the protection of all three bacterial species.

Figure 6. Polysaccharide production by 4-day biofilms of (A) P. gingivalis, (B) P. intermedia and (C) F. nucleatum on dentin squares (mean ± sd; n = 6). Dissimilar letters indicate significant differences between groups (p < 0.05).
Clinically, the subgingival biofilm is an aggregation of multiple bacterial species. Multispecies biofilms are more challenging to eradicate than single-species biofilms and planktonic bacteria [46]. The dominant early-attaching species are streptococci and members of the yellow and purple complexes, such as Actinomyces spp., which soon develop into a polymicrobial community [47]. However, most studies have been done on planktonic bacteria or single-species biofilms, without regard for the complex microbial and biochemical changes occurring simultaneously [24]. As a standardized simple model for in vitro culturing, intact single-species biofilms are easy to realize in a normal laboratory. Within the limitations of this in vitro study, the biofilm model provided a simple means of determining the antimicrobial efficacy of the novel Mn-doped nanocomposite under NIR irradiation against three key periodontal pathogens. This methodology may be more clinically representative than methods that do not consider the microorganisms in biofilms [48]. However, it still does not reproduce what happens clinically in the periodontal pocket. In such an environment, several mechanisms allow the growth and selection of several microorganisms, even after treatment. Therefore, further studies should focus on the susceptibility of multispecies biofilms to the novel nanocomposite-based aPDT and confirm the effects in animal studies.
Conclusions
NaYF4@Ce6@silane NPs with an upconversion-triggered aPDT function were designed for the treatment of periodontal disease. This upconversion fluorescent nanomaterial can overcome the limited tissue penetration depth of visible light in conventional PDT, owing to its infrared light excitation. The efficient photosensitizer Ce6 molecules were combined with NaYF4:Yb,Er NPs by an amphiphilic silane method, through a hydrophobic-hydrophobic interaction between the hydrophobic side chain of the silane and the hydrophobic groups of the UCNPs, avoiding leakage of the Ce6 molecules. With 980 nm NIR light excitation, the upconversion red emission can efficiently trigger the aPDT effect, because the primary excitation band of Ce6 molecules totally overlaps with the upconversion red emission band of the UCNPs carrier, and the Ce6 molecules are located in the hydrophobic thin layer of the NPs. Mn ions were doped into the UCNPs to enhance the red emission and improve the energy transfer to Ce6 molecules for stimulating the aPDT function. The novel UCNPs had excellent biocompatibility and low cytotoxicity, with above 90% cell viability at a concentration of 200 µg/mL. By investigating the aPDT function against the three bacterial species P. gingivalis, P. intermedia and F. nucleatum, an enhanced therapeutic effect on the bacteria and the corresponding biofilms was obtained as the Mn doping amount increased from 0 to 30%. Especially for the Mn30%-doped UCNPs, the CFU counts of the 4-day biofilms of all three species showed a logarithmic reduction of more than 2 log. Moreover, biofilms treated with Mn30%-doped UCNPs had the lowest polysaccharide production among all groups. This upconversion aPDT design can overcome the problems of conventional PDT and has important application prospects in the treatment of periodontal diseases.
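The ">2 log" criterion quoted above is simply the base-10 logarithm of the ratio of control to treated CFU counts; a minimal sketch with illustrative counts (not the study's data):

```python
import math

def log_reduction(cfu_control: float, cfu_treated: float) -> float:
    """Log10 reduction in colony-forming units relative to an untreated control."""
    if cfu_control <= 0 or cfu_treated <= 0:
        raise ValueError("CFU counts must be positive")
    return math.log10(cfu_control / cfu_treated)

# Example: a drop from 1e7 to 5e4 CFU/mL is a ~2.3-log reduction,
# which would exceed the >2-log threshold reported for the Mn30% group.
print(round(log_reduction(1e7, 5e4), 2))  # 2.3
```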
Author Contributions: X.S., L.W. and Y.Z. designed the experiments, interpreted the data, and co-wrote the paper with T.Z. T.Z., D.Y., M.Q., X.L. and L.F. carried out the syntheses, characterizations, biological measurements, and data analyses. T.Z., X.S., L.W. and Y.Z. discussed and commented on the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
|
2019-07-27T13:05:05.899Z
|
2019-07-24T00:00:00.000
|
{
"year": 2019,
"sha1": "e9ab67abf671d4b82146e6fc71a36d118a8dad7e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/24/15/2692/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c7ee1bd6314e28b96332c29ff76b35171ee53d79",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
255977062
|
pes2o/s2orc
|
v3-fos-license
|
SNCG promotes the progression and metastasis of high-grade serous ovarian cancer via targeting the PI3K/AKT signaling pathway
The poor prognosis of patients with ovarian cancer is mainly due to cancer progression. γ-Synuclein (SNCG) has been reported as a critical player in cancer metastasis. However, its biological roles and mechanisms are still incompletely understood in ovarian cancer, especially in high-grade serous ovarian cancer (HGSOC). This is a retrospective study of 312 patients with ovarian cancer at a single center between 2006 and 2016. Ovarian cancer tissues were stained by immunohistochemistry to analyze the relationship between SNCG expression and clinicopathologic factors. Clinical outcomes versus SNCG expression level were evaluated by the Kaplan–Meier method and multiple Cox regression analysis. Next, systematic functional experiments were performed to examine the proliferative and metastatic effects of SNCG both in vitro and in vivo using loss- and gain-of-function approaches. Furthermore, the mechanisms of SNCG overexpression were examined by a human phospho-kinase array kit and western blot analysis. Clinically, the expression of SNCG was significantly upregulated in ovarian cancer compared with borderline and benign tumors, normal ovary, and fallopian tube. Notably, a high level of SNCG correlated with high-risk clinicopathologic features and poor survival for patients with HGSOC, indicating that it is an independent prognostic factor for these patients. Functionally, we observed that overexpression of SNCG promoted cell proliferation, tumor formation, migration, and invasion both in vitro and in vivo. Mechanistically, we identified that SNCG promoted cancer cell metastasis through activating the PI3K/AKT signaling pathway. Our results reveal that SNCG up-regulation contributes to the poor clinical outcome of patients with HGSOC and highlight the metastasis-promoting function of SNCG via activation of the PI3K/Akt signaling pathway in HGSOC.
Background
Ovarian cancer is the most lethal gynecological malignancy [1]. Globally, approximately 295,000 new cases and 184,000 deaths from ovarian cancer occurred in 2018 [2]. More than 70% of patients are diagnosed with advanced disease (stage III/IV), which may be due to the absence of early symptoms and of sensitive and specific biomarkers for early diagnosis. Histologically, the majority of ovarian cancers are epithelial ovarian cancer (EOC). In recent years, it has been recommended that EOC be divided into two subtypes owing to their distinctive morphologic and molecular genetic features [3]. Type I is composed of low-grade serous, endometrioid, clear cell, and mucinous carcinomas. Type II is mainly composed of high-grade serous ovarian cancer (HGSOC), which shows the most aggressive behavior and correlates with poorer clinical outcomes than the other subtypes [4,5]. Despite progress in treatment, high rates of recurrence and metastasis correlate with the poor prognosis of ovarian cancer [6]. Therefore, it is urgent to identify novel targets and understand the mechanisms underlying ovarian cancer proliferation and metastasis.
γ-Synuclein (SNCG) is one member of the synuclein family (α-synuclein, β-synuclein, and SNCG) and was first named breast cancer-specific gene 1 (BCSG1) [7]. To date, overexpression of SNCG has been demonstrated in multiple malignant solid tumors, including breast, ovarian, uterine, liver, and cervical cancers [8][9][10]. Besides, SNCG up-regulation is related to tumorigenesis and metastasis [10][11][12]. Studies suggest SNCG may be a potential prognostic marker and therapeutic target in cancer progression, but the association of SNCG overexpression with patient survival is controversial in EOC [13,14]. Furthermore, one study has reported that SNCG might enhance the migration of ovarian cancer cells by activating RHO-family small GTPases and ERKs [15]. However, the molecular mechanism by which SNCG promotes ovarian cancer cell proliferation and metastasis is still not well understood.
Therefore, the present study aims to systematically examine the association of SNCG expression with clinicopathologic features and survival outcomes, its effects on biological functions both in vitro and in vivo, and the mechanisms that might be involved in ovarian cancer progression and metastasis.
Patients and specimens
The 312 EOC tissues were obtained from the Department of Pathology, Chongqing Medical University, between January 2006 and September 2016. All patients underwent primary surgery followed by cisplatin-based chemotherapy. No patients received neoadjuvant chemotherapy or radiation therapy. The borderline tumor tissues (n = 21), benign tumor tissues (n = 24), normal ovaries (n = 26), and fallopian tubes (n = 28, resected for non-ovarian diseases) were collected from patients who received gynecological surgeries in the First Affiliated Hospital of Chongqing Medical University between April 2014 and December 2016. All specimens were reevaluated for histological type and grade by three senior pathologists. The study protocols have been approved by the local Medical Ethics Committee, and written informed consent was obtained from the patients.
Cell lines and cell culture
Human ovarian tumor cell lines SKOV3, OVCAR3, OVCAR5, ES-2 were purchased from Keygen Biotech (Jiang Su, China). HO-8910, HO-8910 PM cell lines were purchased from Cell Bank of the Chinese Academy of Science (Shanghai, China). The HEY cell line was purchased from Shanghai Genechem (Shanghai, China). All cell lines were authenticated by profiling of short tandem repeat analysis. Cells were cultured in RPMI1640 (Invitrogen, Carlsbad, CA) supplemented with 10% fetal calf serum, penicillin (100 U/ml, Gibco) and streptomycin (100 μg/ml, Gibco). Cells were incubated in a humidified atmosphere containing 5% CO 2 at 37°C and checked regularly for mycoplasma infection.
Immunohistochemical staining (IHC)
The paraffin-embedded blocks were cut at 4 μm and mounted on slides. Following the deparaffinization, rehydration, blocking endogenous peroxidase, and antigen retrieval, the tissues were incubated with antibodies overnight at 4°C. After washing in PBS, the tissues were incubated with the secondary antibodies for 30 min at 37°C. Finally, sections were stained with a DAB staining solution and counterstained with hematoxylin. Negative control slides were incubated with normal goat serum. The immunostains were scored based on the intensity and the percentage of stained cells as described previously. Scores of 0 to 4 were defined as low expression, whereas scores of 5 to 9 were defined as high expression [16].
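The scoring rubric itself is given in the paper's ref. [16]; the sketch below assumes one common composite scheme (staining intensity 0-3 multiplied by an extent category 0-3), chosen only because it yields the 0-9 range the paper describes. The extent cutoffs are therefore assumptions, but the low/high split (0-4 vs. 5-9) is the paper's own:

```python
def ihc_score(intensity: int, percent_positive: float) -> int:
    """Composite IHC score; assumed rubric: intensity (0-3) x extent (0-3)."""
    if not 0 <= intensity <= 3:
        raise ValueError("intensity must be 0-3")
    # Extent categories are assumed; the paper's exact cutoffs are in its ref. [16].
    if percent_positive <= 0:
        extent = 0
    elif percent_positive <= 10:
        extent = 1
    elif percent_positive <= 50:
        extent = 2
    else:
        extent = 3
    return intensity * extent

def classify(score: int) -> str:
    # Per the paper: scores 0-4 = low expression, 5-9 = high expression.
    return "low" if score <= 4 else "high"

print(classify(ihc_score(3, 80)))  # strong staining in most cells -> high
```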
Confocal immunofluorescence microscopy
Cells were seeded on sterile glass coverslips at 37°C overnight and then fixed with 4% paraformaldehyde for 10 min, followed by blocking with 10% normal goat serum at 37°C for 30 min. Subsequently, cells were incubated with the SNCG, MMP9, Vimentin, and F-actin primary antibodies at 4°C overnight. After washing, cells were incubated with the secondary antibodies for 1 h. The F-actin cytoskeleton was visualized by staining with Phalloidin for 30 min following DAPI nuclear staining for 5 min. All images were taken with a laser scanning confocal microscope.
Western blot analysis
Cells were harvested and lysed with the lysis buffer (Beyotime, Jiangsu, China) and centrifuged at 12,000 g for 10 min at 4°C. Following extraction, the total protein was determined with BCA protein assay kits (Beyotime, Jiangsu, China). Next, equal amounts of protein (40 μg) were loaded onto 6-12% sodium dodecyl sulfate-polyacrylamide gels and electrophoretically transferred to polyvinylidene fluoride membranes (Millipore, MA, USA). Membranes were blocked with 5% non-fat dry milk in TBS with 0.1% Tween-20 for 1 h at 37°C and then incubated with primary antibodies overnight at 4°C, followed by exposure to secondary antibodies for 1 h at 37°C. GAPDH and β-actin were used as internal controls. Protein bands were detected by enhanced chemiluminescence plus detection reagents (Beyotime, Jiangsu, China).
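The paper normalizes blots to GAPDH and β-actin but does not spell out the arithmetic; a typical densitometry normalization (an assumption, not the authors' stated procedure) divides each target band by its loading control and scales to a reference lane:

```python
def normalize_band(target_intensity: float, control_intensity: float,
                   ref_target: float, ref_control: float) -> float:
    """Relative protein level: target/loading-control, scaled to a reference lane.

    A value of 1.0 means the same loading-corrected level as the reference
    (e.g., scramble-control) lane; values below 1.0 indicate reduced expression.
    """
    sample_ratio = target_intensity / control_intensity
    reference_ratio = ref_target / ref_control
    return sample_ratio / reference_ratio

# Illustrative intensities: knockdown lane vs. scramble-control lane.
print(round(normalize_band(500, 1000, 800, 1000), 3))  # 0.625
```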
Quantitative real-time PCR (qRT-PCR)
Total RNA was extracted using Trizol reagent (Takara, Japan), and a cDNA reverse transcription kit (Takara, Japan) was used to synthesize the first-strand DNA following the manufacturer's instructions. Briefly, each well contained a reaction volume of 25 μl, and the reaction was carried out using a SYBR Green Kit and a CFX96™ real-time PCR Detection System (BioRad, USA). The primer sequences are listed in Table S1. The 2^-ΔΔCt method was used to determine the mRNA expression levels, with GAPDH as an internal control.
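The 2^-ΔΔCt calculation can be written out directly. Here GAPDH is the internal control, as in the paper; the Ct values in the example are illustrative, not the study's data:

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Relative mRNA level by the 2^-ddCt method.

    ct_target/ct_ref: target and GAPDH Ct in the sample of interest;
    ct_target_ctrl/ct_ref_ctrl: the same in the calibrator (control) sample.
    """
    delta_sample = ct_target - ct_ref          # dCt, sample
    delta_control = ct_target_ctrl - ct_ref_ctrl  # dCt, calibrator
    return 2 ** -(delta_sample - delta_control)   # 2^-ddCt

# A target amplifying 2 cycles earlier (lower Ct) than in the calibrator,
# with GAPDH unchanged, corresponds to a 4-fold up-regulation.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # 4.0
```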
Cell proliferation and colony formation assays
For cell viability assays, cells (500 per well) were seeded in 96-well plates and cultured for 24 h to 96 h. Briefly, 20 μl of MTT solution was added to each well. After incubation for 4 h at 37°C, 150 μl of DMSO was added to each well. Finally, A450 was measured using a microplate reader. To analyze colony formation, cells (200 per well) were seeded in 24-well plates for 2 weeks, and colonies were then fixed with 4% paraformaldehyde and stained with 0.5% crystal violet. Colony formation was also assessed using a two-layer soft agar assay. Cells (1 × 10³ per well), mixed with 1.5 ml of a 0.3% top agar solution in RPMI1640 medium containing 10% FBS, were plated over a solidified base agar layer (1.5 ml of 0.6% agar solution) in 6-well plates. After 3 weeks, colonies containing over 50 cells were counted.
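Percent viability from the MTT readings is conventionally computed relative to the untreated control; the paper does not give the formula, so this is a standard sketch with illustrative absorbance values:

```python
def percent_viability(a450_treated: float, a450_control: float,
                      a450_blank: float = 0.0) -> float:
    """Cell viability relative to the untreated control, from A450 readings.

    a450_blank is an optional background (medium-only) reading subtracted
    from both the treated and control wells.
    """
    corrected_treated = a450_treated - a450_blank
    corrected_control = a450_control - a450_blank
    return 100.0 * corrected_treated / corrected_control

# Illustrative readings: a treated well at half the control absorbance.
print(percent_viability(0.5, 1.0))  # 50.0
```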
Cell migration and invasion assays
Transwell assays were used to evaluate cell migration and invasion. Inserts with 8-μm pores (BD Bioscience, CA, USA) coated with a 1:5 dilution of Matrigel (BD Bioscience, CA, USA) were used in the invasion assay, while membranes without Matrigel coating were used in the migration assay. The protocols were otherwise the same for both assays. Cells (1 × 10⁴ per well), suspended in medium containing 5% FBS, were seeded in the upper chambers. Next, medium containing 10% FBS was added to the lower chambers as a chemoattractant. After a 24 h incubation at 37°C, cells on the lower side were fixed with 4% paraformaldehyde, stained with 0.5% crystal violet, photographed and counted under a microscope. In addition, a wound-healing assay was used to assess cell migration ability. Cells were seeded into 6-well plates (1 × 10⁶ per well) and cultured until 90% confluence. Subsequently, cells were scratched with a 200-μL sterile tip and washed with PBS twice to remove the detached cells. Cells were allowed to grow for 24 h in serum-free medium. The wound margins were observed and photographed under a microscope.
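For the wound-healing assay, migration is commonly quantified as percent wound closure from the scratch widths measured on the micrographs at 0 h and 24 h. The paper only reports photographing the margins, so the calculation below is a conventional sketch with illustrative widths (e.g., in μm):

```python
def wound_closure_pct(width_0h: float, width_24h: float) -> float:
    """Percent wound closure after 24 h from scratch widths at 0 h and 24 h."""
    if width_0h <= 0:
        raise ValueError("initial wound width must be positive")
    return 100.0 * (width_0h - width_24h) / width_0h

# Illustrative measurement: an 800-unit scratch narrowing to 200 units.
print(wound_closure_pct(800.0, 200.0))  # 75.0
```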
Tumor metastasis in vivo
To study the effect of SNCG on tumor growth in vivo, 2 × 10⁶ cells per mouse, trypsinized and resuspended in PBS, were injected intraperitoneally into 6-week-old athymic female nude mice (n = 5/group). After five weeks, the mice were sacrificed and autopsied. Visible metastatic implants were harvested and weighed. Excised tumor tissues were formalin-fixed and paraffin-embedded for subsequent IHC. All animal experiments were approved by the Institutional Animal Care Committee at Chongqing Medical University, Chongqing, China.
Human phospho-kinase array
The human phospho-kinase array kit (ARY003B) was purchased from R&D Systems and used according to the manufacturer's protocol. Briefly, 1.0 mL of array buffer was first added to each well for 1 h at room temperature. After washing, SKOV3 (Scr/sh1) cell lysates (400 μg per well) were added to each well of the array membranes overnight at 4°C. Membranes were then washed and incubated with detection antibodies for 2 h at room temperature. Next, membranes were exposed to streptavidin-HRP for 30 min on a rocking platform. After washing, protein bands were detected by enhanced chemiluminescence for 1 min and exposed to film. The experiment was performed three times.
Statistical analysis
Statistical analysis was done using IBM SPSS Statistics for Mac (Version 23.0, IBM, USA). Continuous data are presented as mean ± standard deviation (SD), and categorical data are expressed as frequencies. The Pearson chi-square test was used to assess the correlations between SNCG expression and clinicopathological features. Student's t-test was used for analyzing two-group differences. All experiments were carried out at least three independent times. The relationship between SNCG expression and patient survival was examined by the log-rank test with the Kaplan-Meier method. Multiple Cox regression analysis was used to identify independent risk factors associated with survival. All tests were two-tailed, with P < 0.05 considered significant.
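The Kaplan-Meier estimate used for the survival comparison can be reproduced with a short product-limit calculation; the toy cohort below (follow-up in months, with 1 = event and 0 = censored) is illustrative, not the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival estimates at each event time.

    times: follow-up times (e.g., months); events: 1 = event observed,
    0 = censored. Returns a list of (time, S(time)) pairs.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Group all subjects tied at this follow-up time.
        tied = [e for tt, e in data if tt == t]
        deaths = sum(tied)
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= len(tied)
        i += len(tied)
    return curve

# Toy cohort: events at months 5, 10, 15; censored at months 10 and 20.
for t, s in kaplan_meier([5, 10, 10, 15, 20], [1, 1, 0, 1, 0]):
    print(t, round(s, 2))  # 5 0.8 / 10 0.6 / 15 0.3
```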
A high level of SNCG expression was observed in 68.3% of EOC tissues by IHC staining. The expression of SNCG was significantly increased in the EOC tissues compared with the borderline tumor, benign tumor, normal ovary, and fallopian tube (Fig. 1a-b). Moreover, representative staining images of SNCG expression in different pathological types are also shown in Fig. 1b. The association between SNCG expression and clinicopathologic parameters of EOC was subsequently analyzed. Results showed that up-regulated SNCG positively correlated with high CA125 values (p = 0.003), advanced stage (p < 0.001), high grade (p = 0.018), HGSOC (p < 0.001), and suboptimal debulking tumors (p = 0.015), but was not associated with age, tumor histology, or ascites (Table 1). Furthermore, we explored the relationship between SNCG expression and the survival time of EOC patients. The median progression-free survival (PFS) and overall survival (OS) were 19 months and 38 months, respectively. Kaplan-Meier analysis indicated significantly worse PFS for patients in the SNCG-high expression group than for those in the SNCG-low expression group (p = 0.007). However, the data failed to show a statistically significant correlation between SNCG expression and OS (p = 0.058, Fig. 1c). Patients harboring HGSOC are mostly diagnosed at advanced stages and have shorter progression-free survival. Thus, we hypothesized that overexpression of SNCG might be related to the survival of HGSOC patients. As expected, a significant correlation was identified between SNCG up-regulation and clinical outcome (PFS and OS) in patients with HGSOC. Moreover, multivariate analysis showed that SNCG was an independent prognostic factor for HGSOC patients (Fig. 1d).
SNCG accelerates cell proliferation, facilitates cell migration and invasion in vitro
To determine the contribution of SNCG to the malignant behaviors of ovarian cancer cells, we first used Western blot to assess the endogenous expression of SNCG in different cell lines. The results showed that SNCG was highly expressed in SKOV3 and HO-8910 PM cells, whereas a low level of SNCG was detected in OVCAR3 (Fig. 2a). We chose these three HGSOC cell lines for further study. Next, we lentivirally introduced specific SNCG shRNA into SKOV3 and HO-8910 PM cells and successfully established stable cell lines (SKOV3-sh1 and HO-8910 PM-sh3, Fig. 2b). The MTT and colony formation assays showed that knockdown of SNCG suppressed viability and colony formation in SKOV3-sh1 and HO-8910 PM-sh3 cells (Fig. 2c-d). Using transwell assays, we observed that the migratory and invasive capacities of SKOV3-sh1 and HO-8910 PM-sh3 cells were noticeably inhibited (Fig. 2e). In addition, we overexpressed SNCG in OVCAR3 cells (OVCAR3-Ove). As expected, the ectopic expression of SNCG significantly enhanced EOC cell proliferation and clonogenicity (Fig. 2c-d). Meanwhile, the transwell assay showed a substantial increase in cell migration and invasion of OVCAR3-Ove cells (Fig. 2e). The effect of SNCG on cancer cell morphology was then evaluated by F-actin cytoskeleton staining combined with FN exposure. We observed that cell polarization and lamellipodia formation were limited in SNCG-knockdown cells compared with control cells. Furthermore, knockdown of SNCG reduced the expression levels of MMP9 and Vimentin (Fig. 2f and g). Thus, SNCG silencing inhibited the migratory capacity of ovarian cancer cells, which might contribute to its metastatic inhibition in ovarian cancer.
SNCG promotes cell growth and metastasis in vivo
To determine the potential impact of SNCG on cell growth and metastasis in vivo, we intraperitoneally injected SNCG-knockdown (SKOV3-sh1 and HO-8910 PM-sh3) and SNCG-overexpression (OVCAR3-Ove) cells into female athymic BALB/c nude mice to form peritoneal metastases. Mice were sacrificed after five weeks. Consistent with human disease, overt metastatic lesions were attached to the mesentery, omentum, and liver. Compared with the control groups, both the number of metastatic nodules and the tumor weight were significantly reduced in the SNCG-knockdown groups. In contrast, the SNCG-overexpression group showed significantly increased metastatic colonization and implant weight (Fig. 3a). Next, tumor sections were stained by IHC to evaluate proliferation, epithelial-mesenchymal transition (EMT) and metastasis-associated markers. As shown in Fig. 3b-f, we found that knockdown of SNCG was accompanied by reduced Ki-67, cyclin D1, MMP2, MMP9, N-cadherin, and Vimentin and increased E-cadherin immunostaining in metastatic tissues. These results were mirrored in the SNCG-overexpression group. Together, the data corroborated the in vitro results and provided evidence that SNCG is critical for the establishment of ovarian cancer metastasis in vivo.
SNCG induces ovarian cancer progression through activating the PI3K/AKT signaling pathways
Since loss- and gain-of-function of SNCG could regulate the aggressive behaviors of ovarian cancer cells both in vitro and in vivo, we tried to identify the underlying molecular mechanism. Thus, we performed a human phospho-kinase array, which covers kinases closely related to cell proliferation and migration. As shown in Fig. 4a-b, 13 kinases significantly changed after knockdown of SNCG in SKOV3 cells. The phosphorylation of kinases within the PI3K/Akt signaling pathway, including AKT1/2/3 (Ser473) and its downstream kinases p70S6K (Thr389) and mTOR (Ser2448), was down-regulated in SKOV3-sh1 cells relative to control cells. These three kinases, members of the PI3K/Akt family, have been reported to be involved in the induction of EMT, which is a critical process in the pathogenesis of metastasis [17,18]. Since recent studies indicate a role of SNCG in the PI3K/Akt signaling pathway in cancers [19,20], we chose to further focus on the effect of SNCG on the PI3K/Akt signaling pathway in HGSOC. To verify that SNCG overexpression promoted EOC progression through regulating the PI3K/Akt pathway, we examined the phosphorylated proteins using western blot analysis. Our results showed that the expression levels of p-Akt, p-p70S6K, and p-mTOR were significantly decreased after knockdown of SNCG in SKOV3 and HO-8910 PM cells, and markedly increased after exogenous overexpression of SNCG in OVCAR3 cells (Fig. 4c). Furthermore, we observed that IGF-1 (a PI3K agonist) could restore the expression of p-Akt, p-p70S6K, and p-mTOR down-regulated by knockdown of SNCG (Fig. 4d). In contrast, the high levels of these phosphorylated proteins could be decreased by LY294002 (a PI3K inhibitor) in SNCG ectopic-expressing cells (Fig. 4e). Moreover, the colony formation assay and the wound healing assay confirmed that SNCG regulates cell proliferative and invasive abilities via the PI3K/Akt pathway in vitro (Fig. 4f).
Overall, the PI3K/Akt signaling pathway activation is involved in SNCG-mediated promotion of HGSOC proliferation and metastasis.
Discussion
Despite advances in radical surgery and chemotherapy, the majority of patients who suffer from advanced HGSOC still face high rates of metastasis and poor prognosis, leaving an urgent need for effective prevention and therapeutic measures [21][22][23]. Emerging evidence has indicated that SNCG is highly expressed and tightly associated with tumor progression and metastasis, including in gynecologic conditions (endometriosis and cervical, endometrial, and ovarian cancers) [19,[24][25][26]. In the present study, we systematically investigated the expression, biological function, and mechanism of SNCG in HGSOC. Our data showed that SNCG was significantly upregulated in EOC tissues and that overexpression of SNCG correlated with clinicopathological factors, including high CA125 level, HGSOC, high-grade disease, advanced stage, and suboptimal debulking surgery. These findings are consistent with previous studies [13,27]. Survival analysis has revealed that increased SNCG expression correlates with worse prognosis in breast cancer and endometrial cancer [25,28], yet data on clinical outcomes in EOC were lacking and unclear. Strohl et al. found that SNCG overexpression did not correlate with patient survival, even though it was overexpressed in metastatic tumors and associated with high-risk clinical variables [13]. Additionally, Fekete et al. reported that increased expression of SNCG correlated with poor PFS in a meta-analysis of a larger dataset of EOC samples [14]. There are two histotypes of EOC (type I and type II), which appear to be profoundly different diseases in terms of etiology, morphology, protein expression, and molecular profile. Studying them together may be the reason for the unclear outcomes. In our cohort, we revealed that a higher level of SNCG had a significant correlation with worse PFS and OS in HGSOC patients, which indicates its role in HGSOC as well as its value as a novel prognostic marker of HGSOC aggression.
Some studies have shown that SNCG could be detected in the circulation of patients with cancer [29,30]. Future studies are needed to evaluate the potential of serum SNCG levels as a biomarker for HGSOC patients. Collectively, our findings indicate that SNCG overexpression is associated with HGSOC progression as well as an important prognostic marker of survival in patients with HGSOC.
The cellular behaviors of SNCG have been demonstrated to promote tumor growth and invasion in breast, gastric, and oral squamous cancers [11,12,31]. However, little research exists investigating the function of SNCG in HGSOC. Pan et al. indicated that SNCG upregulation enhanced cell migration in breast and ovarian cancers using the Boyden chamber assay [26]. Thus, we performed a series of cellular functional experiments. Consistently, our findings revealed that silencing of SNCG could significantly block cell proliferation, colony formation, migration, and invasion. In contrast, overexpression of SNCG could enhance the malignant behaviors of ovarian cancer in vitro. The changes in the morphology and cytoskeletal reorganization of SKOV3 and HO-8910 PM cells supported the effect of SNCG on the process of cell invasion. Furthermore, the results collected from the nude mice (in vivo) supported and verified SNCG-mediated cell progression and metastasis. Together, these observations underscore the importance of SNCG in cell proliferation and metastatic potential.
Fig. 2 (caption) SNCG accelerates ovarian cancer cell proliferation and facilitates cell migration and invasion in vitro. a Western blot analysis of SNCG expression in different ovarian cancer cell lines. b The transfection efficiency was confirmed by Western blotting and qRT-PCR in SKOV3, HO-8910 PM, and OVCAR3 cells. c The MTT assay was used to detect ovarian cancer cell viability. d A soft agar assay was used to examine the proliferation of ovarian cancer cells. e Cell migration and invasion capabilities were determined using transwell assays (original magnification, × 200). f and g SKOV3 and HO-8910 PM cell transfectants were plated on FN and stained for SNCG, phalloidin, and nuclei; cells were also stained for Vimentin, MMP9, and F-actin. Individual or merged images were visualized by a laser scanning confocal microscope (original magnification, × 1000). ▲, P < 0.05. *, P < 0.001. Ctrl: control; Ove: overexpression; Scr: scramble; sh1, sh2, sh3: small hairpin RNAs 1-3.
Recent studies show that overexpression of SNCG is tightly involved in the multiple complex mechanisms mediating tumor progression [10,26,31,32]. Zhang et al. claimed SNCG could enhance tumor growth through the AKT pathway in cervical cancer [19]. Liang et al. reported that SNCG-maintained pAKT levels might increase cancer progression and resistance to Hsp90 disruptors in breast cancer [20]. However, there is a lack of research on the mechanism of SNCG in the progression of ovarian cancer, especially in HGSOC. In our current study, we found SNCG could increase the phosphorylation of AKT and its downstream kinases, p70S6K and mTOR. It is noteworthy that the PI3K/AKT pathway is an important signaling pathway that plays a critical role in multiple essential biological processes, such as cancer cell proliferation and invasion [33][34][35][36].
Fig. 4 (caption, continued) c Western blot analysis of the levels of Akt, p-Akt (Ser473), p70S6 kinase, p-p70S6 kinase (Thr389), mTOR, and p-mTOR (Ser2448) in SKOV3 and HO-8910 PM cells transfected with SNCG-shRNA or Scr, and OVCAR3 cells transfected with SNCG-Ove or Ctrl. d Expression levels of Akt, p-Akt (Ser473), p70S6 kinase, p-p70S6 kinase (Thr389), mTOR, and p-mTOR (Ser2448) in cells transfected with SNCG shRNA, Scr, SNCG Ove, Ctrl, IGF-1, and DMSO, determined by Western blot. e Expression levels of the same proteins in cells transfected with SNCG shRNA, Scr, SNCG Ove, Ctrl, LY294002, and DMSO, determined by Western blot. f Wound healing assay (upper, original magnification × 40) and cell colony formation (lower) of cells transfected with SNCG shRNA, Scr, SNCG Ove, Ctrl, IGF-1, LY294002, and DMSO confirmed the effect of SNCG on the PI3K/AKT signaling pathway. ▲, P < 0.05. *, P < 0.001. IGF-1: insulin-like growth factor 1; sh-SNCG: small hairpin RNA against SNCG.
As expected, our data further indicated that SNCG overexpression could augment the activation of the PI3K/AKT pathway, while knockdown of SNCG could suppress this pathway. Using the PI3K inhibitor and agonist, we proved that the PI3K/AKT pathway was involved in SNCG-mediated promotion of cell proliferation and metastasis. Overall, the results suggest that SNCG induces ovarian cancer progression through regulating the PI3K/AKT signaling pathway (Fig. 5).
Conclusions
The present study uncovered that a high level of SNCG is significantly associated with high-risk tumor features and is an independent prognostic factor for HGSOC patients. Our systematic analysis showed the in vivo and in vitro effects of SNCG on ovarian cancer. Mechanistically, up-regulated SNCG mediated cell proliferation and metastatic ability via activating the PI3K/AKT pathway in HGSOC. SNCG may serve as a potential therapeutic target and prognostic marker for HGSOC in the future.
Additional file 1: Table S1. The primers used in this study.
Additional file 2: Table S2. The cDNA sequence of SNCG.
|
2023-01-19T21:33:24.429Z
|
2020-05-07T00:00:00.000
|
{
"year": 2020,
"sha1": "dce55c518169fe9c381f5ed42ef90689f2cd5d51",
"oa_license": "CCBY",
"oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-020-01589-9",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "dce55c518169fe9c381f5ed42ef90689f2cd5d51",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
23971362
|
pes2o/s2orc
|
v3-fos-license
|
Expression Patterns and Post-translational Modifications Associated with Mammalian Histone H3 Variants*
Covalent histone modifications and the incorporation of histone variants bring about changes in chromatin structure that in turn alter gene expression. Interest in non-allelic histone variants has been renewed, in part because of recent work on H3 (and other) histone variants. However, only in mammals do three non-centromeric H3 variants (H3.1, H3.2, and H3.3) exist. Here, we show that mammalian cell lines can be separated into two different groups based on their expression of H3.1, H3.2, and H3.3 at both mRNA and protein levels. Additionally, the ratio of these variants changes slightly during neuronal differentiation of murine ES cells. This difference in H3 variant expression between cell lines could not be explained by changes in growth rate, cell cycle stages, or chromosomal ploidy, but rather suggests other possibilities, such as changes in H3 variant incorporation during differentiation and tissue- or species-specific H3 variant expression. Moreover, quantitative mass spectrometry analysis of human H3.1, H3.2, and H3.3 showed modification differences between these three H3 variants, suggesting that they may have different biological functions. Specifically, H3.3 contains marks associated with transcriptionally active chromatin, whereas H3.2, in contrast, contains mostly silencing modifications that have been associated with facultative heterochromatin. Interestingly, H3.1 is enriched in both active and repressive marks, although the latter marks are different from those observed in H3.2. Although the biological significance as to why mammalian cells differentially employ three highly similar H3 variants remains unclear, our results underscore potential functional differences between them and reinforce the general view that H3.1 and H3.2 in mammalian cells should not be treated as equivalent proteins.
The fundamental repeating unit of chromatin is the nucleosome core particle, which consists of DNA in close association with an octameric unit of core histones (H2A, H2B, H3, and H4). However, in some instances, specialized histone variants are found in place of the canonical histones, enabling the encoding of epigenetic information through defined or "specialized" nucleosome arrays (reviewed in Ref. 1).
Histones are subject to a diverse array of covalent modifications that occur mostly at the N-and C-terminal tail domains. The histone "code" hypothesis (2,3) has been put forward to explain the seemingly complex nature of the reported patterns of histone modifications. Formally, this hypothesis states that a specific histone modification, or combination of modifications, can affect distinct downstream cellular events by altering the structure of chromatin and/or generating a binding platform for effector proteins, which specifically recognize the modification(s) and initiate events that lead to gene transcription or silencing. Expanding the scope of this code, a large number of variant histones has been identified, including some that are unique to vertebrates and some that are highly conserved among all eukaryotes (reviewed in Ref. 4). It has been shown that replacement of the replication-dependent (RD) 4 histone H3 (formally H3.2, see supplemental Fig. 1A, top) with its replication-independent (RI) variant H3.3 in Drosophila cells occurs at transcriptionally active loci (5,6). Furthermore, characterization of Drosophila and Arabidopsis histones by mass spectrometry (MS) revealed enrichment of modifications associated with transcriptional activity, such as methylation of lysine 4 (Lys 4 ) and Lys 79 and acetylation of Lys 14 , Lys 18 , and Lys 23 , in H3.3 compared with H3.2 (7,8). These results suggest that, at least in plant and Drosophila cells, H3.2 and its variant H3.3 play different roles in remodeling chromatin, in part by altering covalent histone modification patterns associated with transcriptional silencing and activation.
Unlike Drosophila, which contains only two different histone H3 molecules, mammalian cells contain three non-centromeric H3 variants: H3.1, H3.2, and H3.3, which differ only in a few amino acids (see supplemental Fig. 1, A). The function of these three mammalian H3s, especially H3.1 and H3.2, is poorly understood. In this report, we investigate the expression patterns and post-translational modifications associated with these three mammalian H3 variants. Analyses of multiple mammalian cell lines revealed that they can be divided into two groups based upon the relative amounts of the individual H3 variants in chromatin. Although the functional significance of this grouping remains unclear, cellular differentiation appears to alter these ratios in at least one ES cell line in a modest, but reproducible, fashion. We also show that these cell line-specific differences in H3 variant expression do not originate from changes in growth rate, cell cycle stages, or chromosomal ploidy. Possible mechanisms are discussed. Additionally, the different human H3 variants were subjected to quantitative tandem MS analyses. As expected from studies in Drosophila (7), transcriptionally active marks are associated with H3.3; those often associated with gene silencing, e.g. Lys 27 di-and trimethylation, are found on H3.2. Surprisingly, H3.1 is enriched both with modifications that are largely associated with gene silencing, e.g. Lys 9 dimethylation, as well as those linked to gene activation, e.g. Lys 14 acetylation. These data reinforce the general view that alterations in the covalent modification patterns associated with histone variants provide additional regulatory options for epigenomic "indexing" of biological processes, many of which remain unclear. 
Our data also lend support to a poorly appreciated notion: although the H3.1 and H3.2 variants are highly similar at the level of primary sequence, differing at only one amino acid position (Cys 96 in H3.1 versus Ser 96 in H3.2; see supplemental Fig. 1, A), they are not equivalent at the level of post-translational modifications. Thus, our studies underscore the potential need for caution when interpreting H3-related studies in mammalian models.
MATERIALS AND METHODS
Cell Lines and Culture-All mammalian cell lines, with the exception of mouse LF2 cells, were grown in Iscove's modified Dulbecco's medium supplemented with 10% fetal calf serum and penicillin/streptomycin at 37°C and 5% CO 2 . Cell lines used in this study are described under supplemental material and methods.
Cell Synchronization-HeLa cells were grown to 70% confluency and treated with 3 mM thymidine for 15 h. Thymidine containing medium was removed, cells washed once with fresh thymidine-free medium and grown for 9 h in regular medium. This process was repeated and then, after release from the double thymidine block, cells were harvested from individual plates every 2 h by trypsinization, washed with PBS, and split into three samples to prepare for cell cycle analysis by FACS and isolation of RNA and histones (see below).
Preparation of Histones-Nuclei and histones were isolated as described earlier (9). Cell nuclei were isolated by hypotonic lysis in buffer containing 10 mM Tris-HCl, pH 8.0, 1 mM KCl, 1.5 mM MgCl 2 , 1 mM dithiothreitol, 0.4 mM phenylmethylsulfonyl fluoride, protease and phosphatase inhibitors. Pelleted nuclei were acid-extracted using 0.4 N sulfuric acid, precipitated with trichloroacetic acid and resuspended in water.
Reverse Phase HPLC (RP-HPLC)-Separation of mammalian core histones by RP-HPLC was done as described (9). Briefly, acid-extracted histones were separated by RP-HPLC on a C8 column (220 by 4.6 mm Aquapore RP-300, PerkinElmer Life Sciences) using a linear ascending gradient of 35-60% solvent B (solvent A: 5% acetonitrile, 0.1% trifluoroacetic acid; solvent B: 90% acetonitrile) over 75 min at 1.0 ml/min on a Beckman Coulter System Gold 126 Pump Module and 166/168 Detector. Under these conditions histone H3 split into two peaks. The H3-containing fractions were dried under vacuum and stored at −80°C. RP-HPLC fractions were resuspended in water, analyzed by SDS-PAGE, and stained with Coomassie Brilliant Blue as a control. The identified fractions were then subjected to MS analysis.
Two-dimensional Triton-Acid-Urea (TAU) Gels-Total histones were dried under vacuum and resuspended in loading buffer (6 M urea, 0.02% (w/v) pyronin Y, 5% (v/v) acetic acid, 12.5 mg/ml protamine sulfate). Samples were separated on TAU mini-gels (15% PAGE, 6 M urea, 5% acetic acid, 0.37% Triton X-100; 300 V in 5% acetic acid for 1.5 h). Lanes containing the samples were cut out, equilibrated in 0.125 M Tris, pH 6.6, and the TAU gel slice was assembled on top of a 15% SDS-PAGE mini-gel. After the run, the gel was stained with Coomassie Blue and destained with 5% methanol, 7.5% acetic acid overnight. The gels were scanned and quantified using Image Gauge software (Science Lab), with subtraction of background staining.
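The spot quantification described above (background subtraction followed by per-variant proportions) can be sketched in a few lines; the intensity values below are hypothetical, not measurements from this study, and the function name is ours:

```python
def spot_fractions(spot_intensities, background=0.0):
    """Convert Coomassie spot intensities into per-variant proportions
    after subtracting a uniform background, as with Image Gauge."""
    corrected = {v: max(i - background, 0.0) for v, i in spot_intensities.items()}
    total = sum(corrected.values())
    return {v: c / total for v, c in corrected.items()}

# hypothetical densitometry readings (arbitrary units)
print(spot_fractions({"H3.1": 140.0, "H3.2": 90.0, "H3.3": 90.0}, background=40.0))
# → {'H3.1': 0.5, 'H3.2': 0.25, 'H3.3': 0.25}
```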
Growth Rate Analysis-1 × 10 5 cells (HeLa and HEK293) were grown in 6-well plates, and every 24 h samples were collected and counted, with exclusion of dead cells (staining with Trypan Blue). Half of the cells were discarded to avoid contact inhibition as the cells became too confluent, fresh medium was added, and the cells were allowed to grow for another 24 h before the next sample was collected. With this method, we could measure the doubling time of HeLa and HEK293 cells while still maintaining them at normal confluency (i.e. the same confluency at which we grow them for all other experiments). Cell numbers were used to plot the growth of both cell lines over time. HeLa and HEK293 cells were seeded in duplicate, and this experiment was performed twice.
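The doubling-time estimate implied by this protocol (half the cells discarded after each count) can be sketched as follows; the counts are hypothetical and the function name is ours:

```python
import math

def doubling_time_hours(n_counted, n_counted_next, dt_h=24.0, keep_fraction=0.5):
    """Estimate doubling time when only `keep_fraction` of counted cells
    is retained after each count (the rest discarded to hold confluency)."""
    growth = n_counted_next / (n_counted * keep_fraction)
    return dt_h * math.log(2) / math.log(growth)

# hypothetical counts: 2.0e5 cells today, 2.0e5 cells 24 h later
# (effective growth factor 2 over the retained half → 24 h doubling time)
print(doubling_time_hours(2.0e5, 2.0e5))  # → 24.0
```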
Cell Cycle Analysis by FACS-1 × 10 6 cells were collected, washed with PBS, and fixed overnight at −20°C in 70% ethanol, diluted in PBS. The next day, cells were washed with PBS and incubated for 30 min at 37°C in PBS containing RNase A (10 µg/ml), followed by the addition of propidium iodide (10 µg/ml) and another incubation for 30 min at 37°C. Stained cells were analyzed on a FACSort instrument (BD Immunocytometry Systems) with the exclusion of doublets. Analysis of the results was performed with CellQuest software (BD Bioscience).
Sample Preparation of Histone H3 Variants for MS-Purified histone H3 protein from pooled RP-HPLC fractions was derivatized by treatment with propionyl anhydride reagent (8). The reagent was created using 75 µl of MeOH and 25 µl of propionic anhydride (Aldrich, Milwaukee, WI). Equal volumes of reagent and each H3 variant were mixed and allowed to react at 51°C for 15 min. Propionylated histone H3s were then digested with trypsin (Promega) at a substrate/enzyme ratio of 20:1 for 5 h at 37°C after dilution of the sample with 100 mM ammonium bicarbonate buffer solution (pH 8). The reaction was quenched by the addition of concentrated acetic acid and freezing. A second round of propionylation was then performed to propionylate the newly created peptide N termini.
Mass Spectrometry-Propionylated histone digest mixtures were loaded onto capillary precolumns (360 µm outer diameter × 75 µm inner diameter, Polymicro Technologies, Phoenix, AZ) packed with irregular C18 resin (5-20 µm, YMC Inc., Wilmington, NC) and washed with 0.1% acetic acid for 10 min. Precolumns were connected with Teflon tubing to analytical columns (360 µm outer diameter × 50 µm inner diameter, Polymicro Technologies) packed with regular C18 resin (5 µm, YMC Inc.) structured with an integrated electrospray tip as previously described (10). Samples were analyzed by nanoflow HPLC-electrospray ionization on a linear quadrupole ion trap-Fourier transform ion cyclotron resonance (LTQ-FT-ICR) mass spectrometer (Thermo Electron, San Jose, CA). The gradient used on an Agilent 1100 series HPLC solvent delivery system (Palo Alto, CA) consisted of 0-40% B in 60 min and 45-100% B in 15 min (A: 0.1% acetic acid; B: 70% acetonitrile in 0.1% acetic acid), or other similar gradients. The LTQ-FT mass spectrometer was operated in the data-dependent mode with the 10 most abundant ions being isolated and fragmented in the linear ion trap. All MS/MS spectra were manually interpreted.
Stable Isotope Labeling for Relative Quantitative Analysis-For a differential expression comparison of histone post-translational modifications from the three H3 variants, stable isotope labeling based on conversion of peptide carboxylic groups to their corresponding methyl esters was used (11). First, all samples were dried by lyophilization. Aliquots of solutions from propionylated histone peptides from H3.1, H3.2, or H3.3 were converted to d 0 -methyl esters by reconstituting the lyophilized sample in 100 µl of 2 M d 0 -methanol/HCl, or converted to d 3 -methyl esters by reconstituting the lyophilized sample in 100 µl of 2 M d 4 -methanol/HCl. Reaction mixtures were allowed to stand for 1 h at room temperature. Methyl ester solvent was removed from each sample by lyophilization, and the procedures were repeated using a second 100-µl aliquot of methyl ester reagents. Solvent was then removed again by lyophilization, and samples were dissolved in 20 µl of 0.1% acetic acid. Aliquots of each solution were then equally mixed for comparative analysis by MS.
Quantitative PCR-Total RNA isolation was performed using TRIzol Reagent (Invitrogen). Single-stranded cDNA was generated with the Superscript First-Strand Synthesis kit (Invitrogen). Quantitative PCR was performed with SYBR green dye according to the manufacturer's instructions (Stratagene). HeLa cDNA was used to generate a standard curve from which the amount of cDNA amplified in each sample was determined as indicated. mRNA levels were normalized to H3.2 mRNA expression. All oligos were synthesized by Sigma, and the sequences of the primer pairs for quantitative PCR used in this study are listed under supplemental material and methods.
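The normalization described above (standard-curve-derived cDNA amounts expressed relative to a reference gene, here H3.2) reduces to a simple ratio; the quantities below are hypothetical and the function name is ours:

```python
def normalize(quantities, reference="H3.2"):
    """Express standard-curve-derived cDNA amounts relative to a
    reference gene, so the reference reads 1.0."""
    ref = quantities[reference]
    return {gene: amount / ref for gene, amount in quantities.items()}

# hypothetical standard-curve amounts (arbitrary units)
print(normalize({"H3.1A": 0.8, "H3.2": 2.0, "H3.3A": 3.0}))
# → {'H3.1A': 0.4, 'H3.2': 1.0, 'H3.3A': 1.5}
```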
RESULTS

Because there are no available antibodies that distinguish the three mammalian H3 variants, the investigation of endogenous expression of these variants is restricted largely to chromatographic and electrophoretic separations (7,9,12). Thus, to test whether the different mammalian H3 variants are present in similar abundance in different cell types, we first turned to a chromatographic method. Acid-extracted total histones from different mammalian cell lines were resolved by RP-HPLC; two distinctive H3 peaks were typically observed (supplemental Fig. 1, A). Based on the H3 RP-HPLC profiles, we were able to separate mammalian cell lines into two groups (A and B) based on their peak height (absorbance) and peak area differences. Interestingly, members of group A show an absorbance that is higher for peak 2 than peak 1; a reverse relationship is evident for group B (i.e. an absorbance that is higher for peak 1 than peak 2; supplemental Fig. 1, C).
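The grouping rule described above reduces to a comparison of the two H3 peak areas; the following sketch uses hypothetical areas, not measured values, and the function name is ours:

```python
def assign_group(peak1_area, peak2_area):
    """Group A cell lines show a larger peak 2 (mostly H3.1);
    group B show a larger peak 1 (H3.2 plus an H3.3 shoulder)."""
    return "A" if peak2_area > peak1_area else "B"

# hypothetical integrated peak areas (arbitrary units)
print(assign_group(peak1_area=1.0, peak2_area=2.5))  # H3.1-dominant profile → "A"
print(assign_group(peak1_area=3.0, peak2_area=1.0))  # H3.3-enriched profile → "B"
```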
To gain further insight into the protein compositions of the two RP-HPLC peaks, MS was employed (supplemental Fig. 2). Our MS analyses demonstrate that the first fractions of peak 1 contain H3.2, and the later "shoulder" fractions contain H3.3; peak 2, in contrast, contains almost entirely H3.1 variant (supplemental Fig. 2 and supplemental Tables 1-3). To confirm the observed cell type-specific H3 differences and to additionally identify the expression levels of each H3 variant in the different cell lines examined, we separated acid-extracted histones on two-dimensional TAU gels (Fig. 1A) and visualized the histones by staining with Coomassie Blue. Three of the histone spots also stained with H3 specific antibodies (data not shown). To determine the identity of the variant in each of the three H3 spots, each protein sample was digested in gel and the resulting peptides were then characterized by tandem MS as described above (data not shown and Fig. 1A).
Two-dimensional TAU gels were then employed to examine the distribution of H3.1, H3.2, and H3.3 in six different mammalian cell lines, three from each of the group A and B categories (Fig. 1B). We find that cell lines from group A are enriched in H3.1 and those in group B are enriched in H3.3. Next, we quantified the proportion of protein in each of the different H3 spots using Image Gauge software (Fig. 1C). Interestingly, while the proportion of H3.2 (dark gray bars) did not change dramatically between the two different groups, cell lines from group A were enriched in H3.1 (light gray bars). In contrast, cell lines from group B contained less or equal proportions of H3.1 compared with H3.3 protein (black bars). Furthermore, as seen before by RP-HPLC analysis (supplemental Fig. 1), several modest changes in H3 variant composition were observed in murine ES cells (LF2) that were treated with retinoic acid (RA) to induce neuronal differentiation (Fig. 1C, see LF2 columns). First, an immediate increase in H3.3 and a corresponding drop in H3.1 occurred during the first 6 days of treatment, whereas H3.2 levels remained largely the same. However, by day 10 post-RA treatment, when the majority of the ES cells have taken on a neuronal phenotype, the levels of H3.2 increased marginally and the levels of H3.3 dropped slightly, whereas the levels of H3.1 remained about the same. These results confirm and extend the observations made by RP-HPLC (supplemental Fig. 1, C).
Because all cell lines from group A are derived from cancer cells, we wondered whether the high level of H3.1 expression arises from differences in chromosomal ploidy. Therefore, we used RP-HPLC to separate histones from mouse embryonic fibroblast cells where the chromosomal status was either diploid (P-CUT MEF) or tetraploid (10T1/2). Results of this experiment are shown in Fig. 1D. Both of these cell lines had equal peak area ratios in RP-HPLC analysis and were assigned to group B because the observed ratio of peak areas (peak 1/peak 2) was 3. We then separated the H3 variants by two-dimensional TAU gels and found that the levels of H3.1, H3.2, and H3.3 were very similar between diploid (P-CUT MEF) and tetraploid (10T1/2) cell lines (Fig. 1E). H3.3 was the most highly expressed variant in both cell lines, followed by H3.1 and then H3.2. These results suggest that ploidy is not responsible for the different expression levels observed for H3 variant proteins. However, because differences in the peak 1/peak 2 ratio were observed in human (group A) versus mouse (group B) species ( Fig. 1 and supplemental Fig. 1), we cannot rule out the formal possibility that variable copy numbers of the H3.1 and H3.2 genes might contribute, at least in part, to some of the differences in H3 variant expression profiles. Another explanation for the above observations is that cells from embryonic origin contain high levels of H3.3, and cells derived from adult tissue have more H3.1.
Because H3.1 and H3.2 are expressed only in S-phase whereas H3.3 is expressed and incorporated into chromatin independent of the cell cycle (5), we next wondered whether the observed differences in H3 variant expression arise from differences in growth rates and/or time spent in S-phase. Therefore, we tested a representative cell line from group A and B (HeLa and HEK293, respectively) in growth assays and cell cycle analyses. While somewhat variable, these cells showed a similar growth curve (Fig. 2A) and similar numbers of cells in S-phase by FACS analysis (Fig. 2B). It is also important to note that the growth rates of cell lines within a single group were very different, e.g. within group A, HT-29 grew extremely slowly, whereas Raji cells grew extremely fast (data not shown). Therefore, we conclude that the observed differences in the proportions of the three H3 variant proteins between groups A and B are not likely explained by the RD expression of H3.1 and H3.2 alone.
We wondered whether differences in the proportion of H3.1, H3.2, and H3.3 between group A and B cells originated from differences in steady state levels of mRNA expression. To address this possibility, we performed quantitative analyses of mRNA expression levels of five human cell lines used in this study. We could not include other cell lines from group B in this study, because these are of mouse origin and differ in their nucleotide sequence from human H3 variant genes. Fig. 2, C and D show the mRNA expression levels of one H3.2, nine different H3.1, and both H3.3A and H3.3B genes normalized to 18 S rRNA expression. Because we do not know if the 18 S rRNA expression level is the same in all human cell lines examined, we also normalized our data to H3.2 mRNA expression, because the proportion of H3.2 protein did not change as drastically between groups A and B compared with H3.1 and H3.3 protein (supplemental Fig. 3). Because it is still possible that different cell lines express different amounts of 18 S rRNA, these results should be viewed with caution. Nonetheless, we observed a similar pattern in H3 variant gene expression when normalized to 18 S rRNA expression (Fig. 2, C and D) as when normalized to H3.2 gene expression (supplemental Fig. 3). Interestingly, H3.1 genes of the five different human cell lines were expressed at relatively low levels and did not exhibit dramatic differences in their expression, with the exception of CEM cells, which seem to express H3.1C. On the other hand, HEK293 cells, which belong to group B, showed a reproducible increase in the expression level of H3.3A (almost 2-fold compared with other human cell lines from group A).
The lack of significant differences in growth rates from HEK293 and HeLa cells together with the mRNA expression data suggest that the differences in H3.1, H3.2, and H3.3 protein expression that we observed by both RP-HPLC and two-dimensional TAU gel analyses might originate at the transcriptional level and are independent of growth rates.
Because we observed slight differences in the proportions of cells in G 1 or G 2 /M between HeLa and HEK293 cells, we wondered if cell cycle phases could account for the observed differences in H3.1, H3.2, and H3.3 proportions between HeLa and HEK293 cell lines. We therefore performed a detailed analysis of H3 variant expression on both mRNA and protein levels during G 1 , S, and G 2 /M phases in HeLa cells. The results from one of two independently conducted, highly reproducible experiments are shown in Fig. 3. HeLa cells were synchronized in G 1 by a double thymidine block and released from this block to continue through different cell cycle phases. Every 2 h, cells were harvested and samples prepared for cell cycle analysis by FACS, mRNA isolation and cDNA generation, or acid-extraction of histones. Fig. 3A shows the cell cycle profile of these cells analyzed by FACS, and quantitative analysis of the amount of cells in each cell phase is depicted in Fig. 3B. The majority of asynchronously growing cells was found to be in G 1 (~75%), but cells in S and G 2 /M phase were also observed. Treatment of HeLa cells with nocodazole led to an arrest in G 2 /M (~50%). ~75% of cells were found to be in G 1 2 h after the release from the double thymidine block. 6-8 h after the release, the number of cells in G 1 dropped and more cells in S-phase were found (~55%). 10-12 h after the thymidine release, cells were found to move into G 2 /M (~20 to 40%). These results show that we were able to enrich for cells in specific cell cycle phases.
We also isolated histones by acid-extraction from cells at different time points and tested for mitosis-specific histone modifications by immunoblots as an indication of a successful enrichment of cells in G 2 /M and synchronization by thymidine block (Fig. 3C). Unfortunately, we could not test for the enrichment of cells in G 1 or S-phase, because histone modifications specific for these cell cycle stages have not been identified. We used an antibody against the well characterized mitosis-specific H3 Ser 28 phosphorylation mark and found that histones from cells arrested with nocodazole stained strongly for this mark and also that histones from the 10 and 12 h time points were positive for H3 Ser 28 phosphorylation (Fig. 3C, top). The blot was stripped and stained with an antibody against the C-terminal tail of H3 as a loading control (Fig. 3C, bottom). These results confirm the data we obtained by FACS analysis and show that we successfully enriched for cells in different cell cycle phases.
Next we asked if the expression of H3 variant genes changes during cell cycle phases. To test for these possibilities, we isolated RNA from these cells, generated cDNA and tested for H3.3A, H3.3B (Fig. 3D, left), and H3.2, H3.1H, and H3.1L (Fig. 3D, right) gene expression by quantitative PCR. Surprisingly, we found that both the RD H3 variant genes encoding H3.1 and H3.2 as well as the RI H3.3 genes (A and B) increased in their expression 8 h after the release of thymidine. These data suggest that during S-phase the expression of all H3 variant genes increases.
Next, we wondered if the proportions of H3 variant proteins change in different cell cycle phases, particularly in S or G 2 /M. Therefore, we isolated histones from asynchronously growing cells, nocodazole-arrested cells, and cells harvested at two (G 1 ) and eight (S) hours after release from the thymidine block and then separated and visualized them by two-dimensional TAU gels with Coomassie Blue (Fig. 3E). As discussed above, H3 variant protein levels were quantified, and the results of two independent experiments are shown in Fig. 3F. We did not observe a significant change in the proportions of H3 variants that would explain the observed differences between group A and B cells. These data suggest that the proportions of H3 variants remain constant regardless of cell cycle phases, and that the observed H3 variant proportion differences between group A and B cell lines are cell intrinsic phenomena.
The above data suggest that different cell lines contain different steady-state levels of H3.1, H3.2, and H3.3 proteins, although the functional significance for these differences remains unclear. To determine whether these variants might have different biological functions revealed by distinct post-translational modification "signatures," tandem MS was employed to identify covalent modifications present on each of the three variants. The use of MS for both the qualitative and quantitative analysis of post-translational modifications also circumvents problems associated with the use of site-specific antibodies, such as specificity, cross-reactivity, and epitope occlusion through interference by closely neighboring modifications (13-15).
Treatment of histone H3 with propionic anhydride reagent converts amino groups on the N terminus and internal lysine residues (endogenously unmodified and mono-methylated residues only) to propionyl amides. The consequence of this procedure is removal of charge from lysine residues and increased hydrophobicity of histone peptides, thus facilitating their analysis by MS. Additionally, propionylation of histone proteins blocks trypsin from cleaving residues on the C-terminal side of lysine (di-and trimethylated lysine residues are not cleaved by trypsin as well). Therefore, upon digesting propionylated histones with trypsin, cleavage only occurs C-terminal to arginine, generating a fairly uniform set of peptides from highly modified H3 protein and allowing for a more straightforward monitoring of post-translational modifications (16).
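The cleavage logic described above (propionylation blocks Lys, so trypsin cuts only C-terminal to Arg) can be sketched as an in-silico digestion. The tail sequence below is the canonical human H3 N-terminus (residues 1-40); the function name is ours:

```python
import re

def arg_c_like_digest(seq):
    """Propionylation blocks lysines, so trypsin behaves like Arg-C:
    cleavage occurs only C-terminal to arginine."""
    return [p for p in re.split(r"(?<=R)", seq) if p]

# canonical human H3 N-terminal tail, residues 1-40
h3_tail = "ARTKQTARKSTGGKAPRKQLATKAARKSAPATGGVKKPHR"
print(arg_c_like_digest(h3_tail))
# yields AR, TKQTAR, KSTGGKAPR, KQLATKAAR, KSAPATGGVKKPHR —
# the uniform peptide set monitored in this kind of analysis
```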
Histones from HEK293 cells were isolated from nuclei by acid extraction, so the majority of the histones purified in this study originated from nuclear (presumably chromatin-incorporated) histones. Individual histone H3 variants from HEK293 cells were derivatized with propionic anhydride, digested with trypsin (cleavage C-terminal to Arg residues), and the N termini of the newly formed peptides were also derivatized with propionic anhydride. The resulting mixture was then analyzed by LC-MS/MS on a linear ion trap/Fourier transform mass spectrometer (11). Stable isotopic labeling was employed to estimate the relative abundances of the post-translational modifications on each variant. For example, to compare modifications on H3.3 and H3.1, peptides from the former were converted to d 0 -methyl esters and those from the latter were converted to d 3 -methyl esters. Equal amounts of the two samples were then mixed and analyzed on the above mass spectrometer (11). As a result of the above derivatization, ions corresponding to peptides from the two variants that contain the same post-translational modification appear in the mass spectrum as doublets. These doublet peaks are separated by multiples of 3 mass units per carboxylic acid group (C terminus plus Asp and Glu residues) for singly charged ions and 1.5 mass units for doubly charged ions. As an example, a comparative analysis of peptides derived from histones H3.3 and H3.1 is shown in Fig. 4. The figure inset shows a magnification of the mass range from m/z 737-747. Signals at m/z 738.4039 and 744.4039 (6-mass unit separation) correspond to [M+2H] 2+ ions for the same isotopically labeled peptide (containing 4 carboxylic acid groups) from H3.3 and H3.1, respectively. The MS/MS spectrum recorded on the ion at m/z 738.4093 is shown in Fig.
4B and defines the sequence of the peptide to be EIAQDFK Me2 TDLR (residues 73-83 of both H3.3 and H3.1). This peptide is chemically modified by the addition of a propionyl amide group on the N terminus and four methyl ester groups on the carboxylic acid groups. A comparison of the areas under the signals for the pairs of [M+2H] 2+ ions indicates that the modified peptide is about 4-fold more abundant in H3.3 than in H3.1. Dimethylated Lys 79 has been observed on hyperacetylated histone H3 (17) and is known to be associated with transcriptional activation (18). Table 1 provides a compilation of post-translational modifications detected and enriched on the H3 variants isolated from two samples independently purified from HEK293 cells. With the exception of acetylation of Lys 27 on H3.1, all modifications were detected on each of these three variants. Marks found to be enriched by a factor of at least 2-7 (+++ in Table 1) on H3.3 in both of the above samples include: acetylation of Lys 9 , Lys 14 , Lys 27 , and Lys 18 together with Lys 23 , mono- and dimethylation of Lys 36 , and dimethylation of Lys 79 . These modifications have been described in Drosophila and partly in Arabidopsis (7,8), and are consistent with the general view that the H3.3 variant is involved in the establishment of "active" chromatin. In contrast, marks greatly enriched (+++ in Table 1) in both samples of H3.2 were Lys 27 di- and trimethylation. Both states of this methylation mark are also found on H3.2 in Arabidopsis (8) and are often implicated in "silent" chromatin (19,20), specifically in the formation and maintenance of facultative heterochromatin. Thus, H3.3 and H3.2 appear to carry covalent modification "signatures" that largely denote "active" versus "inactive" chromatin, respectively.
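The doublet-spacing arithmetic in this worked example can be checked with a short sketch. The nominal 3-mass-unit shift per esterified carboxyl group follows the text above (the exact d3/d0 methyl ester difference is ~3.019 Da); the function names are ours:

```python
def ester_sites(peptide):
    """Carboxyl groups that become methyl esters: the C-terminus
    plus every Asp (D) and Glu (E) side chain."""
    return 1 + peptide.count("D") + peptide.count("E")

def doublet_spacing(peptide, charge, da_per_site=3.0):
    """Nominal m/z gap between the d0- and d3-methyl forms of one peptide."""
    return ester_sites(peptide) * da_per_site / charge

# EIAQDFKTDLR (H3 residues 73-83): 2 Asp + 1 Glu + C-terminus = 4 sites;
# for the [M+2H]2+ ion this predicts the 6-unit separation seen in Fig. 4
print(doublet_spacing("EIAQDFKTDLR", charge=2))  # → 6.0
```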
In contrast, dimethylation of Lys 9 , often considered an "off" mark (21), and acetylation of Lys 14 , often considered an "on" mark (22), are both significantly enriched (+++ in Table 1) on H3.1 in both independent experiments. These marks were determined to be enriched on H3.1 on separate peptides, because quantitative analysis of peptides containing both marks simultaneously revealed that Lys 9 dimethylation together with Lys 14 acetylation is found on only 9% of all H3.1, 6% of all H3.2, and 9% of all H3.3 peptides. Thus, a clear-cut difference between "on" versus "off" covalent modification signatures is less clear with the H3.1 variant. Several post-translational modifications were found enriched in only one of the two samples (++ in Table 1). Although of variable nature, these, too, follow the same observed trend: enrichment of active marks on H3.3, silent marks on H3.2, and a combination of both on H3.1. Also enriched on H3.1 is a previously unidentified modification, monomethylation of Lys 64 .
Relative enrichments of the observed modifications are displayed in pair-wise fashion in Fig. 5A. These collective results show that the three mammalian H3 variants are enriched in different post-translational modifications and in different patterns of these modifications, suggesting that they have different biological functions. From the relative abundance data, we conclude that H3.3 is involved in gene activation and that H3.2 is used primarily in euchromatic gene silencing. The function of H3.1 is yet to be defined, since our analyses show that it is enriched in marks that are largely associated with both gene silencing and gene activation. The combination and non-overlapping nature of these modification patterns clearly distinguishes H3.1 from H3.2 and H3.3 (see Fig. 5B). These data also underscore the need not to combine H3.1 and H3.2 together as H3 in mammalian models.
DISCUSSION
Previous studies showed that epigenetic indexing mechanisms help determine whether a gene is maintained in a silent or active state. Histone modifications clearly play a role in this process, as does the incorporation of specialized histone variants into nucleosomes, the latter being particularly important for chromatin remodeling. One histone variant in particular, H3.3, has been associated with transcriptional activation. In Drosophila and mammalian cells, H3.3 is closely associated with transcriptionally active foci (5,6,23), and found to be enriched in active marks (7). Additionally, recent reports suggest that transcriptional activation triggers the deposition and removal of H3.3 from chromatin in Drosophila cells (24).
Because several of the above studies used Drosophila cells as their experimental model, less is known about the potentially different functions of mammalian H3 variants. Moreover, mammalian cells are unique in that they contain, in addition to RI H3.3, also H3.1 and H3.2, both of which assemble by RD mechanisms. This special feature of mammalian cells has been largely ignored, in part because H3.1 and H3.2 differ by only one amino acid in the histone core region (see supplemental Fig. 1A). Nevertheless, the post-translational modification signatures differ significantly between these highly similar proteins, suggesting that H3.1 and H3.2 are likely to differ in function. Our data hint at the intriguing possibility that the unique mammalian H3 variant, H3.1, may play a specialized role in chromatin biology that may correlate with differentiation or cell origin determination. However, this possibility remains to be shown, in part because of the limited reagents that distinguish H3.1 from H3.2.

TABLE 1 Post-translational modifications on H3 variants

Data obtained from comparative analysis experiments conducted with stable isotope labeling and a tandem mass spectrometer (LTQ-FT) on two H3 samples isolated from HEK293 cells. Peptide abundances (ion currents) for individual H3 variants that differ by a factor of less than 2, by more than 2-fold in one experiment, or by 2-7-fold in both experiments are indicated by +, ++, and +++ symbols, respectively. Dimethyl- and trimethyl-TK4QTAR elute in the void volume and were not detected in the above experiments. Monomethyl-K18QLATKAAR and K64LPFQR have not been identified previously.
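The binning rule in the Table 1 legend, which maps fold-change ratios from the two labeling experiments onto the +, ++, and +++ symbols, can be sketched as a small function. The function name and tie-handling are illustrative assumptions, not taken from the paper's methods:

```python
def enrichment_symbol(ratio_exp1: float, ratio_exp2: float) -> str:
    """Map fold-change ratios from the two labeling experiments to the
    Table 1 symbols: <2-fold in both -> '+', >2-fold in only one -> '++',
    >2-fold in both -> '+++' (assumed reading of the legend)."""
    over = sum(r > 2.0 for r in (ratio_exp1, ratio_exp2))
    return {0: "+", 1: "++", 2: "+++"}[over]
```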
Post-translational modifications of histones have been shown to be important in establishing and maintaining chromatin remodeling events leading to gene activation or silencing. Different modifications have different biological read-outs, and the marks on histones can therefore point toward a potential function. Using a combination of isotopic labeling and quantitative tandem MS, we show that human H3.1, H3.2, and H3.3 variants are enriched in different post-translational modifications, suggesting separate biological functions for each of the variants. As has been shown previously in Drosophila and Arabidopsis, H3.3 is enriched in modifications associated with transcriptional activation (7,8). These observations are both interesting and important, because they suggest that the function of H3.3 has been evolutionarily conserved.
These studies also serve as a key internal control for our MS/MS analysis of human H3.1 and H3.2, where no data are available to date. H3.2 is found in all eukaryotes except budding yeast and has been implicated in gene silencing. Our data support these observations, as we find that H3.2 is enriched in Lys 27 di-and trimethylation. These generally repressive marks have been associated with gene silencing and the formation of facultative heterochromatin (reviewed in Ref. 25). Unexpectedly, we find that H3.1 has evolved to contain a distinct covalent modification spectrum as compared with H3.2 and H3.3. H3.1 is enriched in Lys 9 dimethylation, a modification associated with gene silencing (reviewed in Ref. 26) as well as Lys 14 acetylation, a modification we find on H3.3, and a novel mark, Lys 64 monomethylation. These data show that the three human H3 variants differ in their post-translational modifications and therefore suggest that each variant is likely to perform a different biological function.
We show that mammalian cell lines (human and mouse origin) can be divided into two groups (A and B) that differ in their expression levels of H3 variants. Our data suggest that neither the ploidy status of the cell nor its growth rate is an explanation for the variant usage detected in our studies. As expected, we found that H3.3A and B gene expression is high also outside of S-phase, whereas H3.1 and H3.2 gene expression is low, in accordance with the general notion that H3.3 is a RI and H3.1 and H3.2 are RD-expressed genes. One can envision that H3.3 is expressed at all cell cycle stages to allow its incorporation into chromatin, and the subsequent activation of previously silenced genes upon appropriate outside stimuli. Interestingly, however, we also found that the RI H3.3 variant genes, which are described by many groups to be deposited into chromatin in a replication-independent manner (5), are also up-regulated during S-phase, similar to H3.1 and H3.2 genes. These results suggest that during S-phase, when the DNA content is doubled, the expression of all H3 variants is up-regulated to provide the cell with the materials to heritably maintain its nucleosomal composition in both daughter cells.
Having ruled out other possibilities, our results suggest that H3 variant composition correlates either with the tissue-, species-, or most interestingly, developmental origin of the cell in each group. Cells of embryonic origin contain more H3.3 compared with H3.1 and H3.2, whereas cells derived from adult tissue have more H3.1 protein than H3.2 and H3.3. One intriguing possibility would be that during the differentiation of certain cell types the ratio of these variants changes. In the case of neuronal cells the proportion of H3.3 increases during differentiation, as has been described previously (27), and is similar to what we observe in RA-treated embryonic stem cells (see Fig. 1C, bracket). However, other cell types might behave differently from neuronal cells. A previous study reported that, during terminal differentiation of murine erythroleukemia cells, incorporation of H2A variants, but not H3.3, into chromatin rapidly increases although these cells stopped dividing (28). In support, Urban and Zweidler (29) found changes in the proportion of H3.2 and H3.3 during chicken development. Dramatic increases of H3.3 were found in liver and kidney, but not other tissues, where the amounts of H3.2 protein remained relatively high (29). However, we cannot rule out the possibility of other more trivial explanations. Variant gene copy numbers between human and mouse, for example, could account for the observed differences in H3 variant proportions. Despite these uncertainties, our data underscore the importance to distinguish the three H3 variants from each other in future studies.
Our interesting observation that during RA-treatment of murine ES cells the levels of H3 variants slightly change (modest increase of H3.3 during the first 6 days of treatment, and a slight drop of H3.3 and increase in H3.2 levels at day 10) parallels the report from a recent study by Chambeyron and Bickmore (30). This report describes the nuclear reorganization of the HoxB locus upon RA-treatment of murine ES cells (OS25). Interestingly, they suggest that higher-order chromatin structure regulates the expression of the HoxB gene cluster. Upon induction with RA, the Hoxb1 locus loops out away from the chromosomal territories with kinetics that parallel those of its transcription, so that when Hoxb1 expression is silenced after day 4, the frequency and extent of its looping also decrease. The later expressed gene locus of Hoxb9 does not loop out until day 10. Chambeyron and Bickmore (30) also show that chromatin compaction and nuclear organization represent a level of chromatin structure that is not simply a reflection of underlying histone acetylation. The kinetics of HoxB locus reorganization parallels our observed modest changes of H3.2 and H3.3 levels over time in our RA-treated murine ES cells. One exciting explanation would be that H3 variants are involved in the nuclear organization of chromatin, with H3.1 associated with irreversibly silenced gene loci, H3.2 with reversibly silenced and H3.3 with active gene loci. Future studies will have to determine if H3 variants might play a role in the organization of the nuclear architecture.
This is, to our knowledge, the first comprehensive study of the three mammalian H3 variants, H3.1, H3.2, and H3.3, addressing both their level of expression and their post-translational modifications. Our data point to the existence of a regulatory mechanism in mammalian cells that is more complex than that in lower eukaryotes. We suggest that the three H3 variants might have different biological functions that are based on differences in covalent modification patterns. Our findings also suggest that a prevailing view, namely, that RI-coupled assembly leads to the incorporation of H3.3 into non-replicating chromatin thereby replacing H3.1 and H3.2 over time, may not account for all biological phenomena in which these H3 variants participate.
Extreme Value Theory: A New Characterization of the Distribution Function for the Mixed Method
Consider the sample X1, X2, …, XN of N independent and identically distributed (iid) random variables with common cumulative distribution function (cdf) F, and let Fu be their conditional excess distribution function. We define the ordered sample by X1 ≤ X2 ≤ ⋯ ≤ XN. Pickands (1975) and Balkema and de Haan (1974) showed that, for a large class of underlying distribution functions F and large u, Fu is well approximated by the generalized Pareto distribution. The mixed method is a method for determining thresholds; it consists in minimizing the variance of a convex combination of other thresholds. The objective of the mixed method is to determine by which probability distribution one can approximate this conditional distribution. In this article, we propose a theorem which specifies the conditional distribution of the excesses when the deterministic threshold tends to the end point.
Introduction
The Pareto distribution is traditionally used by reinsurers for excess-of-loss covers, mainly because of its good mathematical properties, particularly the simplicity of the formulas resulting from its application. The new mixed method (MM) was proposed in [1,2,3,4] to determine a threshold U = ∑_{i=1}^{2} α_i U_i + α_3 U_3, at which a unit is declared atypical, by minimizing the variance of a convex combination of the thresholds obtained by the mean excess function and the generalized Pareto distribution (extreme quantiles were estimated with a probability of 99.9%, being an extreme value for the distribution of claim amounts, with a confidence level of 95%). This method allows a compromise between the GPD method and the FME (mean excess function) method, between a minimum strategy (GPD) and a maximum strategy (FME). It is more correlated with the GPD method and relatively smooth.
Method
This article focuses on two major paragraphs. The first paragraph (see paragraph 3.1) determines a threshold U = ∑_{i=1}^{2} α_i U_i + α_3 U_3 by the mixed method (MM), and the last paragraph (see paragraph 3.2) determines a distribution function of the laws of the mixed method. Let U_3 be the threshold beyond which a unit is declared extreme, obtained by the GPD function, and U the threshold beyond which a unit is declared extreme, obtained by the mixed method (MM). Let X_1, X_2, …, X_N be iid random variables with common distribution function F. Starting from the distribution F, we define the conditional distribution F_{U_3} relative to the threshold U_3 for the random variables exceeding this threshold. The excesses over the threshold U_3 are defined as the set of random variables Y_j = X_i − U_3 for i ∈ A_3 = {i ∈ {1, 2, …, N} : X_i > U_3}. Thus, for a large threshold U_3, the law of the excesses is approximated by a generalized Pareto law: F_{U_3}(y) ≈ G_{ξ,σ(U_3)}(y). In this article, we will show that F_{U_3}(y) can likewise be approximated by G_{ξ,σ(U)}(y), where G_{ξ,σ(U)} is the distribution function of the law of the mixed method and U is the threshold beyond which a unit is declared extreme, obtained by the mixed method (MM).
The theorem of Pickands (1975) and Balkema and de Haan (1974) assures us that the law of the excesses may be approximated by a generalized Pareto law. In this article, we will use this theorem to show that the law of the excesses can also be approximated by the law of the mixed method.
Determination of Threshold U By the Mixed Method (MM)
The new mixed method (MM) was proposed in [1,2,3,4] to determine a threshold U = ∑_{i=1}^{2} α_i U_i + α_3 U_3, at which a unit is declared atypical, by minimizing the variance of a convex combination of the thresholds obtained by the mean excess function and the generalized Pareto distribution (extreme quantiles were estimated with a probability of 99.9%, being an extreme value for the distribution of claim amounts, with a confidence level of 95%).
Let U_1 be the threshold beyond which a unit is declared extreme, obtained by the record values; U_2 the threshold beyond which a unit is declared extreme, obtained by the mean excess function; and U_3 the threshold beyond which a unit is declared extreme, obtained by the GPD function, with U_1 < U_2 < U_3. Let U = αU_i + (1 − α)U_j, with 0 < α < 1, minimize the variance Var(αU_i + (1 − α)U_j), i, j = 1, 2, 3 and i < j. Consider the sample X_1, X_2, …, X_N of independent and identically distributed (iid) random variables. We define the ordered sample by X_(1) ≤ X_(2) ≤ ⋯ ≤ X_(N). Let U_j, j = 1, 2, 3, be the thresholds obtained by the different methods. We consider the statistical series of a variable X relative to U_j, taking the values x_1, x_2, …, x_N, which have been sorted in ascending order: x_1 ≤ x_2 ≤ ⋯ ≤ U_j ≤ ⋯ ≤ x_N. We consider a statistical series of two variables X and Y, taking the values x_1, x_2, …, x_N and y_1, y_2, …, y_N, which have been sorted in ascending order: x_1 ≤ x_2 ≤ ⋯ ≤ x_N and y_1 ≤ y_2 ≤ ⋯ ≤ y_N. The means of X and Y are X̄ = (1/N) ∑_{i=1}^{N} x_i and Ȳ = (1/N) ∑_{i=1}^{N} y_i.
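The variance-minimizing weight for a convex combination αX + (1 − α)Y of two estimators has the classical closed form α* = (Var Y − Cov(X, Y)) / (Var X + Var Y − 2 Cov(X, Y)). A minimal Python sketch of this step, using toy bootstrap-style replicates rather than the paper's claims data (all names and numbers are illustrative assumptions):

```python
import numpy as np

def min_variance_weight(x, y):
    """Weight a* minimizing Var(a*X + (1-a)*Y), clipped to [0, 1]."""
    var_x, var_y = np.var(x, ddof=1), np.var(y, ddof=1)
    cov_xy = np.cov(x, y, ddof=1)[0, 1]
    a = (var_y - cov_xy) / (var_x + var_y - 2.0 * cov_xy)
    return float(np.clip(a, 0.0, 1.0))

# Toy bootstrap-style replicates of two threshold estimates (hypothetical):
rng = np.random.default_rng(0)
u2_samples = rng.normal(10.0, 2.0, 500)   # e.g. mean-excess threshold U_2
u3_samples = rng.normal(12.0, 1.0, 500)   # e.g. GPD threshold U_3
a = min_variance_weight(u2_samples, u3_samples)
U = a * u2_samples.mean() + (1.0 - a) * u3_samples.mean()
```

Because a is clipped to [0, 1], the mixed threshold U always lies between the two component estimates.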
Example 1. Threshold Calculation
The database provides a sample of 2020 observations for four-wheel vehicles for personal use during the year 2013. The data come from a Malian insurance company and concern the amounts of claims caused by the insureds of one risk class. This file contains only the claim amounts for the insurance year. U_1 is the threshold beyond which a unit is declared extreme, obtained by the record values; U_2 the threshold obtained by the mean excess function; U_3 the threshold obtained by the GPD function; and U the threshold obtained by the method MM.
Let N be the number of claims and x_1, x_2, …, x_N the realizations of X, the random variable representing the loss amounts. As usual we assume mutual independence of the random variables, with ξ = −0.0…
Law (Distribution) of the Mixed Method
In this section, we give the main result of this paper, which is to write a new law of the mixed method (MM). Let U_3 be the threshold beyond which a unit is declared extreme, obtained by the GPD function, and U the threshold beyond which a unit is declared extreme, obtained by the mixed method (MM). Let X_1, X_2, …, X_N be iid random variables with common distribution function F. Starting from the distribution F, we define the conditional distribution F_{U_3} relative to the threshold U_3 for the random variables exceeding this threshold. The excesses over the threshold U_3 are defined as the set of random variables Y_j = X_i − U_3 for i ∈ A_3 = {i ∈ {1, 2, …, N} : X_i > U_3}. The objective of the mixed method is to determine by which probability distribution one can approximate this conditional distribution. In this article, we propose the following theorem (Theorem 2), which specifies the conditional distribution of the excesses when the deterministic threshold tends to the end point x_F.
Theorem 1 (Pickands (1975), Balkema and de Haan (1974)): Let F_{U_3} be the conditional distribution of the excesses over a threshold U_3, associated with the unknown distribution function F. The function F belongs to the domain of attraction of H_ξ if and only if there exists a positive function σ(·) such that

lim_{U_3 → x_F} sup_{0 ≤ y ≤ x_F − U_3} |F_{U_3}(y) − G_{ξ,σ(U_3)}(y)| = 0,

where G_{ξ,σ(U_3)} is the distribution function of the GPD, defined by

G_{ξ,σ(U_3)}(y) = 1 − (1 + ξy/σ(U_3))^(−1/ξ) if ξ ≠ 0, and G_{0,σ(U_3)}(y) = 1 − exp(−y/σ(U_3)),

for y ∈ [0, x_F − U_3] if ξ ≥ 0 and y ∈ [0, −σ(U_3)/ξ] if ξ < 0. G_{ξ,σ(U)} is the distribution function of the mixed method (MM), defined by

G_{ξ,σ(U)}(y) = 1 − (1 + ξy/σ(U))^(−1/ξ) if ξ ≠ 0, and G_{0,σ(U)}(y) = 1 − exp(−y/σ(U)),

for y ∈ [0, x_F − U] if ξ ≥ 0 and y ∈ [0, −σ(U)/ξ] if ξ < 0.
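The GPD distribution function appearing in the theorem can be evaluated numerically; a minimal sketch, with the scale σ treated as a plain number (the function name is an assumption, not the paper's notation):

```python
import math

def gpd_cdf(y: float, xi: float, sigma: float) -> float:
    """Generalized Pareto distribution function G_{xi,sigma}(y).

    Support: y >= 0 when xi >= 0, and 0 <= y <= -sigma/xi when xi < 0.
    """
    if y < 0:
        return 0.0
    if xi == 0.0:                       # exponential limit case
        return 1.0 - math.exp(-y / sigma)
    z = 1.0 + xi * y / sigma
    if z <= 0.0:                        # beyond the upper end point (xi < 0)
        return 1.0
    return 1.0 - z ** (-1.0 / xi)
```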
Proof:
The conditional distribution of the excesses above the threshold u, with u < x_F, is defined by

F_u(y) = P(X − u ≤ y | X > u) = (F(u + y) − F(u)) / (1 − F(u)), for 0 ≤ y ≤ x_F − u.

This is equivalent to

F(x) = (1 − F(u)) F_u(x − u) + F(u), for x ≥ u.
U_1 is the threshold beyond which a unit is declared extreme, obtained by the record values; U_2 the threshold obtained by the mean excess function; U_3 the threshold obtained by the GPD function; and U the threshold obtained by the MM function. Let N be the number of claims and x_1, x_2, …, x_N the realizations of X, the random variable representing the claim amounts.
Example 3: Threshold Calculation By the Graphical Method.
Knowledge of the parameters (σ, ξ) allows one to determine graphically the threshold U_3 by the GPD method and U by the MM method (mixed method). To do this, we wrote a program in the MAPLE software to determine these thresholds.
Table 1. Determination of the threshold U by the mixed method (MM)
Table 2. Knowing the parameters (σ, ξ) and the thresholds, we can write the distribution functions of the GPD and the MM.
Flexible Neural Electrode Array Based-on Porous Graphene for Cortical Microstimulation and Sensing
Neural sensing and stimulation have been the backbone of neuroscience research, brain-machine interfaces and clinical neuromodulation therapies for decades. To-date, most of the neural stimulation systems have relied on sharp metal microelectrodes with poor electrochemical properties that induce extensive damage to the tissue and significantly degrade the long-term stability of implantable systems. Here, we demonstrate a flexible cortical microelectrode array based on porous graphene, which is capable of efficient electrophysiological sensing and stimulation from the brain surface, without penetrating into the tissue. Porous graphene electrodes show superior impedance and charge injection characteristics making them ideal for high efficiency cortical sensing and stimulation. They exhibit no physical delamination or degradation even after 1 million biphasic stimulation cycles, confirming high endurance. In in vivo experiments with rodents, same array is used to sense brain activity patterns with high spatio-temporal resolution and to control leg muscles with high-precision electrical stimulation from the cortical surface. Flexible porous graphene array offers a minimally invasive but high efficiency neuromodulation scheme with potential applications in cortical mapping, brain-computer interfaces, treatment of neurological disorders, where high resolution and simultaneous recording and stimulation of neural activity are crucial.
Graphene has recently become a desirable material for neural applications owing to superior properties such as high conductivity [30][31][32], flexibility 30,33, transparency 30,34 and biocompatibility 30,35. Studies involving neural network cultures on graphene-based substrates have shown promising results on graphene's biocompatibility 35,36. Transparent neural electrodes based on monolayer graphene have been recently demonstrated, enabling simultaneous electrophysiology and neuroimaging 37. Although the optical transparency of monolayer graphene is appealing for neuroimaging applications, the flat surface of 2D graphene limits charge transfer capacity, significantly impeding the efficiency of microstimulation. Here we present a flexible cortical array for cortical recording and microstimulation based on porous graphene, which is directly grown on a polyimide substrate using laser pyrolysis. Laser pyrolysis produces three-dimensional graphene foam [38][39][40][41][42], scalable to large area fabrication. Direct growth on the substrate eliminates the delamination problem associated with coatings. High density microelectrode arrays built on the polyimide substrate exhibit drastically low impedance, high charge injection capacity and flexibility, making them ideal for cortical recording and microstimulation. We have demonstrated in in vivo experiments with rodents that the low impedance of porous graphene microelectrodes allows recording of low-amplitude evoked potentials from the rat's somatosensory cortex with high signal-to-noise ratio. High efficiency cortical stimulation in the motor cortex has been shown to evoke transient ankle and knee flexion using the same arrays in consecutive recording and stimulation experiments.
Results and Discussion
We fabricated porous graphene arrays using the process flow shown in Fig. 1a-c. Porous graphene spots were first patterned on a polyimide film using direct laser pyrolysis, as illustrated in Fig. 1a. Limited by the spot size of the CO2 laser, which is about 130 μm, and the resolution of the software that drives the laser head, direct laser pyrolysis provides porous graphene spots of 250 μm or larger side length with acceptable consistency and uniformity for the subsequent lithography. For patterning smaller structures, an indirect approach with a shadow mask during pyrolysis was also investigated, as shown in Figure S1 in Supplementary Information. 100 nm thick and 25 μm wide Au wires were patterned using e-beam evaporation, photolithography and chemical etching (Fig. 1b). A buffer region of 50 μm width between the wire and the nearest porous graphene spot is necessary to maintain the uniformity of the wires, because the direct laser pyrolysis changes the topography of the polyimide film adjacent to the porous graphene spots. SU8 was deposited and patterned as the encapsulation layer. Figure 1d shows the picture of a fabricated 64-electrode porous graphene array. A tilted SEM image is shown in Fig. 1e, with the inset showing an individual electrode. The SEM images confirm the porous morphology of the electrode surfaces and also show that the spatial resolution is 500 μm. With the aforementioned shadow mask method or micropatterning, the spatial resolution can be scaled down to 50 μm. The temporal resolution of the recordings depends on the sampling frequency of the recording system (Intan RHD2000 Evaluation System). We used a sampling frequency of 10 kHz, leading to a temporal resolution of 0.1 ms. The impedance of the 64 electrodes in 0.01 M phosphate-buffered saline (PBS) solution at 1 kHz is shown in Fig. 1f. 61 out of 64 electrodes exhibit impedances within 2 kΩ to 8 kΩ, confirming the uniformity and high yield (95%).
A chemical doping technique was adopted to reduce the sheet resistance of monolayer graphene 43,44. For porous graphene, chemicals can penetrate into the foamy structure and may yield better stability compared with planar graphene. We observed that doping with nitric acid substantially decreased the impedance and increased the charge storage and injection capacity (Figures S3 and S4, Supplementary Information). The impedance of doped samples was found to remain relatively stable in PBS solution over 28 days, implying enhanced long-term stability of chemical doping for porous graphene (Figure S5 in Supplementary Information).
Porous graphene in various dimensions and geometries can be simply patterned on polyimide substrates using a computer-controlled CO2 laser machining system. Besides being flexible, the polyimide substrate provides mechanical support to the porous graphene layer. During pyrolysis on polyimide films, the localized temperature rises above 2500 °C, which breaks the C–C, C=O and N–C bonds, as confirmed by the dramatically decreased oxygen and nitrogen contents. The aromatic compounds then rearrange to form graphene structures 31. Raman spectra of the porous graphene samples acquired at different power levels are shown in Fig. 2a. Three dominant peaks, the G peak at 1580 cm−1, the 2D peak at 2700 cm−1, and the D peak at 1350 cm−1, are observed. The D peak represents defects or bent sp2 carbon bonds. The increase in laser power results in a decrease in the D peak initially and a subsequent increase with power beyond 5.5 W, as shown by a minimum in the intensity ratio of the D and G peaks in Fig. 2b. Figure 2c shows the SEM image of the porous morphology of the graphene foam, which indicates pore sizes around 0.2 μm. The porous three-dimensional network leads to a large effective surface area, and hence significantly improves the charge injection capacity. Figure 2d shows the cross-section view of a large-area blanket porous graphene film on polyimide.
Detailed electrochemical measurements were performed on individual electrodes (Gamry Reference 600 Potentiostat). The impedance of a porous graphene electrode (Sample #1) was found to be approximately two orders of magnitude smaller than that of a similar-size Au electrode. Chemical doping further decreased the impedance, resulting in an impedance of 519 Ω at 1 kHz, as shown in Fig. 3a. The low impedance of the porous graphene electrodes confirms its potential for scaling the electrode dimensions and spatial resolution down to 10 μm. Cyclic voltammetry (CV) measurements were carried out to compare the charge storage capacity of porous graphene and Au electrodes (Fig. 3b). While the water window of the Au electrode was −0.8 V to 0.8 V, it was extended to −1.5 V to 0.8 V for the porous graphene electrode, as determined by the sharp increase in oxidation and reduction currents. Carbon species usually show a larger water window than metal electrodes, which is essential for exhibiting enhanced charge transfer capacity 21. The CIC is defined as the maximum quantity of charge that an electrode can inject without reaching beyond the water window, which limits the maximal safe current stimulus. Symmetric biphasic cathodal-first current pulses with 400 ms pulse width and an interphase period of 100 ms were applied to the electrodes. Voltage transients were measured to determine the maximum polarization, i.e. the most negative (E_mc) and most positive (E_ma) voltages across the electrode/electrolyte interface. Maximum polarization is reached when either E_mc or E_ma exceeds the water window. Figure 3c displays the voltage transient of an undoped porous graphene electrode, with E_mc and E_ma reaching −0.98 V and 0.8 V, respectively. The injection current was 4.4 mA. Figure 3d shows the voltage transient of a doped porous graphene electrode, for which the injection current increased to 7 mA. Doping helped to increase the CIC from 2 mC/cm² to 3.1 mC/cm².
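The CIC definition above reduces to the charge injected per phase divided by the geometric electrode area. A minimal sketch, assuming the 250 μm × 250 μm electrode opening reported in the Methods and the endurance-test pulse parameters; it is illustrative and does not reproduce the paper's reported CIC values:

```python
def charge_injection_capacity(i_amp: float, pulse_s: float, area_cm2: float) -> float:
    """Charge per phase (current x pulse width) divided by geometric
    electrode area, returned in mC/cm^2."""
    return i_amp * pulse_s / area_cm2 * 1e3   # C/cm^2 -> mC/cm^2

# 250 um x 250 um electrode opening (from the Methods section)
area_cm2 = 0.025 * 0.025                      # = 6.25e-4 cm^2
# Endurance-test pulse: 0.75 mA, 400 us per phase
cic = charge_injection_capacity(0.75e-3, 400e-6, area_cm2)
```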
The high CIC results were consistent across many samples fabricated in different batches (Figures S2-4). Table S1 in Supplementary Information compares the charge storage capacity (CSC_c) and CIC for neural electrodes made of various materials. Coating layers, such as iridium oxide, poly(3,4-ethylenedioxythiophene) (PEDOT) and carbon nanotubes, have been employed for increasing the electrodes' surface area and hence charge transfer capability. However, mechanical failures due to cracking and delamination pose a threat to the surrounding tissue, limiting their use for long-term chronic studies or implantable medical systems 17,18. For example, thicker PEDOT coatings have been observed to suffer more cracking and delamination due to the higher stress imposed on the film 18. A threshold between 2 mC/cm² and 3 mC/cm² CIC for delamination of iridium oxide has been shown in in vivo experiments 17. On the contrary, porous graphene is formed directly by pyrolysis of the bulk polyimide films, providing substantially stronger adhesion. In this work, all the samples were repeatedly used in charge injection measurements and in vivo cortical stimulations, with charge injections as high as 3.1 mC/cm². However, no change in impedance or physical appearance of the electrodes was observed. Furthermore, SEM inspections of the porous graphene electrodes after soaking in PBS solution for 30 days show no obvious delamination or physical degradation compared with an unused electrode (Figure S6). The mechanical durability of porous graphene has been previously demonstrated for supercapacitor applications 45. We tested stimulation performance and cycling endurance by subjecting the porous graphene electrodes to continuous biphasic, cathodal-first, charge-balanced current pulses with 0.75 mA amplitude and 400 μs durations for the cathodic and anodic phases. Figure 3e shows the stable voltage window over 1 million stimulation cycles.
Initial fluctuations in the voltage window are attributed to the impedance fluctuations of the porous graphene due to activation/inactivation of defect sites in the fresh sample. No physical degradation of the porous graphene due to charge injection is observed after 1 million cycles. Figure 3f shows that the impedance of the electrode only slightly increased and it was still low enough to allow high charge injection capacity after 1 million stimulation cycles.
In vivo neural recording experiments were performed on adult rat models. An anaesthetized rat was placed with its head fixed in a stereotaxic apparatus. A craniotomy exposed a 4 mm × 4 mm region of the right barrel cortex. A 16-electrode array was placed on the exposed cortical surface, as shown in Fig. 4a. Recordings were taken in reference to a distant stainless steel bone screw inserted through the skull during the surgery. In Fig. 4b, a representative example of a 10-second recording from one of the electrodes in the array shows spontaneous up and down states of barrel cortex activity, implying active and inactive states of neuronal networks. The average power spectral density computed from the entire 5-minute recording exhibits three prominent oscillations with center frequencies of 0.8 Hz, 40 Hz, and 90 Hz (marked by gray arrows in Fig. 4c). These frequencies correspond to delta, low gamma, and high gamma rhythms, physiological oscillations generated by the brain.
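An average power spectral density of the kind described here can be estimated with an averaged-periodogram (Welch-style) approach. A minimal sketch on a synthetic 40 Hz "low gamma" tone at the paper's 10 kHz sampling rate; the signal, window length, and overlap are illustrative assumptions, not the paper's analysis settings:

```python
import numpy as np

def welch_psd(x, fs, nperseg):
    """One-sided PSD via Hann-windowed, 50%-overlapping averaged periodograms."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    segs = [x[i:i + nperseg] * win
            for i in range(0, len(x) - nperseg + 1, step)]
    scale = fs * (win ** 2).sum()
    psd = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0) / scale
    psd[1:-1] *= 2.0                          # fold negative frequencies in
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd

fs = 10_000.0                                 # sampling rate used in the paper
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 40.0 * t) + 0.1 * rng.standard_normal(t.size)
freqs, psd = welch_psd(x, fs, nperseg=4096)
peak_hz = float(freqs[np.argmax(psd)])        # should land near 40 Hz
```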
Field potentials at the pial surface of barrel cortex in the anesthetized rat were recorded, showing spontaneous up and down states across the 16-electrode array, as shown in Fig. 4d. Four down cycles, marked by the gray lines, can be seen in the 3-second segment of the recording with each of the 16 electrodes. The distribution of down state amplitudes varies on a cycle-by-cycle basis. The color maps show the relative amplitude interpolated across the array for each of the cycles. In order to assess the capability of recording both the spatial and temporal distribution of evoked potentials with porous graphene electrodes, whisker stimulation experiments were performed. A pair of needle electrodes was used to electrically stimulate the left mystacial pad, as illustrated in Fig. 4e. The somatosensory-evoked potentials (SEPs) recorded at the pial surface of barrel cortex by the 16-electrode array are shown in Fig. 4f. The amplitude and latency of the first positive peak of the SEPs varied systematically across the array, shown in Fig. 4g,h, respectively. Similar evoked potential recordings with a 64-electrode array are shown in Figure S8 in Supplementary Information.
After recording physiological oscillations and evoked potentials, the next step was to investigate cortical microstimulation with porous graphene arrays. Using an array placed over motor cortex this time, stimulus trains were applied to a rat animal model to evoke transient ankle and knee flexion in the contralateral leg, as illustrated in Fig. 5a. Figure 5b demonstrates the placement of the electrode array covering the motor cortex. Knee flexion was only activated when the stimulation was applied on a particular anode electrode overlapping with the leg area in the motor cortex. By changing the stimulation site and scanning across the array, we were able to localize the hindlimb area precisely, suggesting the use of electrical microstimulation with high density porous graphene arrays to map cortical areas with high resolution and precision. The local bipolar stimulus trains consisted of 17 anodic pulses with 0.2 ms pulse width and 3 ms inter-pulse interval. Movement of the hindlimb was measured using a resistive flex sensor spanning the knee joint, as shown in Fig. 5c. A voltage pulse was applied through the sensor, and the change of the corresponding current was measured simultaneously. The amplitude of the stimulus trains ranged from 0.5 mA to 1.5 mA. The higher the stimulus current, the stronger the movement recorded (Video S1, Supplementary Information). No movement was evoked for stimuli less than 0.75 mA (Video S2, Supplementary Information), and saturation of the response started at around 1.25 mA, as shown in Fig. 5e. We evaluated the electrical threshold for stimulation-induced tissue damage using the Shannon equation, which describes the boundary between tissue damaging and non-damaging levels of electrical stimulation based on empirical data 46. For in vivo cortical stimulation experiments, currents lower than 1.25 mA correspond to k parameters below 1.85, while achieving successful ankle and knee flexion control.
Previous experimental studies have also shown that no tissue damage would be induced by small surface electrodes (areas less than 0.01 cm 2 ) as long as stimulation is performed within the Shannon limit for k = 1.85 47 . Those findings suggest that porous graphene electrodes can be employed for high-efficiency safe electrical stimulation in therapeutic applications involving modulation of central and peripheral nervous system.
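The Shannon-limit argument above can be checked with a short calculation. Using the stimulation parameters reported here (0.2 ms anodic pulses, 250 µm × 250 µm electrode openings), a minimal sketch of the k computation, k = log₁₀(D) + log₁₀(Q):

```python
import math

def shannon_k(current_ma, pulse_width_ms, area_cm2):
    """Shannon parameter k = log10(D) + log10(Q), where Q is the charge
    per phase in uC and D the charge density in uC/cm^2 per phase."""
    q = current_ma * pulse_width_ms   # mA x ms = uC per phase
    d = q / area_cm2                  # charge density, uC/cm^2
    return math.log10(d) + math.log10(q)

# 250 um x 250 um electrode opening -> 6.25e-4 cm^2
area = (250e-4) ** 2
print(round(shannon_k(1.0, 0.2, area), 2))   # → 1.81, below the k = 1.85 limit
print(round(shannon_k(1.25, 0.2, area), 2))  # → 2.0
```

Consistent with the paper's statement, 1.0 mA pulses fall below k = 1.85 while 1.25 mA sits at k = 2.0.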
In summary, the flexible porous graphene electrode arrays presented in this paper could be a powerful tool for neuroscience research, particularly for electrical microstimulation, and high density spatio-temporal cortical mapping applications. High CIC and lack of delamination and degradation for porous graphene electrodes can open up new avenues for brain computer interfaces based on minimally invasive cortical stimulation. The elimination of depth electrodes could improve the efficiency of clinical treatments, such as deep brain stimulation for Parkinson's and responsive neuro-stimulation for epilepsy.
Methods
The methods were carried out in accordance with the relevant guidelines. Fabrication of 3D porous graphene by direct laser pyrolysis. A laser engraving and cutting system (PLS6.75, Universal Laser Systems Inc.) was used to irradiate polyimide films (50 μm thick, Kapton). A CO2 laser with a wavelength of 10.6 μm was used.
Raman spectroscopy. Raman spectra of the porous graphene were taken by NTEGRA Spectra (NT-MDT Co.) system with a 532-nm laser excitation source.
Porous graphene electrode fabrication. After the patterning of the porous graphene, the polyimide film was cleaned with acetone, isopropyl alcohol, and deionized water, followed by de-moisturizing on a hotplate at 150 °C for 5 minutes. The polyimide film was then attached to a 4-inch Si wafer spin-coated with polydimethylsiloxane (PDMS), which helps keep the film flat during all subsequent processes. Cr/Au (10 nm/100 nm) layers were deposited by electron-beam evaporation, and the metal wires and contact pads were patterned with S1818 photoresist and wet etching. A 9 μm thick SU8-2007 layer was spin-coated and patterned for encapsulation. Electrode openings were 250 μm × 250 μm. Doping in nitric acid (70%) for 30 seconds helped to decrease the impedance of the electrodes.

Electrochemical characterization. A Gamry Reference 600 potentiostat was connected in the standard three-electrode configuration in 0.01 M PBS solution. The counter electrode was Pt and the non-current-carrying reference electrode was Ag/AgCl. EIS measurements were taken between 0.1 Hz and 300 kHz using a 10 mV RMS AC voltage. For CV tests, the potential of the working electrode was swept three times across the water window at a scan rate of 100 mV/s. In chronopotentiometry measurements, three successive identical biphasic current pulses were applied after stabilization of the system, and the simultaneous voltage transient was recorded. CSC c is calculated from the time integral of the cathodic current of the CV within the water electrolysis window, divided by the geometric surface area of the electrode. CIC is calculated from the current pulse divided by the geometric surface area of the electrode.
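As a numerical illustration of the CSC c definition above (time integral of the cathodic CV current divided by geometric area), a sketch with a synthetic current trace; the trace values below are made-up placeholders, not measured data:

```python
import numpy as np

def csc_cathodic(t_s, i_a, area_cm2):
    """Cathodic charge storage capacity in mC/cm^2: time-integrate the
    cathodic (negative) part of the CV current and normalize by the
    geometric electrode area."""
    i_cath = np.abs(np.clip(i_a, None, 0.0))  # keep only negative current
    # trapezoidal integration over time -> charge in coulombs
    charge = np.sum(0.5 * (i_cath[1:] + i_cath[:-1]) * np.diff(t_s))
    return 1e3 * charge / area_cm2

# Synthetic placeholder trace: a constant 1 uA cathodic current for 1 s
# over a 250 um x 250 um electrode (6.25e-4 cm^2).
t = np.linspace(0.0, 1.0, 101)
i = np.full_like(t, -1e-6)
print(round(csc_cathodic(t, i, 6.25e-4), 2))  # → 1.6
```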
In vivo neural recording and stimulation. Experimental procedures were approved by the Institutional Animal Care and Use Committee of the University of Pennsylvania. Two rats were used. Each rat was anesthetized with a ketamine (60 mg/kg) and dexdomitor (0.25 mg/kg) solution and placed in a stereotaxic frame. At the concentrations used in the experiments, the ketamine-dexdomitor solution put the animal at a surgical plane of anesthesia (loss of eyeblink and withdrawal reflexes, 60-80 breaths/min), as required for performing a craniotomy. A craniotomy was performed to expose the right barrel cortex (recording experiments) or motor cortex (stimulation experiment). A skull screw was placed in the left frontal bone to serve as the reference electrode for the recordings. The array was placed on the exposed cortical surface. For recording evoked activity, a pair of needles was used to electrically stimulate the left mystacial pad. Wide-band (0.35-7500 Hz) evoked and spontaneous cortical activity was recorded at 25 kS/s (ZC16, PZ2, RZ2, Tucker-Davis Technologies).
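Evoked responses such as the SEPs are conventionally extracted from wide-band recordings like these by stimulus-triggered averaging. A minimal sketch on synthetic data (the 25 kS/s rate follows the recording setup described here; window lengths, amplitudes, and the test waveform are illustrative assumptions):

```python
import numpy as np

FS = 25000  # samples/s, matching the 25 kS/s recording rate

def triggered_average(signal, trigger_idx, pre_s=0.01, post_s=0.04):
    """Average fixed windows around each stimulus trigger to pull the
    evoked response out of the ongoing background activity."""
    pre, post = int(pre_s * FS), int(post_s * FS)
    snips = [signal[i - pre:i + post] for i in trigger_idx
             if i - pre >= 0 and i + post <= len(signal)]
    return np.mean(snips, axis=0)

# Synthetic demonstration: a 100 uV evoked bump buried in 50 uV noise.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 50e-6, FS * 10)         # 10 s of noise (volts)
triggers = np.arange(FS, FS * 9, FS // 2)     # a stimulus every 0.5 s
evoked = 100e-6 * np.hanning(int(0.02 * FS))  # 20 ms evoked waveform
for k in triggers:
    sig[k:k + len(evoked)] += evoked
avg = triggered_average(sig, triggers)        # evoked peak emerges from noise
```

Averaging over the 16 stimuli reduces the background noise by a factor of four, so the evoked waveform stands out clearly in `avg`.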
The Detection of Ichthyophonus hoferi in Naturally Infected Fresh Water Ornamental Fishes
Introduction
Ichthyophonus hoferi was first identified in cultured brown and brook trout in Germany by von Hofer in 1893. The organism, initially thought to be protozoan, was described and named as a fungus by Plehn and Mulsow in 1911; ultimately the parasite was classified into the Mesomycetozoea [1][2][3]. Signs are species-related and depend on the condition of the individual fish. Ichthyophonus hoferi causes a systemic granulomatous infection in its hosts. I. hoferi has produced infections in many different species of freshwater, estuarine and marine teleosts and adapts to a wide range of environmental conditions [4]. Ichthyophonus sp. has been reported from many temperate and some tropical waters. Couch in 1985 detected two cases of fishes, sea catfish and spot, infected by I. hoferi on the Gulf coast. Ichthyophonus sp. was reported for the first time in two cultured marine fish, Mugil capito and Liza saliens, from Spain [5]. The disease has been recorded from over 80 species of both marine and freshwater fishes and results in mass mortalities and economic losses [6]. The parasite has been detected in several commercial fish [7]. Among aquarium fish, Ichthyophonus hoferi was reported in the Sumatra barb (Systomus tetrazona) and black tetra (Gymnocorymbus ternetzi) by Reichenbach-Klinke in 1954 and 1955. Ozturk et al. in 2010 screened for this pathogen in several organs of goldfish (Carassius auratus). The discus fish (Symphysodon spp.) is a new ornamental host record for I. hoferi [8]. I. hoferi has been reported in herring in the northwest Atlantic (up to 25% infection) with associated reductions in the population, in U.K. waters with up to 10% infection and mortality, and in mackerel with up to 69% infection and haddock with up to 80% infection [9]. There are many reports that I. hoferi causes mortality in post-smolts of Atlantic salmon and Chinook salmon [10,11]. The purpose of the present study is to describe the morphology and pathology of I. hoferi as found in two species of ornamental fish in Iran.
Case Presentation
In a disease case from a local aquarium fish propagation center, an aquarium 40 cm long and 30 cm wide was examined because of swimming disorders in one black tetra (Gymnocorymbus ternetzi) and a swollen abdomen in one tiger barb (Puntius tetrazona). The aquarium held 35 adult fish of both species and was filled with tap water of Ahvaz city in a closed system. Seven fish from both species, five black tetras and two tiger barbs, were collected at random. After anesthesia the fish were euthanized, and the total length and total weight of each fish were measured and recorded. Tiger barbs had an average length of 5.2 ± 1.2 cm and an average weight of 1.5 ± 0.5 g; the black tetras averaged 5.3 ± 0.8 cm in length and 2.5 ± 0.3 g in weight. All organs were examined by routine clinical methods. On internal examination, many visible white nodules were seen in the spleens of the studied fish. In addition, one specimen had some nodules in the heart.
Microscopic examination
Wet and dry smears were prepared from the spleen tissues, and the dry smears were stained with Giemsa.
Tissue culture
Samples from infected tissues were taken and incubated in MEM medium (Eagle's minimum essential medium supplemented with 10% fetal bovine serum, 100 IU/ml penicillin, and 100 mg/ml streptomycin). Cultures were incubated at 25°C and examined daily under the microscope. After 2 weeks, a dry smear of the cultured tissue was prepared and stained by the Giemsa method. The black tetras showed black spots, or melano-macrophage centers, although these melanized centers were not seen in all examined tiger barbs. In addition, the outer walls of the schizonts showed a strong Periodic Acid-Schiff (PAS)-positive staining reaction.
Discussion
In this study, samples from two naturally infected species of ornamental fishes, black tetra and tiger barb, were studied and screened. The results demonstrated that the fish suffered from ichthyophoniasis. The disease is more frequently seen in marine fishes; although the studied fishes were freshwater species, they may have been fed contaminated marine food. Because host response differs substantially between species, obvious signs of disease may differ considerably among fish species [13]. Via the blood stream or lymphatic system, the parasite reaches all parts of the host body, especially blood-rich organs such as the liver, heart and spleen [11]. A spherical, thick-walled plasmodium is a common internal sign of infection by I. hoferi. In addition, non-specific signs may emerge, involving behavioral changes and changes associated with organs, such as swimming abnormalities and loss of pigment control. Other signs were lethargy, fluid accumulation, abdominal distension or enlargement of organs, wasting of body musculature, skin roughening, and increased mortality [13].
Histopathological sampling and examination
Samples from the heart, liver, intestine, gonads and spleen were cut and fixed in 10% phosphate-buffered formalin for histological examination. Using standard techniques, samples were processed and sections were stained with haematoxylin-eosin [12] or Periodic Acid-Schiff. Micrometric measurement was carried out using a Sa-Iran camera microscope and Axio-vision software.
Results
Infection with Ichthyophonus hoferi was found in the internal organs of all examined aquarium fish, five black tetras and two tiger barbs. During gross examination, white nodules were found on the spleens of both fish species. In one specimen (a tiger barb), some nodules were also seen in the heart. However, no external sign of infection was observed on the body surface of the naturally infected fish. The morphology and developmental stages of I. hoferi were similar in both species. Schizonts of Ichthyophonus hoferi were the most characteristic feature of squashed infected organs. The size of the schizonts, including the wall, varied from 54 µm to 182 µm across all specimens (average 85.10 µm); the schizonts of the black tetras were larger than those of the tiger barbs. Microscopic examination of wet-mount squashes of infected organs revealed thick-walled spherical bodies (resting schizonts) (Figure 1). Subdivision of the contents of each spherical body produces small uninucleate stages, driven by rotational movement within the wall of the spherical body (Woo and Bruno 2006). In the histopathological sections, encapsulated and un-encapsulated schizonts inside well-defined host cellular granulomas were observed. In some cases the encapsulated schizonts had 3-5 fibrous layers, or a fibrotic capsule, related to the passive phase, while some nodules had a thin fibrotic layer, representing the active phase (Figure 2). In addition, hydropic degeneration was observed in the infected spleen. The black spots (melanin reaction) detected around schizonts in infected spleen varied in size from 2.49 µm to 6.77 µm (average 4.57 µm). After incubation in Minimum Essential Medium, germination of the infective agent was seen, with formation of new budding, yeast-like stages of the plasmodium, non-septate germination tubes and terminating club-shaped cells from the Ichthyophonus schizonts.
(Figures 3-5). Two plasmodia were then free within the tissue, migrating away from an empty schizont (Figure 3). Histopathologic examination of the spleen lesions showed the presence of characteristic granulomas, consisting of resting schizonts surrounded by eosinophilic leucocyte infiltrations. Nodulation around schizonts was observed only in the spleen. The host immunological reactions to the disease usually occur in two phases, acute and chronic [16]. In the acute phase, as reported in herring by Sindermann and Scattergood in 1954, multiple germination of thick-walled schizonts, replacement of tissues by the parasite mass, and gross external and internal signs such as roughened skin and nodules are seen. In the chronic phase, connective tissue encapsulation of the schizonts and cell infiltration are detected. One of the characteristics of this parasite is the severe host response around the schizonts. Resting schizonts can be observed in tissues and appear roughly circular with a thick fibrous wall. The pathological examination in this study showed two forms of schizonts, undeveloped and developed, defined by one author, Hassan Rahimian (1998), as the passive and active forms of the parasite, respectively. In the active phase, the pathogen grows and spreads in the host's body with no or little microscopic reaction; in the passive form, single or multiple whitish nodules are observed [16]. It is important to note that "active" and "passive" refer to the parasite, while "acute" and "chronic" refer to the host. The undeveloped schizonts of I. hoferi are surrounded by thick layers of fibrous host reaction, whereas the developing schizonts are covered by thin fibrotic layers, indicating rapid growth [16]. In the present study, with respect to nodulation and host reaction, the active and passive phases were detected in tiger barb and black tetra, respectively.
These lesions usually consist of a granulomatous reaction surrounding a single schizont or group of schizonts [11]. This study showed some melanin pigments around the schizonts. Melanized centers usually appear in nodular form (Agius and Roberts 2003) and emerge in the late stage of chronic diseases such as Ichthyophonus infection, in response to tissue damage [17]. Agius in 1979b reported that fishes suffering from I. hoferi show raised pigmented macrophage aggregates (PMA). Roberts in 1975 reported that the function of these black spots is the deposition of pathogens, including parasitic and bacterial spores. In addition, Agius [17] showed that these pigments play an antigen-processing role in the immune response. In the present study, 84 per cent of the 35 adult tiger barbs and black tetras in the affected culture facility did not show any external signs on the body surface or muscles; however, the parasite involved the internal organs of all examined fish.
In the early stages of infection, there is usually a marked inflammatory reaction surrounding individual schizonts, followed by the deposition of a fibrous or connective tissue capsule [13]. In the histopathology slides, the host reaction showed two different responses. The infected spleens of black tetras had encapsulated schizonts with thick layers of fibrous reaction, whereas the encapsulated schizonts in the spleens of tiger barbs showed thin layers of fibrotic reaction. Sometimes there is no reaction around the schizonts; in this situation un-encapsulated schizonts emerge. It has been suggested that natural transmission of I. hoferi infection among marine fishes may occur by ingestion of infected food (especially fish) or directly from the water [11]. Natural transmission follows ingestion of resting schizonts or latent cysts, which transform into germ tubes or hyphae; amoeboblasts are then produced, penetrate the intestine wall, and are transported in the blood to the viscera, where they change into uni- or binucleate cysts and then grow into multinucleate cysts. Although the route of transmission in this study is not clear, possible routes include an infected facility, fishes, or water; polluted water of Ahvaz city is the most likely suspected factor in the spread of I. hoferi in our region. Five stages in the life cycle of this parasite have been detected: plasmodial bodies, multinucleated cysts, resting schizonts, germinating resting schizonts and conidial elements [18]. In haddock [13], herring [14] and plaice [13,14], the most obvious lesions occurred in the white muscle, heart, liver and kidney, respectively. The signs of the examined fishes were swimming disorders, lethargy, abdominal swelling, and mortality.
After opening the viscera of infected fish, only two stages, resting schizonts and multinucleated cysts, were seen. One of the important steps in identifying this parasite is germination of the schizonts; for this reason, the parasite was grown in MEM. In this case, non-septate germination tubes emerge when the host tissue dies in culture. The last stage (conidial elements), which we did not see, is always accompanied by tissue necrosis [19] and hyperplasia [20]. The inside of the schizonts showed a PAS-positive reaction, which is evidence of vacuoles storing polysaccharides [16]. As mentioned before, PAS staining, one way of confirming this type of parasite, showed a positive reaction in the walls. The first report of ichthyophoniasis in wild and cultured sea bass was made in 1990 by Sitja-Bobadilla and Alvarez-Pellitero. Rand and Cone [4] reported that in experimentally infected rainbow trout the highest numbers of lesions were associated with the liver, spleen, and kidneys, ranging from one to a few focal, creamy white patches at the end of week two to many confluent patches at the end of week six. Donaldson [21] suggested that the absence of elevated cortisol levels indicates only a limited stress-response period of I. hoferi in rainbow trout. However, Rand and Cone [4] suggested that cortisol elevation might not be a component of the host response to the natural disease process. Spanggaard and Huss [23] showed that different types of development of Ichthyophonus can be triggered by changes in important water factors such as pH, temperature, salinity, and carbon dioxide tension. The spherical cell, often called the resting schizont, is the most commonly detected stage of the parasite; it is thick-walled and surrounded by fibrous reaction tissue. Most or all of the pathogenesis of Ichthyophonus can be directly linked to the replacement, disruption, and atrophy of infected tissues by the proliferation of the parasite.
In some cases, the tissues of normal organs are almost completely replaced. In this study, vacuolation and fragmentation of the schizont cytoplasm were observed. This occurs because of the loss of integrity of the schizont walls and the lack of distinction between schizont organelles, which produces the observed vacuolation [13]. No tissue has been found to be immune from infection by I. hoferi; however, organs with a high blood supply seem to be more susceptible than other organs [23]. A survey conducted in the Al-Madaen drainage network showed that L. abu is one of the native host species of this parasite [24]. Similarly, Mansor et al. [25] reported one case infected by I. hoferi among five different species in the Tigris River, Iraq.
The aquarium industry is becoming a popular hobby in all parts of the world and is growing significantly in different countries, and Ahvaz city is no exception. Export and import of ornamental fishes from different countries may lead to outbreaks of different types of infections. Since the number of species introduced to Iran is considerable, facing numerous diseases is inevitable. Because a portion of the sewage system is released into the Karun River, which flows through Ahvaz city, the spread of pathogens from aquarium water and ornamental fishes to native fishes is quite probable. As mentioned before, this parasite was reported in native fishes of the Tigris River in Iraq, Iran's neighbor. The Tigris and Euphrates Rivers, flowing from Baghdad, and the Karun, from Ahvaz, join in the Arvand Rud, which then empties into the Persian Gulf (Figure 6). As a result, the pathogen is likely to spread from infected fishes and water of the rivers of Iraq and Iran to native fishes of the Persian Gulf. Overall, the main source of Ichthyophonus hoferi in these two freshwater ornamental fishes is not clear. This important disease has been reported in many regions of the world, although in our country there has been no report of ichthyophoniasis from native, cultured, or ornamental fishes. The present case can therefore be regarded as the first report in Iran.
Interpretation of Infrared Vibration-Rotation Spectra of Interstellar and Circumstellar Molecules
Infrared vibration-rotation lines can be valuable probes of interstellar and circumstellar molecules, especially symmetric molecules, which have no pure rotational transitions. But most such observations have been interpreted with an isothermal absorbing slab model, which leaves out important radiative transfer and molecular excitation effects. A more realistic non-LTE and non-isothermal radiative transfer model has been constructed. The results of this model are in much better agreement with the observations, including cases where lines in one branch of a vibration-rotation band are in absorption and another in emission. In general, conclusions based on the isothermal absorbing slab model can be very misleading, but the assumption of LTE may not lead to such large errors, particularly if the radiation field temperature is close to the gas temperature.
The Promise
Infrared spectroscopy of molecular vibration-rotation lines holds the promise of providing a valuable probe of chemical and physical conditions in interstellar and circumstellar gas. Vibration-rotation spectra complement pure rotational spectra in several important ways. First, molecules without permanent dipole moments, such as CH4, C2H2, and CO2, can be observed when vibration breaks the symmetry of the molecule. Second, because vibration-rotation lines are typically seen in absorption toward embedded young stars, they selectively probe gas in star-forming regions. And third, because vibration-rotation lines of different rotational states lie close together in wavelength, they can be observed with one instrument and telescope, and often simultaneously.
It seems that the interpretation of absorption lines should be straightforward. The optical depth of an absorption line depends primarily on the column density in the lower state, that is N J (v = 0), so the calculation of N J from the spectrum is simply a matter of dividing the equivalent width of a line by the line-strength factor obtained from laboratory spectroscopy. If many lines are observed, it is not even necessary to assume a thermal distribution of lower state populations, as the population of each rotational state can be measured. Of course, for optically thick lines curve-of-growth or other corrections for saturation may be necessary. This introduces uncertainties due to the generally unknown lineshape, but the problem can be avoided if necessary by observing optically thin lines of rare isotopes. This interpretation procedure is referred to as the isothermal absorbing slab model. Variations on it have been used by astronomers to interpret almost all infrared molecular absorption spectra.
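For an optically thin line, the "dividing by the line-strength factor" step is the classical relation N = W_λ / ((πe²/m_ec²) f λ²). A sketch of this bookkeeping with illustrative placeholder numbers (the equivalent width and oscillator strength below are not measured values):

```python
PI_E2_ME_C2 = 8.853e-13  # classical constant pi*e^2/(m_e*c^2), in cm

def column_density(w_cm, f_osc, lam_cm):
    """Lower-state column density (cm^-2) from the equivalent width of
    an optically thin absorption line; all lengths in cm."""
    return w_cm / (PI_E2_ME_C2 * f_osc * lam_cm ** 2)

# Illustrative placeholder values for a 13.7 um line: an equivalent
# width of 1e-6 cm and an oscillator strength f ~ 1e-5.
n = column_density(1e-6, 1e-5, 13.7e-4)
print(f"{n:.1e}")  # → 6.0e+16
```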
The Problems
There are several problems with the isothermal absorbing slab model: the dust which emits the 'background' continuum radiation is often mixed with the absorbing gas, the molecular gas emits as well as absorbs radiation, and there is a temperature gradient in the gas and dust around embedded sources.
As a first approximation, the effect of mixed gas and dust can be taken into account by considering the derived molecular column densities to be the columns to a depth in the source where the dust optical depth, τ , reaches one. But since the dust opacity is wavelength dependent, this depth generally differs for different molecules or different vibration-rotation bands, and may even differ substantially for lines within a band. In addition, the emergent continuum radiation at a given wavelength is not necessarily emitted from near τ = 1. If there is a temperature gradient in the dust, with the dust in the source being hotter at greater depth, emission by dust at τ > 1 may dominate the outgoing radiation, especially at wavelengths shortward of the peak of the Planck function. Without a radiative transfer calculation that takes these effects into account, the column of gas through which absorption line observations are made may be substantially underestimated, and even derived molecular abundance ratios may be in error.
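The point that the emergent continuum need not form near τ = 1 when the dust is hotter at depth follows from the formal solution I_λ = ∫ B_λ(T(τ)) e^(−τ) dτ. A minimal numerical sketch (the linear temperature law is an arbitrary illustrative choice, not a fit to any source):

```python
import numpy as np

H, C, K = 6.626e-27, 2.998e10, 1.381e-16  # cgs: erg s, cm/s, erg/K

def planck_lambda(lam_cm, t_k):
    """Planck function B_lambda in cgs units."""
    return 2 * H * C**2 / lam_cm**5 / np.expm1(H * C / (lam_cm * K * t_k))

def mean_emission_depth(lam_cm, t_of_tau, tau_max=20.0, n=4001):
    """Contribution-weighted mean optical depth of the emergent
    continuum, from I = integral of B(T(tau)) * exp(-tau) dtau."""
    tau = np.linspace(0.0, tau_max, n)
    w = planck_lambda(lam_cm, t_of_tau(tau)) * np.exp(-tau)
    return np.sum(tau * w) / np.sum(w)

# Dust hotter at depth: T rises linearly from 100 K at the surface.
depth = mean_emission_depth(7.5e-4, lambda tau: 100.0 + 40.0 * tau)
# At 7.5 um (Wien side at these temperatures) the emergent continuum
# forms well beyond tau = 1.
```

For this toy temperature law the mean emission depth lands at several optical depths, illustrating why the τ = 1 rule of thumb fails shortward of the Planck peak.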
The effect of emission in the molecular lines is even more difficult to determine and account for. If a source is spatially resolved and the molecular gas is in LTE at a known temperature, the equation of radiative transfer along the line of sight is easily solved. The result is that molecular lines saturate at an intensity equal to the Planck function at the line wavelength. But often the continuum source is unresolved and the absorbing and emitting gas may have different angular extents, which may not fall entirely within the observing beam. Even more problematic is the fact that collisional deexcitation cross sections are small (Gonzalez-Alfonso et al. 2002) and radiative rates are relatively large for vibration-rotation transitions, making the critical densities for vibrational thermalization large, typically > 10¹² cm⁻³. Consequently, vibrational LTE at the gas kinetic temperature could be a very poor approximation. It may be a better approximation to assume that all radiative excitations are followed by radiative decay. In a spherically symmetric situation in which the molecules lie in a shell separated from the continuum source, but within the observing beam, there may be no net absorption even if the lines are optically thick, since an equal number of photons are emitted into the beam as are absorbed out of the line of sight. The actual net absorption or emission depends on whether more or less molecular gas lies along the observing line of sight than along a typical direction and whether reemitted photons are absorbed by dust before escaping from the source region. In the case of sources with systematic motions, notably expanding shells around evolved stars, the emission and absorption may be separated spectrally by the Doppler effect.
In those cases high spectral resolution observations would avoid this problem, but in many other cases the systematic motions are smaller than turbulent linewidths or are unresolved, making it impossible to know how much the emission and absorption cancel.
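The quoted critical densities follow from the ratio of the spontaneous emission rate to the collisional de-excitation rate coefficient, n_crit ≈ A/q. A sketch with order-of-magnitude placeholder rates (not values taken from this paper):

```python
def critical_density(a_ul, q_ul):
    """n_crit = A_ul / q_ul: the density at which collisional
    de-excitation matches spontaneous emission; well below it,
    radiative rates dominate and vibrational LTE fails."""
    return a_ul / q_ul

# Placeholder order-of-magnitude rates for a vibration-rotation band:
# A ~ 10 s^-1 and a vibrational quenching rate q ~ 1e-12 cm^3 s^-1.
print(f"{critical_density(10.0, 1e-12):.0e}")  # → 1e+13
```

Even generous quenching rates leave n_crit far above typical molecular cloud densities, which is why radiative excitation dominates.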
In many cases the radiative transfer effects we have been discussing change the strengths of lines, and so lead to errors in derived abundances, but are not obvious from inspection of the spectra. In a few cases, however, there are recognizable symptoms of these effects. The importance of emission in vibration-rotation lines is clearest in the spectra of expanding circumstellar shells, with IRC+10216 being a particularly good example. The bending-mode Q branches of HCN and C2H2 near 14 µm are prominent in the R=2000 ISO SWS spectrum of IRC+10216 (Cernicharo et al. 1999), with depths of ∼20% of the continuum, but the P and R branch regions of the spectrum show little evidence of line absorption or depression of the continuum due to lines. In contrast, the R=80,000 TEXES observations of Fonfría et al. (2008) show the R branch lines of HCN and C2H2 to have typical depths of ∼50%. The reason the lines are so much weaker in the ISO SWS spectrum is that they have P-Cygni profiles, with nearly equal emission and absorption components that are blended together in the R=2000 spectrum. Apparently most vibrational excitations caused by absorption of photons in these bands are followed by emission, and most emitted photons escape from the shell, perhaps after multiple absorption and emission events.
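The blending of nearly equal emission and absorption components at low resolving power can be reproduced numerically. A sketch convolving an illustrative P-Cygni profile (arbitrary line parameters, not fitted to IRC+10216) with Gaussian instrumental profiles at the two resolving powers discussed:

```python
import numpy as np

def gaussian(x, x0, sigma):
    return np.exp(-0.5 * ((x - x0) / sigma) ** 2)

# Illustrative P-Cygni profile on a velocity grid (km/s): a blueshifted
# absorption component and a nearly equal redward emission component.
v = np.linspace(-300.0, 300.0, 6001)
profile = 1.0 - 0.5 * gaussian(v, -14.0, 5.0) + 0.5 * gaussian(v, 5.0, 5.0)

def smooth_to_resolution(v, y, resolving_power):
    """Convolve with a Gaussian instrumental profile of velocity
    FWHM c/R, mimicking a spectrograph of resolving power R."""
    sigma = (2.998e5 / resolving_power) / 2.3548   # km/s
    kernel = gaussian(v, 0.0, sigma)
    kernel /= kernel.sum()
    return np.convolve(y - 1.0, kernel, mode="same") + 1.0

hi = smooth_to_resolution(v, profile, 80000)  # components stay distinct
lo = smooth_to_resolution(v, profile, 2000)   # components blend and cancel
```

At R = 80,000 the absorption retains nearly its full depth, while at R = 2000 the blended components almost cancel, leaving only a weak residual.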
The 13.7 µm ν5 band of C2H2 has also been observed toward a number of embedded high-mass young stars (Lacy et al. 1989; Evans et al. 1991; Lahuis & van Dishoeck 2000; Knez et al. 2009; Barentine & Lacy 2012). Evans et al. (1991) and Barentine & Lacy (2012) also observed absorption in the 7.5 µm ν4+ν5 band toward OMC1 IRc2 and NGC 7538 IRS 9. An energy level diagram of C2H2 with the relevant vibrational levels is shown in Figure 1.

Figure 1 caption: The ν4 and ν5 modes are the anti-symmetric and symmetric bending modes, respectively. Allowed radiative transitions are indicated with solid arrows; the ν4 transition that only occurs collisionally is indicated with a dashed arrow. Allowed radiative transitions between these levels must change the quantum number v5; v4 can change only if v5 also changes. The 85 µm ν5−ν4 transitions have not been observed. Rotational levels and level splitting are not shown. The vertical scale is in cm⁻¹.

Both bands involve absorption from the ground vibrational level, but the derived C2H2 column density from the ν4+ν5 lines is greater than that derived from the ν5 lines by a factor of 3-10 toward IRc2 and 20 toward IRS 9. Both of the effects described above may contribute to this discrepancy. The dust opacity at 13.7 µm is about twice as large as that at 7.5 µm, because of the wings of the 9.7 and 18 µm silicate features. In addition, 7.5 µm is farther off the peak of the dust thermal emission, also causing the continuum to be formed farther into the sources (where the temperature is higher) at 7.5 µm. The amount of reemission in spectral lines may also differ between the ν5 and the ν4+ν5 bands, since the ν5 state can only decay to the ground via the ν5 band, whereas the ν4+ν5 state can decay either to the ground through emission of a ν4+ν5 photon or to the ν4 level by emission of a ν4+ν5−ν4 photon.
Ammonia (NH 3 ) also shows evidence of radiative transfer effects. Barentine & Lacy (2012) observed lines in two branches of the NH 3 ν 2 band toward NGC 7538 IRS 9. Although other molecules seen toward IRS 9, including C 2 H 2 and HCN, are seen in absorption and the aQ branch of NH 3 shows P-Cygni lines or weak absorption, aP and sP-branch NH 3 lines are seen in emission. Barentine & Lacy (2012) briefly discuss a radiative transfer model to explain these observations. It assumes that the NH 3 molecules are exposed to optically thin silicate emission, which is stronger on the NH 3 R and Q branches, so causes more upward transitions in those branches than in the P branch. Since downward transitions are about equally likely in the three branches, emission dominates over absorption in the P branch, and absorption dominates over emission in the Q and R branches.
A Radiative Transfer Model
To reproduce and understand the radiative transfer and radiative excitation effects in molecular vibration-rotation spectra, I constructed a computer model which includes collisional and radiative excitation and deexcitation of molecules and calculates the outgoing spectrum.
The model assumes a parameterized distribution and composition of gas and dust in a shell around a blackbody luminosity source. The input parameters can be adjusted through a grid search procedure to fit the continuum and line spectra of individual sources. For the calculations in this paper we assume a spherically symmetric expanding shell. The model first calculates the dust temperature and continuum radiation field by alternately solving the equation of radiative transfer and the equation of thermal equilibrium for the dust. The radiative transfer calculation involves following a grid of rays through the shell. Typically, a 64×64 ray grid is used at the outer edge of the model. After following the rays halfway in to the star, the inner 32×32 rays are split into 64×64 rays to increase the resolution. This is done up to 11 times, and then the rays are rejoined going out from the star. Rays that overlap the star are replaced by an appropriate fraction of the stellar spectrum before continuing out through the shell. As the rays are followed through the shell, the ray spectrum is added to a mean intensity spectrum stored for each grid cell, which is used to calculate the dust heating. The dust temperatures are calculated for cells on a spherical coordinate grid. For each grain type and size the equilibrium temperature is calculated that balances radiative heating and cooling. The dust source function is then recalculated and the radiative transfer calculation repeated. Typically 16 iterations of the radiative transfer and thermal equilibrium calculations are sufficient for even very optically thick dust shell models to converge. The gas temperature is not calculated separately, but is assumed to equal the dust temperature. For the models described in this paper, the dust shell and stellar parameters were chosen to roughly reproduce the observed SED of NGC 7538 IRS 9.
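The thermal-equilibrium step of this iteration can be illustrated with a toy, optically thin gray-dust version (a sketch under simplifying assumptions, not the paper's model, which includes non-gray opacities and optical-depth effects):

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.495978707e11      # astronomical unit, m

def gray_dust_temperature(L_star, r):
    """Equilibrium temperature of a gray grain at distance r (m) from a
    point source of luminosity L_star (W): the absorbed flux pi a^2 F is
    balanced by blackbody reemission 4 pi a^2 sigma T^4."""
    flux = L_star / (4.0 * math.pi * r**2)
    return (flux / (4.0 * SIGMA)) ** 0.25

L_star = 1e4 * L_SUN     # 10^4 L_sun central source, as in the model
for r_au in (4.0, 400.0, 1.6e4):
    T = gray_dust_temperature(L_star, r_au * AU)
    print(f"r = {r_au:8.0f} AU -> T_dust ~ {T:5.0f} K")
```

This gives roughly 1400 K at the inner 4 AU edge, falling as r^(-1/2); the higher temperatures of the full model (1800 K at the inner edge, 40 K at the outer edge) reflect radiation trapping in the optically thick shell, which the optically thin sketch omits.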
After solving for the shell thermal structure, the program solves for the molecular level populations and the outgoing line spectrum. This is done by alternately solving the equation of radiative transfer in molecular lines and the equation of statistical equilibrium involving collisional and radiative rates in and out of vibrationally excited energy levels. Line strengths are calculated from band strengths and Hönl-London factors, as in Evans et al. (1991), and are consistent with those in the HITRAN database. The calculation is simplified for linear and symmetric top molecules by the selection rules that cause each vibrationally excited rotational level to be coupled radiatively to only two or three rotational levels in the ground vibrational state. It is further simplified by assuming that the rotational levels in the ground vibrational state are in LTE at the gas temperature. This and other assumptions are discussed in §5. We also assume that collisional transitions between vibrational states do not change the rotational state. This is not actually valid, but since collisions must drive populations toward LTE, and the ground vibrational state is assumed to be in LTE, on average collisions should not change the rotational excitation of molecules. In addition, vibration-changing collisional transitions are relatively unimportant compared to radiative transitions at the densities found in the model. With these simplifications, the excited vibration-rotation level populations and the vibration-rotation lines between them and the ground vibrational state can be calculated for one excited vibration-rotation level at a time, neglecting any coupling between rotational levels of the excited vibrational state.
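The statistical-equilibrium step for a single excited level can be sketched with a two-level system (the rate values below are illustrative, not the model's). One useful check: when the mean intensity equals a blackbody at the gas temperature, the LTE population ratio is recovered exactly, whatever the collision rate.

```python
import math

H, K, C = 6.62607015e-34, 1.380649e-23, 2.99792458e8

def planck_nu(nu, T):
    """Planck specific intensity B_nu(T)."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))

def excited_fraction(nu, T_rad, T_gas, A_ul, C_ul, g_u=1, g_l=1):
    """Two-level statistical equilibrium: n_u/n_l from balancing
    radiative (A, B*J) and collisional (C) rates in and out of the
    upper level, with collisional detailed balance at T_gas."""
    B_ul = A_ul * C**2 / (2 * H * nu**3)        # stimulated emission coeff.
    B_lu = B_ul * g_u / g_l                     # absorption coefficient
    J = planck_nu(nu, T_rad)                    # mean intensity ~ B_nu(T_rad)
    C_lu = C_ul * (g_u / g_l) * math.exp(-H * nu / (K * T_gas))
    return (B_lu * J + C_lu) / (A_ul + B_ul * J + C_ul)

nu = C / 13.7e-6    # frequency of the C2H2 nu5 band
ratio = excited_fraction(nu, 150.0, 150.0, A_ul=6.0, C_ul=2e-3)
lte = math.exp(-H * nu / (K * 150.0))
print(ratio, lte)   # identical when T_rad = T_gas
```

When T_rad differs from T_gas, the populations interpolate between the two temperatures according to the relative sizes of the radiative and collisional rates.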
As with the dust calculation, the code follows a grid of rays through the shell, in this case calculating the spectrum for wavelengths close to the relevant vibration-rotation lines. At each step through the shell the molecular line profile is Doppler shifted by the component of any assumed systematic motion along the ray direction. As the rays propagate through the shell the Dopplershifted spectrum is added to a sum for the rays passing through each spherical coordinate cell, so that the mean intensity, averaged over each spectral line, can be calculated for each cell. With the mean intensities, the radiative excitation and deexcitation rates are calculated, and with those and the collisional rates, the vibration-rotation level populations are calculated. The radiative transfer and level population calculations are then repeated until they converge. However, the fact that the molecular lines are typically more optically thick than the dust continuum results in slower convergence than for the calculation of the thermal structure. This problem was overcome by comparing the change in level populations in each iteration with that in the previous iteration, and amplifying the correction if iterations are well correlated. With the accelerated conver-gence scheme, the level populations almost always converge within 16 iterations.
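The acceleration scheme — amplifying the correction when successive iterations are well correlated — is essentially geometric extrapolation of a fixed-point iteration. A scalar sketch (not the model's vectorized level-population update):

```python
def solve_fixed_point(f, x0, tol=1e-10, max_iter=100):
    """Fixed-point iteration x <- f(x). If successive corrections are
    well correlated (d_n ~ r * d_{n-1}), the remaining geometric series
    is summed by amplifying the last correction by r / (1 - r)."""
    x, d_prev = x0, None
    for _ in range(max_iter):
        x_new = f(x)
        d = x_new - x                       # this iteration's correction
        if d_prev is not None and abs(d_prev) > 0:
            r = d / d_prev                  # correlation of corrections
            if 0 < r < 0.99:                # well correlated -> extrapolate
                x_new += d * r / (1.0 - r)
        if abs(d) < tol:
            return x_new
        x, d_prev = x_new, d
    return x

# Slowly converging linear map x <- 0.95*x + 1 (fixed point x = 20):
x = solve_fixed_point(lambda t: 0.95 * t + 1.0, 0.0)
print(x)
```

Without acceleration this map needs hundreds of iterations to reach the fixed point; with the extrapolation it converges in a handful, mirroring the 16-iteration behavior quoted in the text.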
Results of Modeling
Models were run with parameters meant to roughly reproduce the IRS 9 observations. The central source was taken to be a 12,000 K, 10 4 L ⊙ blackbody. The gas and dust shell had a constant H 2 +He density of 2 × 10 7 cm −3 from 4 to 400 AU, then fell with an r −2 density profile out to 1.6 × 10 4 AU. Its mass was 4.2 M ⊙ , and its radial column density was 1.8 × 10 23 cm −2 . The calculated dust temperature was 1800 K at the inner edge of the shell, falling to 160 K at 400 AU and 40 K at the outer edge of the shell. A constant expansion speed of 2 km s −1 and a turbulent velocity dispersion of 3 km s −1 were assumed. Molecular line profiles were calculated for the C 2 H 2 ν 5 and ν 4 +ν 5 bands, with C 2 H 2 /H 2 = 1 × 10 −7 , the NH 3 ν 2 band, with NH 3 /H 2 = 4 × 10 −6 , and the CO v=1-0 band, with 13 CO/H 2 = 1 × 10 −6 and C 18 O/H 2 = 2 × 10 −7 . These parameters are not meant to be fitted values, but they produced line depths similar to those observed.
Several of the calculated NH 3 line profiles are shown in Figure 2. In each panel, the P, Q, and R-branch lines between a given J,K state in the excited vibrational level and the ground vibrational level are shown. Note that these are normally referred to as the P(J u +1,K), Q(J u ,K), and R(J u -1,K) lines, being labeled by their rotational levels in the lower vibrational state. For this 'umbrella' vibrational mode, Q-branch lines have the largest optical depths for K=J, and P and R-branch lines are strongest for K=0. The nuclear statistical weights favor lines with K=3n by a factor of 2. In all cases, the model P-branch lines are predominantly in emission, the Q-branch lines show weak absorption, and the R-branch lines are more strongly in absorption. This is in agreement with the observed P and Q-branch lines toward IRS 9. The R-branch lines have not been observed, but are predicted to be seen in absorption, with depths of 1-2%.
[Figure 2 caption: In each panel, the P, Q, and R-branch lines between a given J,K state in the excited vibrational level and the ground are shown (top to bottom). Spectra are normalized, with the Q and R-branch lines offset downward. Q-branch lines are strongest for K u = J u and are absent for K u = 0. R-branch lines are absent for K u = J u .]

Several calculated C 2 H 2 ν 5 lines are shown in Figure 3, and ν 4 +ν 5 lines are shown in Figure 4. For C 2 H 2 , rotational states in the ground vibrational level with J l odd (ortho states) have nuclear statistical weights of 3; those with J l even (para states) have nuclear statistical weights of 1. Line-strength factors for P, Q, and R-branch lines are proportional to J u , 2J u +1, and J u +1, respectively. Combining these factors for the lines shown, all of which have J u even, so J l odd for the P and R-branch lines, the P and R-branch lines have similar line-strength factors, whereas Q-branch lines are weaker on average by a factor of 2/3. In addition, for absorption lines the line-strength factors must be multiplied by the Boltzmann population factors of the lower rotational states.
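The factor of 2/3 can be checked directly by combining the Hönl-London factors with the nuclear statistical weights:

```python
def c2h2_weighted_strengths(J_u):
    """Honl-London line-strength factors (P: J_u, Q: 2J_u+1, R: J_u+1)
    combined with C2H2 nuclear statistical weights (3 for odd J_l,
    1 for even J_l), for even J_u as in the lines shown."""
    assert J_u % 2 == 0
    w = lambda J_l: 3 if J_l % 2 else 1
    P = J_u * w(J_u + 1)           # J_l = J_u + 1 (odd -> weight 3)
    Q = (2 * J_u + 1) * w(J_u)     # J_l = J_u     (even -> weight 1)
    R = (J_u + 1) * w(J_u - 1)     # J_l = J_u - 1 (odd -> weight 3)
    return P, Q, R

for J_u in (4, 10, 20):
    P, Q, R = c2h2_weighted_strengths(J_u)
    print(J_u, P, R, Q / ((P + R) / 2))   # last column is exactly 2/3
```

Since P + R = 3(2J_u + 1) while Q = (2J_u + 1), the ratio of the Q factor to the mean of the P and R factors is 2/3 for every even J_u.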
In agreement with the IRS 9 observations, the ν 5 R-branch lines have depths only about twice those of the ν 4 +ν 5 lines, even though the ν 5 band strength is nearly 10 times greater. The P-branch line depths are similar in the two bands. A new prediction is that the ν 5 P-branch lines should be substantially weaker than the R-branch lines. At higher abundances the P-branch lines go into emission and the Q-branch lines show P-Cygni profiles. The P-branch lines are unobservable from the ground, due to telluric CO 2 absorption. The Q-branch lines are difficult to observe, due to telluric absorption and blending of the closely spaced lines, but do show some evidence of P-Cygni emission. The explanation of the strengths and profiles of these various lines is discussed below.
Calculated CO lines are shown in Figure 5. CO has no Q branch; the third line shown in each panel is for an average of the P and R-branch line strengths, but with emission by molecules in the v=1 state turned off in the program, to show what would be observed if the pure-absorption model used in the past were valid. Both P and R-branch lines of CO have P-Cygni profiles, with the relative strength of the emission component being greater in the P-branch lines. The observed lines of CO from IRS 9 also have P-Cygni profiles, but are considerably broader than the lines of other molecules. This is probably a result of a high velocity outflow with a different chemistry from the gas that dominates the spectra of other molecules. The high optical depths of the 12 CO and 13 CO lines may also contribute to the prominence of line wings that are not apparent in spectra of other molecules. C 17 O and C 18 O lines are blended with 12 CO and 13 CO lines in the existing observations; further observations would be desirable.

[Figure 4 caption: Only P-branch (top) and R-branch (bottom) lines occur for this band; the middle spectrum is for the ν 4 + ν 5 − ν 4 Q-branch line, which accounts for much of the radiative deexcitation of the ν 4 +ν 5 state. Since it is scaled by the continuum flux, the ν 4 + ν 5 − ν 4 line appears weaker than the ν 4 +ν 5 lines.]
Explanation and Discussion
The model spectra are generally in at least qualitative agreement with the observations of IRS 9. In particular, they reproduce the emission seen in NH 3 P-branch lines and the similar depths of C 2 H 2 ν 5 and ν 4 +ν 5 lines. However, they make additional predictions not anticipated by the simple radiative transfer models of Barentine & Lacy (2012). Notably, they predict that C 2 H 2 should show differences between its P, Q, and R-branch lines like those seen in NH 3 . Somewhat less prominent effects are predicted for CO. The explanation given by Barentine & Lacy (2012) for NH 3 was based on the brighter continuum radiation at the Q and R-branch wavelengths. But this should not be the case for C 2 H 2 , for which the dust opacity is similar at the wavelengths of its different branches.
To help understand the radiative transfer effects responsible for the line profiles of different lines, several physical effects were changed in the model. First, a model was run assuming LTE populations of the excited vibration-rotation levels. Perhaps surprisingly, the model results did not change greatly; P-branch lines still favored emission relative to R-branch lines. Apparently the non-LTE populations were not very different from LTE populations. The explanation for this may be that the continuum radiation field seen by the molecules is not very different from a blackbody field at the dust temperature, and the gas temperature was assumed to be equal to the dust temperature. As a result, populations controlled by radiation were not very different from populations controlled by collisions. This would not be the case if the molecular gas were more distant from the warm dust responsible for the continuum, so the conclusion that non-LTE effects were relatively small for the IRS 9 model may not be valid in general. A second modification to the physics of the model was to set the rotational constant (B e ) of the molecules to zero, so that the P, Q, and R-branch lines all fell at the same wavelength, and their lower state energies were all the same. The differences between P, Q, and R-branch lines then disappeared. Finally, a model was run with the gas and dust temperature held constant throughout the shell. Again, the differences between P, Q, and R-branch lines largely disappeared. The conclusion from these models with modified physics was that the difference between the lower state energies, and the associated Boltzmann factors, was the primary cause of the different line profiles.
With the results of the modified models, the explanation for the differences between the P, Q, and R-branch lines can be understood. We will consider P and R-branch lines, which have similar behaviors for all molecules. (The strengths of Q-branch lines depend on the molecular symmetry and vibrational mode.) The P and R-branch lines from a given rotational state of the upper vibrational level have nearly equal Einstein A coefficients (depending weakly on J u ), so nearly equal branching ratios on emission. However, the absorption in these lines depends on the radiation field and the populations of the rotational states of the lower vibrational level. Especially for high values of J, these populations can differ substantially, because of the different energy levels of J l = J u +1 and J l = J u − 1 for the P and R-branch lines, respectively. As a result, the two lines have similar emission strengths, but the R-branch line has greater absorption. If most upward transitions are followed by a downward radiative transition, emission and absorption nearly cancel, but with emission dominating in the P-branch line and absorption dominating in the R-branch line. In the case of NH 3 the greater radiation field at the wavelengths of the R-branch lines enhances the effect, but the effect is present even for C 2 H 2 and CO in models with a temperature gradient through the shell.
The disappearance of the difference between P and R-branch lines in the isothermal model can be explained by the fact that the continuum radiation field at the R-branch wavelengths is weaker than that at the P-branch wavelengths for a given J u by the same Boltzmann factor that the lower level population is greater, resulting in approximately the same number of upward transitions in the two lines. The temperature gradient in the dust shell model caused the color temperature of the continuum radiation field to be higher than the gas temperature where the lines are formed. As a result, heat must flow from the radiation field into the gas. This happens by net absorption of R-branch photons and net emission of P-branch photons, pumping the rotational populations.
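This cancellation can be checked numerically with illustrative constants (roughly those of the C 2 H 2 ν 5 band; the values below are for illustration, not fitted parameters): the larger Boltzmann population of the R-branch lower level is offset by the weaker continuum at the higher R-branch frequency, so the ratio of R to P upward rates is near unity when T_rad = T_gas and grows when the radiation is hotter than the gas.

```python
import math

H, KB, C = 6.62607015e-34, 1.380649e-23, 2.99792458e8

def planck(nu, T):
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * T))

# Illustrative linear-molecule constants, roughly the C2H2 nu5 band:
nu0_cm, B_cm, J_u = 730.0, 1.17, 10
E_rot = lambda J: B_cm * J * (J + 1)      # rotational energy, cm^-1
cm_to_hz = lambda w: w * 100.0 * C

def upward_rate(branch, T_rad, T_gas):
    """Relative absorption rate n_l * B_nu(T_rad) for the P or R line
    sharing the upper rotational level J_u."""
    J_l = J_u + 1 if branch == "P" else J_u - 1
    nu = cm_to_hz(nu0_cm + E_rot(J_u) - E_rot(J_l))   # line frequency
    boltz = math.exp(-E_rot(J_l) * 100.0 * H * C / (KB * T_gas))
    return boltz * planck(nu, T_rad)

T_gas = 150.0
iso = upward_rate("R", T_gas, T_gas) / upward_rate("P", T_gas, T_gas)
hot = upward_rate("R", 300.0, T_gas) / upward_rate("P", 300.0, T_gas)
print(iso, hot)
```

With T_rad = T_gas the exponentials cancel exactly, leaving only the slowly varying nu^3 factor (a ratio of about 1.2 for these constants); raising T_rad to twice T_gas visibly tips the balance toward net R-branch absorption.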
It would be very desirable to estimate the magnitude of the error that is made using the isothermal absorbing slab model. For the models shown here, R-branch lines are typically a factor ∼ 2 weaker when reemission is included, but P-branch lines may be dominated by emission. Unfortunately, the extent to which emission cancels absorption depends on the probability that an emitted photon escapes, rather than being absorbed by dust, and the asymmetry of the source, which determines how the absorption along our line of sight compares to that along other lines of sight. An understanding of the geometry of the source being observed would be needed to make a realistic model including reemission.
Several simplifications and approximations made in the model should be noted. First, the gas temperature was assumed to be equal to the dust temperature. This assumption may be valid through much of the modeled shell, even if the density is too low for collisions with dust grains to dominate the heating and cooling of the gas, since the radiation field temperature is close to the dust temperature through most of the shell. However, close to the exciting star ultraviolet radiation may heat the gas to a higher temperature than the dust, via the photoelectric effect. The optical depth of the dust in the model is high enough to hide this region, but in the case of less deeply embedded objects, where the gas near the exciting source can be observed, departure of the gas temperature from the dust temperature could be important. To test this effect, a less optically thick model was run that included photoelectric heating of the gas and allowed the gas and dust temperature to differ. In this model, the gas temperature was greater than the infrared radiation temperature, so energy should flow from the gas into the radiation. In fact, P-branch lines then showed greater absorption than R-branch lines, as is expected by this thermodynamic argument.
Another assumption was that the rotational temperature of the molecules was equal to the gas temperature. This might be justified by the fact that the density in most of the shell is greater than the critical density for rotational thermalization, especially for symmetric molecules, like C 2 H 2 and CH 4 , which have no allowed rotational transitions. However, the infrared pumping that led to differing P and R-branch line profiles may modify the populations. The importance of infrared pumping can be estimated by comparing the rate of collisional transitions between rotational levels to the rate of infrared transitions between the ground and excited vibrational levels. The collisional transition rate is given by the product of the collisional rate coefficient, which is typically ∼ 10 −10 , and the gas density, giving a rate ∼ 2 × 10 −3 s −1 . The radiative rate is given by the product of the vibrational A coefficient and the fractional population of the excited vibrational state. For C 2 H 2 , with an A coefficient of 6 s −1 for the sum of the P and R-branch lines to a given rotational state, infrared pumping should dominate for vibrational temperatures > 130 K, or throughout much of the model shell. This calculation may overestimate the importance of infrared pumping since absorption followed by emission of a vibration-rotation photon can at most change the rotational quantum number by 2, whereas large ∆J can occur in collisional transitions. But if the rotational populations are controlled by infrared pumping, the rotational temperature should equal the infrared radiation color temperature, and the difference between P and R-branch lines should disappear. The resolution of this problem may be that the density of the gas around IRS 9 is enough greater than that assumed in the model so that collisions, rather than infrared pumping, control the rotational level populations.
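The quoted threshold follows from equating the two rates; with the numbers given in the text:

```python
import math

def vibrational_temperature_threshold(A_sum, k_coll, n_gas, E_cm):
    """Vibrational temperature above which infrared pumping
    (rate ~ A * exp(-E hc / k T_vib)) exceeds the collisional rate
    (rate ~ k_coll * n_gas). E_cm is the band energy in cm^-1."""
    HC_OVER_K = 1.4388                 # hc/k in cm K
    coll_rate = k_coll * n_gas         # ~ 2e-3 s^-1 for the text's numbers
    # Solve A * exp(-E * hc / (k T)) = coll_rate for T:
    return E_cm * HC_OVER_K / math.log(A_sum / coll_rate)

# k ~ 1e-10 cm^3 s^-1, n = 2e7 cm^-3, A(P)+A(R) ~ 6 s^-1, nu5 at ~730 cm^-1
T = vibrational_temperature_threshold(6.0, 1e-10, 2e7, 730.0)
print(f"IR pumping dominates above T_vib ~ {T:.0f} K")
```

This reproduces the ~130 K threshold quoted above; the band energy of 730 cm^-1 is an assumed value appropriate to the C 2 H 2 ν 5 mode.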
A similar effect is the pumping of vibrational levels by radiative and collisional transitions to other vibrational states. For example, the ν 5 state of C 2 H 2 could be populated by collisional or radiative transitions from the ν 4 state, which itself has several possible methods of excitation. To include the various neglected ways in which the excited vibration-rotation states can be populated would require the simultaneous calculation of many vibration-rotation states. Unfortunately, that is beyond the capability of the model used here.
Conclusions
The first conclusion of this work is that interpretation of vibration-rotation spectra based on the isothermal absorbing slab model can be very misleading. In general, column densities derived with this model can be expected to be underestimates, due to the neglect of reemission by molecules following absorption, but the magnitude of the error depends on the source geometry and the lines observed.
The second conclusion is more positive. Although collisions are not normally sufficiently frequent to maintain LTE at the gas kinetic temperature, the radiation field temperature may be similar to the gas temperature, and so molecular level populations may nevertheless be close to LTE populations. This was the case for the model considered, and should generally be the case if the dust optical depth is large. As a result, non-LTE effects may not be large. Effects of radiative transfer through gas and dust with a temperature gradient are likely to be more important. Fortunately, these effects are easier to include in models used to interpret observations, and they should be included.
I thank the anonymous referee for several interesting comments, in particular pointing out the possibility of infrared pumping of the rotational populations.
Statins Therapy Can Reduce the Risk of Atrial Fibrillation in Patients with Acute Coronary Syndrome: A Meta-Analysis
Background: It is controversial whether statins therapy is beneficial against the occurrence of atrial fibrillation (AF) in acute coronary syndrome (ACS). To clarify this problem, we performed a meta-analysis of the currently published literature. Methods: The electronic databases were searched to obtain relevant trials which met the inclusion criteria through October 2011. Two authors independently read the trials and extracted the related information from the included studies. Either fixed-effects models or random-effects models were assumed to calculate the overall combined risk estimates according to the I2 statistic. Sensitivity analysis was conducted by omitting one study in each turn, and publication bias was evaluated using Begg's and Egger's tests. Results: Six studies met the inclusion criteria. Across the six studies, 161305 patients were included in this meta-analysis; 77920 (48.31%) patients had taken statins therapy and 83385 (51.69%) patients had taken non-statins therapy. Four studies had investigated the effect of statins therapy on the occurrence of new-onset AF in ACS patients; the other two had described the association between statins therapy and occurrence of AF in ACS patients with AF at baseline. The occurrence of AF was reduced by 35% in the statins therapy group compared to that in the non-statins group (95% confidence interval: 0.55-0.77, P<0.0001), and the effect of statins therapy seemed more beneficial for new-onset AF (RR=0.59, 95%CI: 0.48-0.73, p=0.096) than for secondary prevention of AF (RR=0.70, 95%CI: 0.43-1.14, p=0.085). There was no publication bias according to the Begg's and Egger's tests (Begg, p=0.71; Egger, p=0.73). Conclusion: Statins therapy could reduce the risk of atrial fibrillation in patients with ACS.
Introduction
Atrial fibrillation (AF) is a common arrhythmia associated with acute coronary syndrome (ACS), with estimates of up to 20% in acute myocardial infarction (AMI) patients [1][2][3][4]. ACS patients who develop AF may have a poor prognosis. Previous studies have shown that ACS patients with AF may have longer in-hospital stays, higher rates of stroke, and increased short and long-term mortality [2,[4][5][6][7]. Therefore, the treatment or prevention of AF may improve the prognosis of ACS patients. Accumulating evidence has demonstrated that statins, hydroxymethylglutaryl-CoA reductase inhibitors, can reduce the incidence of AF in coronary artery disease (CAD) patients [8][9][10]; nevertheless, other studies showed that statins therapy was of nearly no benefit in preventing AF in patients with acute coronary disease [11], isolated coronary artery bypass grafting [12], or undergoing cardiac surgery [13]. In order to elucidate whether statins are beneficial to ACS patients complicated with AF, we performed a meta-analysis to clarify this problem by analyzing the currently published literature.
Retrieval strategy
In order to obtain relevant trials, we searched EMbase, PubMed, Medline, and ISI Web of Science for all cohort studies or randomized controlled trials through October 2011, using the following terms: statins, hydroxymethylglutaryl-CoA reductase inhibitors, HMG-CoA reductase inhibitors, lipid lowering therapy, coronary disease, coronary heart disease, coronary artery disease, acute coronary syndrome, ACS, atrial fibrillation, AF. The retrieval strategies are presented in table 1, with restriction to English language studies only. We also manually searched for relevant studies in related review articles and meta-analyses. With these key words, 1634 abstracts were retrieved. Full texts were reviewed when the studies had possible relevance to this study. For unavailable data, we did not contact the authors.
Study selection
We first excluded duplicated studies using Endnote software, then screened the studies according to their titles or abstracts. The second screening was performed based on the full-text reviews. We looked for studies that met all of the following criteria: (1) the study design was a cohort study or randomized controlled trial; (2) the study subjects were patients with acute coronary syndrome; (3) patients with statins therapy were compared to those with non-statins therapy; (4) the clinical outcome was occurrence or new-onset atrial fibrillation; (5) the data for extraction were available in the original article. Articles were excluded if they met one of the following criteria: (1) acute coronary syndrome patients with intensive or high dose statins therapy were compared to those with traditional statins therapy; (2) the patients were undergoing coronary artery bypass grafting or had coronary artery disease; (3) reviews. If one trial had two or more publications with different durations, we chose the publication with the longest duration. Two authors (Xue Zhou and Jia Yuan) independently read the titles, abstracts, and/or full texts to determine whether they satisfied the inclusion/exclusion criteria (figure 1). Disagreement or uncertainty was resolved by discussion or consensus with a third reviewer.
Data extraction
The key exposure variable was the presence (statins therapy group) or absence of statin use (non-statins therapy group) at baseline.
The quality of the non-randomized studies was assessed by using the Newcastle-Ottawa Scale (NOS) with slight modification [14]. The quality of the studies was evaluated by examining three items: patient selection, comparability and assessment of outcome ( Table 2).
Data extraction was performed independently, more than twice, by two authors (Xue Zhou and Jianlin Du) in order to obtain exact information. The following information was extracted from each retrieved article: first author, year, study population, women, design, type of statin, endpoint type of AF, duration, total patients (patients in statin group/non-statin group), the total incidence of AF (patients in statin group/non-statin group), diabetes mellitus, heart failure, renal insufficiency, hyperlipidemia, hypertension, ACEI or ARB, beta-blocker (Table 3). Disagreement or uncertainty was resolved by consensus.
Statistical analysis
Relative ratio (RR) was used to measure the effect size. The heterogeneity test was performed using the I 2 statistic, which is a quantitative measure of inconsistency across studies [15]. Pooled RRs were calculated for the statins therapy group versus the non-statins therapy group, using either fixed-effects models or random-effects models according to the I 2 statistic: if I 2 was less than 50%, we chose fixed-effects models; otherwise random-effects models were used. The 95% confidence interval (CI) was also calculated. Sensitivity analysis was conducted by omitting one study in each turn in order to investigate the influence of a single study on the overall risk estimate and test the stability of the results. Publication bias was evaluated using Begg's and Egger's tests. All statistical analyses were performed using STATA version 11.0 (StataCorp LP, College Station, Texas). A p value<0.05 was considered statistically significant, except where otherwise specified.

[Table 2: Newcastle-Ottawa Scale checklist — Selection: 1. The exposed cohort was truly or somewhat representative of the average described in the community? (If yes, one star) 2. The non-exposed cohort was drawn from the same community as the exposed cohort? (If yes, one star) 3. The exposure was ascertained through secure record or structured interview? (If yes, one star) 4. The outcome of interest was not present at start of study? (If yes, one star)]

[Table 3: The characteristics of the included studies in the meta-analysis — Ramani [16], Vedre [17], Dachin [18], Dziewierz [19], Ozaydin; remainder of table truncated in extraction.]
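The pooling procedure described here (inverse-variance weights on log RR, the I 2 statistic, and a DerSimonian-Laird random-effects estimate) can be sketched as follows; the study counts below are hypothetical, not the values from Table 3:

```python
import math

def pool_log_rr(events_t, n_t, events_c, n_c):
    """Inverse-variance pooling of log relative risks, with the
    DerSimonian-Laird random-effects estimate and the I^2 statistic."""
    y, v = [], []
    for a, nt, c, nc in zip(events_t, n_t, events_c, n_c):
        y.append(math.log((a / nt) / (c / nc)))   # log RR per study
        v.append(1/a - 1/nt + 1/c - 1/nc)         # variance of log RR
    w = [1 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - (k - 1)) /
               (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (vi + tau2) for vi in v]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)),
            i2)

# Hypothetical study counts (AF events / total, statin vs control arms):
rr, ci, i2 = pool_log_rr([30, 120, 15], [500, 2000, 300],
                         [50, 160, 25], [480, 1900, 310])
print(f"pooled RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```

When tau^2 is zero the random-effects estimate reduces to the fixed-effects one, matching the rule in the text of switching models on the I 2 value.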
Results
A meta-analysis was conducted with data derived from six cohort studies applying statins in patients with ACS or suspected ACS for preventing atrial fibrillation. The endpoint was occurrence or new-onset AF. Among the six references, five were available as full texts and one as an abstract only. Four studies had investigated the effect of statins therapy on the occurrence of new-onset AF in ACS patients; the other two had described the association between statins therapy and occurrence of AF in ACS patients with AF at baseline. Of the six eligible trials, 161305 patients were included in this meta-analysis: 77920 (48.31%) patients had taken statins therapy and 83385 (51.69%) patients had taken non-statins therapy. All six trials reported AF outcomes. Incidence or recurrence of AF occurred in 16176 patients: 7070 of 77920 patients treated with statins versus 9106 of 83385 control subjects. The characteristics of the included studies are presented in table 3 and the quality assessment was performed according to the Newcastle-Ottawa quality assessment scale (Table 4). The occurrence of AF was reduced by 35% in the statins therapy group compared to that in the non-statins group (95% confidence interval: 0.55-0.77, P<0.0001) with high heterogeneity (I 2 =87.9%, p<0.01); therefore we assumed random-effects models (Fig 2). After we omitted the study available only as an abstract, the remaining studies showed similar results (OR 0.58, 95%CI: 0.42-0.80; p=0.001), with substantial evidence of heterogeneity (I 2 =79.6%) (Fig 3). The effect of statins therapy seemed more beneficial for new-onset AF (RR=0.59, 95%CI: 0.48-0.73, p=0.096) than for secondary prevention of AF (RR=0.70, 95%CI: 0.43-1.14, p=0.085) (Fig 4).
To explore potential sources of heterogeneity and test the stability of the results, we performed sensitivity analysis by excluding each single study in turn; the result did not materially alter the overall combined RR, which ranged from 0.58 (95% CI: 0.42 to 0.80; p=0.001) to 0.68 (95% CI: 0.57 to 0.81; p<0.001). Publication bias was evaluated using Begg's and Egger's tests, and no publication bias was found among the included studies (Begg, p=0.71; Egger, p=0.73).
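The leave-one-out procedure can be illustrated with a minimal sketch (fixed-effect weights, synthetic inputs rather than the study data):

```python
import math

def leave_one_out(log_rrs, ses):
    """Re-pool the log relative risks after omitting each study in turn
    (simple inverse-variance, fixed-effect weighting)."""
    results = []
    for k in range(len(log_rrs)):
        ys = [y for i, y in enumerate(log_rrs) if i != k]
        ws = [1.0 / ses[i] ** 2 for i in range(len(ses)) if i != k]
        est = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
        results.append(math.exp(est))
    return results
```

If the pooled RR stays within a narrow band across all omissions, as in the range 0.58-0.68 reported above, no single study dominates the result.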
Discussion
This meta-analysis of six cohort studies suggested that statin therapy was associated with a 35% reduction in the risk of new-onset or recurrent AF in patients with ACS compared with non-statin therapy; after we excluded the study available only as an abstract, the beneficial effect was nearly unchanged. The reduction appeared more pronounced for the prevention of new-onset AF than for the prevention of recurrent AF.
For subjects without restriction to CHD, a previous meta-analysis by Rahimi K showed no evidence that statin therapy reduces the risk of atrial fibrillation [22]. For CHD patients, the evidence on statin therapy for AF prevention is conflicting. The MIRACL trial found that statin use conferred almost no benefit for preventing AF in patients with acute coronary disease (OR 0.97, 95% CI: 0.72-1.31) [11], and statin therapy was not associated with a decreased risk of AF in patients after coronary artery bypass grafting [23] or in patients with CHD [24], while other studies showed that statin therapy could reduce the risk of AF in patients with CHD [8][9][10]. The present meta-analysis, which demonstrated that statin therapy decreases the risk of new-onset or recurrent AF in patients with ACS, may be more credible given its large number of subjects and may provide some guidance for clinical practice. In summary, statin therapy for preventing AF shows no benefit in unrestricted populations, conflicting results in CHD patients, and a benefit in ACS patients in the present meta-analysis.
The mechanism underlying the protective effect of statins on the risk of developing AF in CHD patients is unclear. Both AF and CHD are inflammatory conditions in which myeloperoxidase (MPO) is known to play a significant role [25,26]. Moreover, patients with AF or CHD have increased C-reactive protein (CRP) levels in their blood [27,28]; the blood concentration of CRP appears to be associated with the total amount of time a patient experiences AF [29] and correlates directly with adverse effects in CHD patients. Statins can inhibit interleukin-6 (IL-6) and tumor necrosis factor α (TNF-α) production and nuclear factor kappa B (NF-κB) activation, and are anti-inflammatory in nature [30]; they may decrease the production of MPO and thereby MPO-associated fibrosis and the initiation and progression of AF and atherosclerosis. A systematic review by Oliver Adam [31] and a study reported by Reilly SN [32] showed that statins can exert antiarrhythmic effects by improving endothelial nitric oxide (NO) availability and reducing inflammation, oxidative stress, and neurohormonal activation.
Source of heterogeneity
Heterogeneity was observed in the present meta-analysis. Sensitivity analysis omitting each single study was performed to explore its potential source, but the result did not materially alter the overall combined RR, which indicates that our results are stable. Meta-regression on sample size was performed, since sample sizes in the present meta-analysis ranged from 1,000 to 89,703, but the result showed that sample size was not the source of heterogeneity (p=0.138). The baseline characteristics of subjects may contribute to heterogeneity: (1) underlying heart disease is an important factor affecting the incidence of AF, since basic cardiac disease may lead to structural changes of the heart that influence its electrophysiology; (2) sex is another important factor, since elderly women develop CAD more easily and may have a poorer prognosis than elderly men, and the proportion of women varied considerably among the included studies; (3) differences in baseline treatments could also contribute to heterogeneity.
Limitations
It is noteworthy that the results of the study by Bang et al. [21] have not been published as a full-text article to date; however, the results were similar when that study [21] was excluded from the present analysis. Substantial heterogeneity was observed in our meta-analysis, and although sensitivity analysis and meta-regression were conducted, we could not identify its exact source. Only the GRACE study reported by Vedre et al. [17] examined the effect of statin therapy on death, cardiac arrest, and ventricular fibrillation in patients with ACS; the majority of the studies included in the present meta-analysis focused on the occurrence of AF, so it remains unclear whether early statin therapy in ACS patients truly improves prognosis, and more studies are needed to clarify this. We did not study the effect of different types and/or doses of statins on AF prevention in ACS patients, because the type and/or dose of statins was unclear in some studies; therefore we could not further identify the association between statin type and/or dose and AF.
Conclusion
Statin therapy was associated with a 35% reduction in the risk of new-onset or recurrent AF in patients with ACS compared with non-statin therapy, and the beneficial effect may be more marked in the prevention of new-onset AF than of recurrent AF.
|
2016-05-04T20:20:58.661Z
|
2013-01-10T00:00:00.000
|
{
"year": 2013,
"sha1": "063e4e7676e6823e048f693127e6fe4c2568f621",
"oa_license": "CCBYNCND",
"oa_url": "http://www.medsci.org/v10p0198.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "063e4e7676e6823e048f693127e6fe4c2568f621",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
1689766
|
pes2o/s2orc
|
v3-fos-license
|
Incidental Dural Tears in lumbar decompressive surgery: Incidence, causes, treatment, results.
are investigated retrospectively for the period January 2005 – March 2009. RESULTS: The overall incidence of incidental durotomies in the investigated group is 12.66%. In the subgroups it varies depending on the specifics of the surgical procedures performed. The number of IDs is highest in the reoperative spinal surgery subgroup, followed by the subgroups of patients who sustained spinal trauma, degenerative spinal stenosis, tumors, and lumbar disc herniations. CONCLUSION: IDs should be considered a serious complication with a multitude of unwanted consequences for patients. Prevention is the best way to avoid the complications and disability that attend unwanted dural tears. Knowledge of the mechanisms and predisposing factors for this objectionable complication is of utmost importance when planning and performing spinal surgical procedures.
is likely to experience a postural headache with a combination of the following symptoms: nausea, vomiting, pain or tightness in the neck or back, dizziness, diplopia due to VI cranial nerve paresis, photophobia, tinnitus, etc. Cerebrospinal fluid (CSF) leakage following dural tears can pose potentially serious problems such as CSF fistula formation, pseudomeningocele, meningitis, arachnoiditis and epidural abscess 1,3,10,12,15 .
With the present investigation we aim to evaluate the incidence of the incidental durotomies during the different types of decompressive and reconstructive surgical procedures in the lumbar region, also to point the most common reasons for the incidental dural tears (durotomies), treatment options and its influence to the early and late outcome.
The records of 553 consecutive patients who underwent different types of posterior and posterolateral decompressive and reconstructive procedures in the lumbar region in our department within the period January 2005 – March 2009 were investigated retrospectively. In the investigated group, 237 patients (114 men and 123 women) were operated for lumbar disc herniation, 143 patients (68 men and 75 women) for degenerative lumbar spinal stenosis, 75 patients (52 men and 23 women) for traumatic lumbar vertebral fractures, 38 patients (24 men and 14 women) for spinal tumors (8 primary and 30 metastatic), and 4 (1 man and 3 women) for lumbar spondylolisthesis. (Tabl. 1) A subgroup of 56 (10.13%) patients (23 men and 33 women) were re-operated (up to 4 reoperations). Another subgroup of 111 of the 553 patients were treated with complex surgical procedures that included decompression and spinal reconstruction. In the whole investigated group, incidental durotomies (IDs) were found in 70 cases, and in 59 of them the ID was diagnosed and treated during the initial surgery. The most important signs and symptoms considered suggestive of ID were persisting or postural headache, signs of meningeal irritation, neurological deficit, and subcutaneous fluid collection or pseudomeningocele formation. MRI was used as a diagnostic tool in three patients with pseudomeningocele. (Fig. 1) One patient was diagnosed with CT myelography. The overall incidence of clinically significant undetected intraoperative IDs was 0.2%. Seven patients were reoperated: 3 for closure of a pseudomeningocele and one for CSF leak through the surgical site. If an intraoperative ID occurred and was detected by the surgeon, closure was achieved by a running suture over the localized dural defect together with an autologous free muscle or fat graft sutured over the defect.
When suturing was technically difficult or impossible, the graft was fixed with fibrin glue. At re-operations the existing CSF fistulas and pseudomeningoceles were excised and the dural defect was closed. Conservative management included continuous spinal drainage and bed rest for 4-7 days together with administration of broad-spectrum antibiotics.
The early postoperative results among the patients with IDs were evaluated on the 1 st , 6 th and 24 th month after the intervention. VAS (Visual Analog Scale) and ODI (Oswestry Disability Index) scales were used for the evaluation of 66 (93%) of the patients on the 1 st month after the intervention, 50 (71.4%) patients were evaluated on the 6 th month after the intervention and 42 (60%) patients on the 24 th month after the intervention.
Statistical Analysis
Statistical significance was evaluated using the Student-Fisher t test with a significance level of P = 0.05.
The overall incidence of IDs in the investigated group was 12.66%. In the subgroups it varied depending on the specifics of the surgical procedures performed. The biggest number of IDs was in the subgroup of reoperations, 16 (28.6%) cases, followed by the
DISCUSSION
ID and CSF leakage is an undesirable but significant complication of lumbar decompressive surgery. The introduction and development of spinal instrumentation during the last decades, together with the aggressive management of many spinal disorders, has raised the number of IDs. Wang et al. 16 reported 88 (14%) IDs out of 641 consecutive patients who underwent surgical procedures in the lumbar region. Goodkin and Laska 8 found 23 (16%) IDs out of 146 cases and reported that all 23 patients had residual complaints; analyzing their results, these authors conclude that ID is a serious complication in spinal surgery. Investigating postoperative complications in a large cohort of 18,122 patients who underwent different spinal procedures, Deyo et al. 4 report that the ID rate was lowest among young patients and microdiscectomy procedures, and highest among elderly patients and re-interventions. Similar results are reported by Morgan-Hough et al. 13, who found 29 (5.5%) IDs out of 531 primary interventions (3 of them with pseudomeningocele) and 14.3% IDs among patients with re-interventions. In our series of 553 lumbar decompressive interventions, IDs were detected in 70 (12.66%) cases, 3 of them with pseudomeningocele. Our investigation showed the highest rate of IDs among patients with re-interventions (28.6%) and spinal trauma (20%), while the lowest rate was found among patients who underwent microdiscectomy (8%) and spinal reconstructive interventions (9.9%). (Tabl. 2)
IDs can be detected during the initial surgery or during the postoperative period, based on clinical signs and symptoms suggesting CSF leakage, or with an MRI study 9 . Usually IDs followed by CSF leaks are produced by the surgeon directly tearing the dura while manipulating the dural sac or nerve roots. With respect to unwanted dural tears, manipulations of the dura and nerve roots are particularly dangerous in patients with advanced degenerative spinal stenosis and in re-operated patients. As the area of the dural defect is exposed to the surgeon, the tear can be repaired immediately. Leaving small sharp bone particles behind during surgery is another mechanism that can produce small dural tears that may be left unattended during surgery, especially if the arachnoid membrane is intact and there is no CSF leak. These small dural tears can be converted to open ones (with the arachnoid membrane opened and CSF leakage) by rapidly increased intradural pressure during recovery from anesthesia, especially if it is fast and violent.
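As a quick arithmetic check, the reported subgroup rates follow directly from the counts given in the text:

```python
# Counts stated in the text: 70 IDs in 553 procedures overall,
# and 16 IDs in the 56-patient reoperation subgroup.
counts = {"all procedures": (70, 553), "reoperations": (16, 56)}
rates = {name: round(100 * ids / n, 2) for name, (ids, n) in counts.items()}
print(rates)
```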
CSF leaks after IDs are most commonly detected during the initial surgical procedure. In these cases they are immediately sealed with suture, fibrin glue, autologous muscle or fascial graft, heterologous dural graft, etc. 1,2,6,15 . Occasionally an ID remains undetected and unattended by the surgeon and is discovered after surgery. If a defect goes undetected or is not properly closed, the patient is likely to experience a postural headache, nausea, vomiting, pain or tightness in the neck or back, dizziness, diplopia due to VI cranial nerve paresis, photophobia, tinnitus, etc. Furthermore, cerebrospinal fluid (CSF) leakage following dural tears can pose potentially serious complications such as CSF fistula formation, pseudomeningocele, meningitis, arachnoiditis and epidural abscess 1,3,4,7,10,11,13,15 . Though rarely, complicated CSF leaks can be lethal. 4 Many authors describe different operative and non-operative methods for the treatment of an unattended ID detected after surgery; unfortunately, there are no comparative randomized clinical trials demonstrating the advantages of one approach over another. 3,15 Some authors prefer immediate reoperation once CSF leakage is detected, while others initially start with conservative management. A widely used conservative method for treating CSF leakage is spinal drainage and bed rest for 4-7 days. Another method is the "blood patch": injection of 10-20 ml of autologous blood into the epidural space at the site of the dural puncture 1,10,15 . All authors, however, consider the prevention of IDs a matter of utmost importance when planning and performing spinal surgical procedures.
Many studies analyze early and late postoperative results among patients with IDs. Several authors report that patients have no residual complaints attributable to IDs if they are detected and closed during the initial surgery 3,5,11,16 . Nevertheless, in a 10-year follow-up of a large group, Saxler et al. 15 report that patients with IDs have worse clinical results, namely functional restrictions and reduced working capacity, compared with patients without IDs; furthermore, patients with IDs have an increased tendency for reoperations. Although our investigation covers only the early postoperative results within 2 years after surgery, the analysis of our results supports the aforementioned conclusions.
ID should be considered a serious complication with a multitude of unwanted consequences for patients. Prevention is the best way to avoid the complications and disability related to inadvertent dural tears. Knowledge of the mechanisms and predisposing factors for this serious complication is of utmost importance when planning and performing spinal surgical procedures.
|
2018-04-03T03:18:25.675Z
|
2010-10-01T00:00:00.000
|
{
"year": 2010,
"sha1": "542555a9d2d9a1269769882a74c9ed2068b31441",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b6439d35d69d83ed876e25a1551f912fb7aa3db0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
229713724
|
pes2o/s2orc
|
v3-fos-license
|
Socioeconomic Factors in Patients with Ulnar Nerve Compression at the Elbow: A National Registry-Based Study
Aims To investigate demographics and socioeconomic status in patients with ulnar nerve compression and the influence of socioeconomic factors on patient-reported outcome measurements (PROM) as evaluated by QuickDASH (short version of Disabilities of Arm, Shoulder and Hand) after surgery for ulnar nerve compression at the elbow. Methods Patients operated for primary ulnar nerve compression from 2010 to 2016 were identified in the National Quality Registry for Hand Surgery Procedures (HAKIR). Patients filled out questionnaires before and at three and 12 months after surgery. A total of 1346 surgically treated cases were included. Data from HAKIR were linked to data from Statistics Sweden (SCB) on socioeconomic status (i.e., education level, earnings, social assistance, immigrant status, sick leave, unemployment, and marital status). Results Patients surgically treated for ulnar nerve compression at the elbow differed from the general population with lower levels of education, higher social assistance dependence, a high proportion of unemployment, and lower earnings. However, the results were not clear concerning the influence of socioeconomic factors on the outcome of surgery, except for long-term sick leave. Conclusion Patients surgically treated for ulnar nerve compression at the elbow are socioeconomically deprived, but only a history of long-term sick leave influences the outcome of surgery. This information is crucial in the diagnosis and treatment of these patients.
Introduction
Ulnar nerve compression is the second most common nerve entrapment in the upper extremity [1], with a reported incidence of 25-30 per 100,000 person-years [2,3]. Known risk factors include smoking [4] and occupational factors [5]. Sex distribution varies between studies [3,6], and there is often considerable comorbidity [5][6][7]. There is no clear consensus or evidence on when ulnar nerve compression at the elbow is best treated conservatively and when surgery, a procedure that may induce complications including neuropathic pain [6], is indicated [1]. Often, milder symptoms are treated conservatively, while more severe symptoms or motor impairment are treated surgically.
When using patient-reported outcome measurements (PROM), such as the QuickDASH (short form of Disabilities of the Arm, Shoulder and Hand), outcomes do not seem as favorable as for other surgeries, such as carpal tunnel release [1], but the reason for this is unknown.
Socioeconomic factors are known to affect an individual's perception of health as well as outcomes after surgery and other illnesses. This has been extensively studied in total joint arthroplasty and orthopedic trauma surgery, where socioeconomic factors affect postoperative results and patient health [8]. In ulnar nerve compression at the elbow, a low education level is a risk factor for its development [5], but whether socioeconomic factors influence patients' preoperative symptoms or affect the outcome of surgical treatment for ulnar nerve compression is unknown. The aim of this study was to investigate demographics and socioeconomic status in a population surgically treated for ulnar nerve compression and whether socioeconomic factors affect the outcome of surgery.
Study Population.
We identified patients surgically treated for ulnar nerve compression (ICD-10 code G562 and primary surgical codes (KKÅ97) ACC53, ACC43, or NCK19) in the National Quality Registry for Hand Surgery Procedures in Sweden (HAKIR) during the period from 2010 to 2016. Patients above 16 years of age who are able and willing to provide informed consent are included in the registry; in the present study, only patients ≥ 18 years of age and primary surgeries were included. Patients were asked to fill out the Swedish version of the QuickDASH questionnaire [9] before surgery and at three and 12 months after surgery, either online or by traditional mail, with a reminder by text message if not answered within 48 hours. The QuickDASH contains 11 questions in which the patient is asked to score disability in the affected arm; a total score ranging from 0 to 100 is calculated, where 100 represents the worst possible disability. Data from HAKIR were combined with socioeconomic data from Statistics Sweden (SCB) using personal identification numbers, and with the Swedish National Diabetes Register (NDR) for diabetes status.
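For reference, the standard QuickDASH scoring rule can be sketched as below. The details (11 items rated 1-5, at least 10 answered, score = (mean response - 1) x 25) follow the instrument's usual scoring convention and are stated here as an assumption, not quoted from this paper.

```python
def quickdash_score(responses):
    """QuickDASH disability score, 0 (no disability) to 100 (worst).

    Assumed standard scoring: 11 items rated 1-5; at least 10 must be
    answered; score = (mean item response - 1) * 25. Missing items are
    passed as None.
    """
    answered = [r for r in responses if r is not None]
    if len(answered) < 10:
        return None  # too many missing items to compute a score
    return (sum(answered) / len(answered) - 1) * 25
```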
Socioeconomic Factors. Data on earnings were available from 1990 to 2016. Mean earnings were calculated over the years the patient was between 30 and 65 years of age, to estimate earnings during working years (during 2010-2016, 65 years was the common retirement age in Sweden). A binned variable was created with 25% of the patients in each group. Earnings were indexed to December 2016 using the consumer price index.
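The binned earnings variable (about 25% of patients per group) can be illustrated with a simple rank-based quartile assignment. This is a sketch of the idea only, not the procedure actually used on the SCB data.

```python
def quartile_bin(values):
    """Assign each value to one of four roughly equal-count bins (0-3),
    mirroring a binned variable with ~25% of subjects per group."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = min(3, 4 * rank // len(values))
    return bins
```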
Sick leave was calculated as net days, and only sick-leave episodes exceeding 14 calendar days were included, owing to the way reimbursement is organized in Sweden, where the employer pays for the first two weeks of sick leave. Only citizens who have been employed during the last six months are entitled to paid sick leave. The mean number of sick days was calculated as mean sick days per year of employment, for the years the patient was above 20 years of age during the period for which data were available.
Social assistance data were available from 1990 to 2016. Social assistance, which is registered at the family level, was individualized. We calculated a new variable based on whether the patient had received social assistance once during these years, several times, or never.
Unemployment data was available from 1992 to 2016 and calculated as mean days per year. We also calculated one variable corresponding to whether the patient was employed during the year of surgery or not.
Marital status data were available from 1990 to 2016 and were recorded for the year of surgery. These data included whether the patient was married, a registered partner, divorced or a divorced partner, or widowed.
Educational level was based on the education level that the patient had during the year of surgery. Levels were based on the International Standard Classification of Education (ISCED) [10], as follows: primary: ISCED 0, 1, and 2 (≤9 years of education, compulsory school); upper secondary: ISCED 3 (9-12 years of education); and tertiary: ISCED 4, 5, and 6 (>12 years of education).
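The ISCED-to-category mapping described above is simple enough to state as code; the function name is illustrative.

```python
def education_level(isced):
    """Map ISCED codes to the three education categories used in the study."""
    if isced in (0, 1, 2):
        return "primary"          # <=9 years, compulsory school
    if isced == 3:
        return "upper secondary"  # 9-12 years of education
    if isced in (4, 5, 6):
        return "tertiary"         # >12 years of education
    raise ValueError(f"unknown ISCED code: {isced}")
```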
Statistical Analyses.
A binned variable was created for age, based on equal percentiles (25% of the population in each age group). Nonparametric data is presented as median (interquartile range, IQR). Nominal data is presented as number (%) and was compared using the chi-squared test. The Mann-Whitney U-test was used to compare data when there were two groups, and the Kruskal-Wallis test was used for comparisons between more than two groups. Subsequent pairwise comparisons were adjusted by the Bonferroni corrections for multiple tests. To investigate the effect of socioeconomic factors on QuickDASH scores, we used multivariate linear regression analysis. Each variable was analyzed separately in model 1. In model 2, we analyzed all variables separately but adjusted for age at surgery, sex, and diabetes at surgery. In model 3, all variables were included and adjusted for age, sex, and diabetes at surgery. In the reduced model 1 [11], we included all variables with a p value < 0.3 in model 3. In the reduced model 2, we included all variables with a p value < 0.1 in reduced model 1.
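The stepwise construction of the reduced models (keep variables with p < 0.3 from model 3, then p < 0.1 from reduced model 1) can be sketched as follows. The variable names and p-values below are invented purely for illustration and do not come from the paper.

```python
def reduce_model(p_values, threshold):
    """Keep the variables whose p-value is below the threshold,
    as in the stepwise 'reduced model' construction described above."""
    return [name for name, p in p_values.items() if p < threshold]

# Hypothetical p-values from a full model, for illustration only:
pvals = {"education": 0.25, "earnings": 0.08, "sick_leave": 0.01, "marital": 0.6}
step1 = reduce_model(pvals, 0.3)                          # reduced model 1
step2 = reduce_model({k: pvals[k] for k in step1}, 0.1)   # reduced model 2
```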
We considered each treated arm a separate statistical entity. We considered a p value of <0.05 statistically significant. All calculations were performed using IBM SPSS Statistics version 24 or 25 (SPSS Inc., Chicago, IL).
Results
During the study period, we identified 1278 individuals (1346 arms), who met the inclusion criteria and were surgically treated for ulnar nerve compression at the elbow. The study population has been described previously [12]. The majority of patients were treated with simple decompression (1167/1346; 86%), 150/1346 (11%) were treated with transposition of the ulnar nerve, and 32/1346 (2%) were treated with medial epicondylectomy.
3.2. Age. Socioeconomic factors in different age categories are presented in Table 1. In the youngest group (18-42 years), there were more men than in the other groups. The lowest proportion of immigrants was found in the oldest age group (>63 years). The oldest group also had the lowest number of sick days and days of unemployment, and the least dependence on social assistance (Table 1).
In the study population, 330/1346 (25%) had >12 years of education (Table 1). The group with the lowest level of education scored highest on all occasions in the QuickDASH (Figure 1). A high education level reduced the postoperative QuickDASH score at 12 months when adjusting for age, sex, and diabetes. In model 3, which also adjusted for all other socioeconomic factors, education level was no longer statistically significant (Table 2). Data on education level were missing in 10 (0.8%) cases.
Postoperative QuickDASH scores were lowest in the highest-earning group (Figure 2). Higher earnings reduced postoperative QuickDASH scores at 12 months in the linear regression analysis when adjusting for age, sex, and diabetes. However, when also adjusting for other socioeconomic factors, earnings were no longer a statistically significant determinant of postoperative QuickDASH scores (Table 2).
During the year of surgery, 530/1346 (40%) had work that was classified as manual. Manual work did not affect postoperative QuickDASH scores (Table 2).
Low sick leave reduced postoperative QuickDASH scores at 12 months in linear regression models 1 and 2 but did not remain statistically significant when adjusting for other socioeconomic factors in model 3 (Table 2); it did remain in the reduced models, where long-term sick leave (>39 days) predicted a higher postoperative QuickDASH score. About one-third of cases of working age were unemployed during the year of surgery (Table 1). Unemployment did not affect postoperative QuickDASH scores (Table 2).
Discussion
The main finding of this study was that the study population in general was socioeconomically deprived as defined by low education levels, low earnings, high dependence on social assistance, high unemployment rate, and high sick leave. All these factors might affect surgery outcome, but when adjusting for other socioeconomic factors, only long-term sick leave seemed to affect the patient-reported outcome.
Socioeconomic factors are important determinants in an individual's health and even mortality [13]. Only 25% of the population had more than 12 years of education. To put this in perspective, in 2016, 41% of the Swedish population between 25 and 64 years of age had more than 12 years of education [14]. In distal radius fractures, education is correlated both to postoperative DASH scores and postoperative range of motion, with higher education predicting both better subjective and objective outcomes [15].
In the present study, there was also a high proportion (44%) of the population that had received social assistance at least once during the years 1990-2016; based on a study of the general Swedish population, only about 10% of the general population have received social assistance at least once [16]. In addition, about a third of the studied population of working age was unemployed at the time of surgery, compared with 6-8% of the Swedish population between 2010 and 2016. Another unexpected result of the present study was that being divorced was associated with a better outcome, which contradicts previous studies on other conditions: in cardiac surgery, being divorced, separated, or widowed was associated with 40% greater odds of death or new disability within the first two years after surgery [17]. It is possible that after surgery for ulnar nerve compression the patient is not dependent on spouses and relatives to the same extent as after major surgery. One shortcoming of our data is that we had no record of people living together with their partners while unmarried, a relationship form that is common in Sweden, which might obscure the results. Two other factors that one might expect to influence the occurrence of ulnar nerve compression and the results of surgery, occupational factors and immigrant status, showed no signs of being important. Occupational factors, e.g., whether the patient has a profession that includes manual labor, are often mentioned as risk factors for the development of ulnar nerve compression [5,18,19]; however, we did not find any evidence that manual labor contributed to worse outcome following surgery. Furthermore, immigrant status might be a factor contributing to inequalities in surgical outcomes [20].
We did not, however, find any indication that this is the case in ulnar nerve compression, in contrast to carpal tunnel syndrome [21].
Socioeconomic status is known to influence a number of other conditions [8]. Compared with patients treated for the other, more common nerve compression lesion, carpal tunnel syndrome, our population was more socioeconomically deprived. This fact, combined with the finding that outcomes measured with the QuickDASH were not very favorable, raises concerns about treatment indications. One study of a European population of approximately 25,000 individuals found that arm or hand pain was more prevalent in individuals with a low education level than in those with a higher level of education [22]. Diagnosis of ulnar nerve compression is based on patient history and clinical examination, often with electrophysiology as a complement, although its utility for diagnosis and for predicting outcome may be debated [23].
In a US population, private insurance, which might indicate higher socioeconomic status, was associated with faster evaluation and surgical treatment for ulnar nerve compression than public insurance [24]. In Sweden, the vast majority of health care is publicly funded, but it is possible that patients with low socioeconomic status and low health literacy do not seek care, seek it at a later stage, or have less access to it, leading to delayed diagnosis and treatment, possibly to a stage where surgical intervention is unavoidable or severe and irreversible symptoms have developed. A higher preoperative McGowan score (used to classify sensorimotor deficit in ulnar nerve compression, where a higher score means a greater deficit) is associated with worse postoperative DASH results [6]. Thus, part of the explanation for why such a large proportion of this surgically treated population was socioeconomically deprived could be that their compression was severe at presentation.
In total, our population with surgically treated ulnar nerve compression differs from the general population in several respects: a high proportion was unemployed, received social assistance, and had low education and low earnings. However, we did not find any conclusive evidence of an effect of socioeconomic factors on the outcome of surgery for ulnar nerve compression. The only variable that seemed to matter was long-term sick leave from all causes (i.e., not adjacent to the surgery). This might be obscured by the fact that many patients in our study had low socioeconomic status, but it is in accordance with one previous study that found no association between socioeconomic status (based solely on occupation) and ulnar nerve compression [4]. It is also possible that comorbidities, such as shoulder or neck pain, are present to a higher degree in this population than in the general population, contributing to the high rate of sick leave and to the finding that long-term sick leave was associated with worse results. Outcomes after surgery, as measured by the QuickDASH, were not impressive in any of the groups, in agreement with our previously published data. In absolute numbers, education level and level of earned income seemed to affect outcomes; this result, however, did not reach statistical significance in the multivariate regression model, suggesting that unfavorable outcomes have a multifaceted background. Thus, the patient's whole situation should be taken into consideration when planning appropriate treatment and rehabilitation.
The major strengths of this study are the large number of patients and the various socioeconomic factors included. The main limitation is the response rate, which was around 30%. This might introduce a selection bias, even though we did not find any large differences between responders and nonresponders.
Conclusion
Patients having surgery for ulnar nerve compression greatly differ from the general Swedish population concerning socioeconomic factors, but only a history of long-term sick leave influences the outcome of surgery. The patients' whole situation should be taken into consideration when planning appropriate treatment and rehabilitation.
Data Availability
Public access to the data is restricted by the Swedish Authorities (Public Access to Information and Secrecy Act; http://www.government.se/information-material/2009/09/public-access-to-information-and-secrecy-act/), but data can be made available for researchers after a special review that includes approval of the research project by both an Ethics Committee and the authorities' data safety committees.
Impact of Inbreeding and Ancestral Inbreeding on Longevity Traits in German Brown Cows
Simple Summary Observed increasing levels of inbreeding in German Brown cattle imply the need to evaluate the resulting effects. If increasing levels of inbreeding come along with a reduction in the phenotypic performance, this is referred to as inbreeding depression. However, not all inbreeding is assumed to be unfavorable. In the context of inbreeding in combination with selection, it is presumed that negative variants can be eliminated from the population over time, implying ancestral inbreeding to be less harmful than new inbreeding. This is referred to as purging. To evaluate the effect of purging, ancestral inbreeding coefficients have been developed that allow distinguishing between new and ancestral inbreeding. The aim of this study was to evaluate inbreeding depression and purging for lifetime, lifetime performance, survival, and culling rates due to different reasons in German Brown cows, by calculating the effects of classical and ancestral inbreeding coefficients. All longevity traits under study were affected by inbreeding depression. As the effects of ancestral inbreeding coefficients were significantly negative for lifetime and lifetime performance, while new inbreeding had no significant effect, there was no evidence of purging in the population under study. Thus, considering inbreeding levels in future mating plans helps to avoid a further decline in longevity due to inbreeding. Abstract A recent study on the population structure of the German Brown population found increasing levels of classical and ancestral inbreeding coefficients. Thus, the aim of this study was to determine the effects of inbreeding depression and purging on longevity traits using classical and ancestral inbreeding coefficients according to Kalinowski (2000) (Fa_Kal, FNew), Ballou (1997) (Fa_Bal), and Baumung (2015) (Ahc). For this purpose, uncensored data of 480,440 cows born between 1990 and 2001 were available.
We analyzed 17 longevity traits, including herd life, length of productive life, number of calvings, lifetime and effective lifetime production for milk, fat, and protein yield, survival to the 2nd, 4th, 6th, 8th, and 10th lactation number, and the culling frequencies due to infertility or udder and foot and leg problems. Inbreeding depression was significant and negative for all traits except culling due to udder and to foot and leg problems. When expressed in percentages of genetic standard deviations, inbreeding depression per 1% increase in inbreeding was −3.61 to −10.98%, −2.42 to −2.99%, −2.21 to −4.58%, and 5.13% for lifetime production traits, lifetime traits, survival rates, and culling due to infertility, respectively. Heterosis and recombination effects due to US Brown Swiss genes were positive and counteracted inbreeding depression. The effects of FNew were not significantly different from zero, while Fa_Kal had negative effects on lifetime and lifetime production traits. Similarly, the interaction of F with Fa_Bal was significantly negative. Thus, purging effects could not be shown for longevity traits in German Brown. A possible explanation may be seen in the breed history of the German Brown, in which, through the introgression of US Brown Swiss bulls, ancestral inbreeding increased and longevity decreased. Our results show that reducing a further increase in inbreeding in mating plans is advisable to prevent a further decline in longevity due to inbreeding depression, as purging effects were very unlikely in this population.
Introduction
Targeted selection programs in dairy cattle populations led to enormous genetic progress, especially in milk production traits, over recent decades, at the cost of increasing levels of inbreeding as a result of focusing on a low number of highly selected elite bulls. German Brown is characterized by high milk production with high quality and superiority in length of productive life compared to Holsteins and German Fleckvieh. Introgression of US Brown Swiss bulls since 1966 contributed to the improvement in milk production traits in German Brown and, on the other hand, has led to increasing levels of individual inbreeding in animals with higher proportions of US Brown Swiss genes [1]. A previous study on milk performance for the first three lactation numbers of German Brown from the birth years between 1980 and 1992 indicated an inbreeding depression of 10 to 13 kg milk yield, 0.36 to 0.51 kg fat yield, and 0.35 to 0.47 kg protein yield per 1% increase in the inbreeding coefficient [2]. A slight tendency to lower inbreeding coefficients was seen in cows surviving the first three lactations compared to cows that left the herd before completing three lactations [2]. In Swiss Brown cows with first calving between 1971 and 1986, first lactation milk and fat yield were reduced by 26 and 0.08 kg per 1% of the inbreeding coefficient [3]. In Austrian Brown cows with first parturition between 1979 and 1991, estimates for inbreeding depression per 1% increase in the inbreeding coefficient were 6.3 to 10.4 kg milk yield, but close to zero for fat and protein percentages [4]. Length of productive life and lifetime energy-corrected milk yield decreased by 4.3 days and 165 kg per 1% inbreeding, respectively [4]. In addition, juvenile mortality of calves and heifers of Austrian Brown, born in the years 2001 to 2007, has been shown to be negatively affected, by 0.49% per 1% increase in inbreeding [5]. Next to the effect of inbreeding depression, there is an ongoing debate about the existence of positive effects of inbreeding caused by inbred ancestors, referred to as purging [6,7]. Purging assumes that intensive selection may lead to an elimination of deleterious alleles from a population, therefore counteracting inbreeding depression [8]. This means that after a few generations of inbreeding, the best-performing individuals survive and reach the performance level of the non-inbred or less inbred individuals, or even achieve higher performance, while the poorer-performing individuals carrying deleterious alleles die or do not reproduce, so that these alleles are eliminated from the population [9,10]. According to Dickerson [11], the response to selection is stronger in mating systems with complete sibling mating than in systems with random mating. To date, the presence of purging has been studied in various experiments, including self-fertilization in plants [12] and sibling mating in animals [13][14][15][16], mainly examining traits related to life history [17], or in small captive populations [18,19]. For example, a previous breeding experiment with quails showed that populations with inbreeding in previous generations were less prone to inbreeding depression than populations without an inbreeding history. Furthermore, the reproductive performance of populations with intensive inbreeding was better after a few generations than that of populations with random mating [13,20]. However, the intensity of purging depends on the extent to which a trait is affected by deleterious alleles and how these alleles affect fertility and survival. Lethal or semi-lethal alleles are more easily eliminated than less deleterious ones [8,9,17]. To analyze the effect of purging based on pedigree data, ancestral inbreeding coefficients have to be calculated [18,21,22]. Only few studies have reported ancestral coefficients for different Holstein dairy cattle populations. The effects of the ancestral inbreeding coefficients on production, fertility, birth traits, and survival traits were not consistent [6,7,[23][24][25]. For example, in Irish and Dutch Holstein, evidence of the presence of purging effects for production traits could be shown [6,23], whereas results for Canadian and Iranian Holstein were not significant or even contradictory [24,25]. As the extent of inbreeding depression as well as purging may depend on different factors, such as the population structure, the selection history, and the considered trait [8,10], it is crucial to analyze and monitor these effects in every population to adopt appropriate measures. For the German Brown population, classical and ancestral inbreeding coefficients have recently been reported, and increasing trends with birth years were shown [26]. The breed proportion of US Brown Swiss is increasing, but length of life and productive life were not positively correlated with increasing breed proportions of US Brown Swiss; therefore, this follow-up study in German Brown was initiated with the objective to analyze inbreeding depression and purging effects on longevity traits, including herd life (HL), length of productive life (LPL), lifetime milk production, and survival and culling incidences. We used an uncensored data set of German Brown cows born in the years 1990 to 2001. In order to account for the US Brown Swiss breed percentage, heterosis and recombination effects were considered simultaneously in animal models.
Materials and Methods
The data were obtained from the official milk recording organization of Bavaria (Landeskuratorium der Erzeugerringe für tierische Veredelung in Bayern e.V., LKV). This data set comprised all German Brown with their available pedigrees and lifetime milk performance since 1990 [26]. For the analysis of longevity traits, all first-calving heifers born between 1990 and 2001 were included. All these animals had known parents as well as registered dates and reasons why they left the herd and, therefore, complete lifetime records. The final data set contained 480,440 cows with known parents and a corresponding pedigree file of 820,558 individuals. The data set was used to analyze measures of longevity, defined as functional, as they were corrected for the relative fat and protein yield of the single cow within the herd [27,28]. As suggested by Schuster et al. (2020) [29], herd life (HL) was defined as the time period from birth until culling. Length of productive life (LPL) included the time period from first calving to culling. Number of lactations (NC) was the number of lactations the respective cow reached in her lifetime. Lifetime milk yield (LMY), fat yield (LFY), and protein yield (LPY) were defined as total yield across all lactations. Lifetime efficiency for milk yield (EffLMY), fat yield (EffLFY), and protein yield (EffLPY) was calculated as the ratio of lifetime production to herd life in days. Survival to lactation two (Surv1), four (Surv3), six (Surv5), eight (Surv7), and ten (Surv9) was calculated as the proportion of cows that reached the respective lactation number; e.g., cows that survived the first lactation and had at least a second calving date were encoded with Surv1 = 1 and all other cows with 0. Results are displayed as relative frequencies. The frequencies of the three most common culling reasons, namely culling due to foot and leg problems (Cul_CL), infertility (Cul_INF), and udder problems (Cul_UD), were analyzed as binary traits, indicating that the cow left the herd for the respective specific reason. Classical and ancestral inbreeding coefficients were available from our previous study and merged with the present data set including 448,440 cows [26].
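The trait definitions above translate directly into simple derivations from each cow's lifetime record. A minimal sketch in Python (the field names are hypothetical; the actual LKV record layout is not reproduced here):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CowRecord:
    # Hypothetical field names for illustration only.
    birth: date
    first_calving: date
    culling: date
    calving_dates: list      # one date per calving
    lifetime_milk_kg: float  # total milk yield across all lactations

def longevity_traits(c: CowRecord) -> dict:
    hl = (c.culling - c.birth).days            # herd life (days)
    lpl = (c.culling - c.first_calving).days   # length of productive life
    nc = len(c.calving_dates)                  # number of calvings/lactations
    eff_lmy = c.lifetime_milk_kg / hl          # lifetime efficiency (kg/day)
    # SurvN = 1 if the cow survived lactation N, i.e., had at least N+1 calvings
    surv = {f"Surv{n}": int(nc >= n + 1) for n in (1, 3, 5, 7, 9)}
    return {"HL": hl, "LPL": lpl, "NC": nc, "EffLMY": eff_lmy, **surv}
```

For example, a cow with four calvings scores Surv1 = Surv3 = 1 but Surv5 = 0, matching the encoding described in the text.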
To account for the effect of crossbreeding with US Brown Swiss bulls starting in 1966, the coefficients of heterosis, HET = P_S(1 − P_D) + P_D(1 − P_S), and recombination loss, REC = P_D(1 − P_D) + P_S(1 − P_S), were defined based on the maternal (P_D) and paternal (P_S) breed proportions of Brown Swiss genes [30].
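Both coefficients depend only on the sire's and dam's US Brown Swiss gene proportions; a quick sketch of the Dickerson-type definitions given above:

```python
def het_rec(p_sire: float, p_dam: float) -> tuple:
    """Heterosis and recombination-loss coefficients from the paternal
    (p_sire) and maternal (p_dam) US Brown Swiss breed proportions."""
    het = p_sire * (1 - p_dam) + p_dam * (1 - p_sire)
    rec = p_dam * (1 - p_dam) + p_sire * (1 - p_sire)
    return het, rec

# A purebred US Brown Swiss sire (p = 1.0) on an Original Brown dam (p = 0.0)
# gives maximal heterosis and no recombination loss in the F1:
print(het_rec(1.0, 0.0))  # (1.0, 0.0)
```

Mating two 50% crossbreds instead gives HET = 0.5 and REC = 0.5, which is why recombination loss only appears from the second crossbred generation onward.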
Statistical Analysis
The effect of the different inbreeding coefficients was calculated by regressing phenotypic values on the inbreeding coefficient and accounting for animal and environmental effects. A linear univariate animal model (model 1), parameterized for non-genetic effects according to Punsmann, Duda and Distl [28], was employed. The expected values for F, HET, and REC in the current population were calculated based on the observed estimates for the regression coefficients obtained from model 1 and the average coefficients of F, HET, and REC, which were 0.018, 0.441, and 0.369, respectively. As Fa_Bal does not consider the probability that an individual is identical by descent (IBD) itself, we included the interaction of Fa_Bal with F, as suggested by Ballou (1997) [18], in model 2, where fixed and random effects were defined as above, F is the coefficient of inbreeding, F×Fa_Bal is the interaction of the coefficient of inbreeding with the ancestral inbreeding coefficient according to Ballou [18], and b4 and b5 are the corresponding linear regression coefficients. Model 3 included the effects of either Ahc (ancestral history coefficient according to Baumung, Farkas, Boichard, Mészáros, Sölkner and Curik [22]) or Fa_Bal (ancestral inbreeding coefficient according to Ballou [18]) as F_ANC together with F simultaneously. In model 4, the effects of the inbreeding coefficients according to Kalinowski, Hedrick and Miller [21] were analyzed by considering Fa_Kal and FNew simultaneously in the model. Variance components were estimated using VCE 6.0.2 [31] and were used for the estimation of random and fixed effects as well as linear regressions using PEST, version 4.2.6. Further statistical analyses were performed in SAS, version 9.4 (Statistical Analysis System, Cary, NC, USA, 2023). The model employed for the estimation of heritabilities included all fixed and random effects of the above-mentioned models, and the proportion of US Brown Swiss in classes of 10% (<31%, 31-40%, 41-50%, 51-60%, 61-70%, 71-80%, >80%) [21]. Animal models with a binomial distribution function for 0/1 traits yielded very similar estimates for heritabilities, and thus we employed linear animal models.
Results
The results from model 1 revealed a significant unfavorable effect of F on all longevity traits studied, except for Cul_CL and Cul_UD (Table 3). All estimates from linear regression coefficients refer to 100% inbreeding and consider the difference between a non-inbred and a fully inbred animal. A 1% increase in inbreeding reduces HL and LPL by 7.3 and 7.7 days, respectively. Regarding survival, inbreeding depression was higher for cows with fewer lactation numbers than for cows with more lactation numbers and was highest for Surv3 (Table 3). When expressed in percentages of genetic standard deviations, lifetime production traits were more influenced by inbreeding than HL, LPL, NC, and survival to the following lactation number (Surv1 to Surv9), which showed decreasing inbreeding depression when cows reached higher lactation numbers (Supplementary Table S3). Inbreeding significantly increased the frequency of Cul_INF, with an increase of 0.399 per 100% inbreeding. Significant positive heterosis effects were observed for most traits, with 100% heterosis increasing HL by 161 days and LMY by 1700 kg, and increasing Surv3 and Surv5 by 6.8% and 5.9%, respectively. Recombination effects were also significantly positive for HL, LPL, NC, and Surv1 to Surv7. No significant heterosis and recombination effects were found for effective lifetime production traits (Table 3). Regarding the expected cumulated effects of inbreeding, heterosis, and recombination for the current population, the combined effects were favorable for all traits apart from effective lifetime production and culling due to infertility. Projecting these effects to the birth year 2014, based on coefficients from 2014, heterosis and recombination effects decrease, while inbreeding effects increase, leading to expected decreasing combined effects and even to more unfavorable negative effects for LPY and effective lifetime production (Supplementary Table S4).
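The expected population-level contributions quoted here follow from multiplying each regression coefficient (expressed per 100% of the respective coefficient) by its population mean. A rough sketch using the herd-life estimates reported above (−7.3 days per 1% F, i.e., −730 per unit, and +161 days per 100% heterosis, with population means F = 0.018 and HET = 0.441):

```python
def expected_effect(b_per_unit: float, mean_coef: float) -> float:
    """Expected phenotypic contribution of one coefficient: regression
    coefficient per unit (i.e., per 100%) times the population mean."""
    return b_per_unit * mean_coef

# Herd life (days): inbreeding vs. heterosis contributions at population means
f_part = expected_effect(-730.0, 0.018)   # inbreeding depression
het_part = expected_effect(161.0, 0.441)  # heterosis
print(round(f_part), round(het_part))     # -13 71
```

The rounded values (−13 and roughly +70 days) match the combined-effect figures discussed later in the text for the current population.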
Results for the effects of ancestral inbreeding coefficients are presented in Tables 4 and 5. Using model 2, the regression coefficients of the interaction F×Fa_Bal were significantly negative for HL, LPL, NC, lifetime, and effective lifetime production traits, and the corresponding effects of F were slightly reduced compared with the results of model 1. Significant negative effects of Fa_Kal were found for HL, LPL, NC, Surv3, and lifetime production traits using model 4, whereas the effect of FNew was close to zero and not significantly different from zero for these traits (Table 4).
Considering Ahc, negative regression coefficients were observed for HL, LPL, NC, lifetime production, and Surv3 and Surv5. Estimates for regression coefficients from model 4 were similar to the estimates obtained from model 1 (Table 5). Moreover, the regression coefficients for Ahc equaled those of Fa_Bal in the corresponding models; therefore, only the results for Ahc are presented.
The expected phenotypes of highly (95% percentile) and lowly (5% percentile) inbred cows were calculated to further illustrate the effect of changing the inbreeding coefficients. For classical inbreeding, the differences corresponded to 33 days of HL and 873 kg of LMY. Differences in ancestral inbreeding levels were lower, with 7 and 11 days of HL and 125 and 174 kg of LMY for Fa_Kal and Ahc, respectively (Supplementary Table S5).
Discussion
Inbreeding depression was obvious for all traits analyzed except Cul_CL and Cul_UD. On the other hand, all longevity traits and lifetime performance traits except lifetime efficiency showed significantly positive heterosis effects through crossbreeding with US Brown Swiss bulls. Recombination effects were also positive but smaller than heterosis effects and were of importance for longevity traits only. Models regarding ancestral inbreeding coefficients revealed significant negative effects, and thus we were not able to demonstrate purging effects. Under purging, ancestral inbreeding would have been expected to exert positive effects on the traits.
Lifetime production traits were less negatively affected by inbreeding in Austrian Simmental, Simmental × Red Friesian, and Brown Swiss compared to this study. In the latter breeds, fat-corrected milk was reduced by −136.7, −109.9, and −165.0 kg per 1% inbreeding, respectively [4]. Similarly, US Holsteins showed a decrease due to 1% inbreeding of −177.17 kg, −6.01 kg, and −5.45 kg lifetime milk, fat, and protein yield, respectively [33]. Nevertheless, lifetime was limited in the Austrian study to 10 lactations [4] and in the US study to 84 months after first calving [33], whereas our study included all records until the cows left the herd.
The higher impact of inbreeding on lifetime performance compared to the longevity traits HL, LPL, and NC becomes obvious when expressed in percentages of their phenotypic and genetic standard deviations. This may indicate that the reduced lifetime production of German Brown cattle in this study is not only due to the shortened LPL, but that the production traits themselves may be negatively influenced by inbreeding. This is in agreement with previous studies on different dairy cattle populations that reported inbreeding depression for milk performance in the first three lactations [6,[23][24][25]. The highest effects due to inbreeding depression, expressed in percentages of the genetic standard deviations, were demonstrated for LMY, Surv1, and EffLFY. Therefore, inbreeding depression may have the largest negative effects on the expected breeding progress for these traits.
In order to evaluate contributions from US Brown Swiss genes, we also employed a model with F and its interaction with classes of US Brown Swiss genes. These analyses showed, for all traits under analysis, that cows with less than 50% US Brown Swiss genes had less inbreeding depression, that inbreeding depression increased up to 70-80% US Brown Swiss genes, and that it then decreased with even higher proportions of US Brown Swiss genes. This may indicate that the negative effects of inbreeding are related to the proportions of US Brown Swiss genes and the strategy of using bulls with different proportions of US Brown Swiss genes. Introducing US Brown Swiss bulls may be associated with less inbreeding depression, because these US Brown Swiss bulls may not be as closely related to the German Brown population as German Brown bulls with less than 100% US Brown Swiss genes.
Inbreeding depression may have caused a higher risk for younger cows to be culled. Cows which survived more lactations may be assumed to be at a lower risk of being culled due to the effects of inbreeding depression. Thus, cows surviving to higher lactation numbers are expected to show decreasing inbreeding coefficients (Supplementary Tables S5c and S6). The decreasing inbreeding coefficients in cows reaching a higher lactation number reduce the size of inbreeding depression in these cows. Selection of cows for the following lactation numbers may also lower the increase in inbreeding and the amount of inbreeding depression in the next generation. Survivors to higher lactations may have more progeny than cows leaving the herd early; therefore, the increase in inbreeding and inbreeding depression may be smaller compared to a situation where all cows have the same chance to produce progeny. The regression coefficients were highest for Surv3, implying that an inbred cow is at highest risk of leaving the herd before the fourth lactation. As this is also the period of time when cows leave the herd on average, i.e., with 3.51 calvings and an LPL of 3.46 years in the studied population, this further supports the lifetime-limiting effect of inbreeding. Nevertheless, the effect of inbreeding on survival was small, as it would explain a reduction of only 0.8% (−0.8% = 0.018 × (−0.457) × 100) for Surv3 in relation to the average degree of inbreeding in the population studied. With respect to survival to the second lactation, a comparable regression coefficient of −0.3 was reported for Irish Holstein based on 42,723 survival records [23]. The mean level of inbreeding was higher in Irish Holstein, with F = 0.0268, than in German Brown, with 0.018, though. A decrease in survival from the second through fifth and sixth lactation in comparison to lactation one was observed in American Holstein and Jersey, respectively, with increasing inbreeding [34,35]. When considering the stayability to 48 months of age, the effects of inbreeding in US Ayrshire, Guernsey, Holstein, Jersey, and Brown Swiss ranged from −0.011 to −0.002 in a linear sire model and were closer to zero than the estimates for Surv1 and Surv3 in this study [36]. Because of the high standard errors of 0.0017, the authors concluded that inbreeding had no negative effect on stayability to 48 months [36]. However, it has to be considered that the degree of inbreeding in that study was as low as 0.009. When longevity was expressed as relative culling risk, small but significant effects were found in Canadian Holstein, Jersey, and Ayrshire compared with non-inbred cows. The effect was greater for cows in inbreeding classes beyond 12.5% [37]. In US Jersey cows that calved for the first time between 1981 and 2000, a slightly higher culling risk was observed in animals with an inbreeding coefficient greater than 10% than in animals with an inbreeding coefficient less than 5% [38]. Regarding the reasons why cows left the herd, the increase in the frequency of culling due to infertility was related to inbreeding depression. Infertility is the most common reason for German Brown cows leaving the herd and is one of the most common reasons in dairy cattle populations in general [39][40][41]. The proportion of cows leaving the herd due to infertility has already been shown to be highest in the lower lactation numbers in German Brown [42] and Australian dairy cattle [41]. Cows surviving fewer lactations were more inbred in this study than cows surviving higher lactations; thus, it can be concluded that German Brown cows with higher levels of inbreeding are more likely to be affected by infertility. This is in agreement with the results on Spanish Holsteins, where cows with high inbreeding coefficients (6.25-12.5%) have a 1.68% lower pregnancy rate than non-inbred cows [43]. In general, negative effects of inbreeding on fertility have already been reported for different dairy cattle populations [6,23,[43][44][45]. Previous studies reported positive correlations between fertility and longevity traits, further explaining the link between inbreeding, longevity, and fertility [46,47]. Thus, preventing inbreeding may improve fertility and reduce culling due to infertility, which would lead to increasing HL and LPL [42].
We concurrently considered heterosis and recombination effects because the German Brown is a crossbred population of Original Brown and US Brown Swiss with an increasing trend in breed proportions of US Brown Swiss [26]. Considering the effects in relation to the mean heterosis and inbreeding coefficients of the population under study, heterosis effects are positive and counteract the negative impact of inbreeding. Using population averages, heterosis is expected to increase HL by 70 days, while inbreeding decreases HL by 13 days. Similar expectations were found for lifetime production traits. Heterosis estimates for Surv1-5 in this study range from 3.4% to 6.8% and are in the range of the results from studies of crossbred populations involving various dairy cattle breeds from New Zealand [48], Sweden [49], and Denmark [50]. The introduction of US Brown Swiss genes was performed with only a small number of sires that were mated frequently within the population. Thus, the high breed proportion of today's German Brown population stems from a small gene pool of US Brown Swiss genetics [26]. This means that crossing with US Brown Swiss sires led to increasing levels of inbreeding, as co-ancestries between cows rapidly increased in the first crossbred generations [26]. This may furthermore explain the positive and significant recombination effects for HL, LPL, NC, and survival. When summarizing the effects of heterosis, recombination, and inbreeding, the positive combined effects suggest a balanced contribution of inbreeding and the introduction of new genes of US Brown Swiss. When projecting these results to the 2014 birth cohort, based on the data of our previous study [26], the combined effects for all traits decrease, mainly due to the increasing trend in inbreeding, while the increasing breed proportion of US Brown Swiss is expected to lead to decreasing heterosis effects. Nevertheless, it has to be considered that estimates of inbreeding effects may change over time, e.g., due to purging; thus, the effects for future generations may deviate from the extrapolation of the current estimates.
The breeding history of the German Brown is also important for the interpretation of ancestral inbreeding effects and purging. So far, only a few studies have analyzed the effects of ancestral inbreeding coefficients in dairy cattle populations, and longevity traits have rarely been under study (Supplementary Table S7). In German Brown cattle, the effects of ancestral inbreeding coefficients were negative and significant for lifetime and lifetime performance traits. None of the models accounting for ancestral inbreeding coefficients revealed evidence of purging. Considering the definition of Ballou (1997) [18], a significantly favorable regression coefficient of the interaction of F with Fa_Bal would provide evidence for purging, while this study revealed significant negative effects for HL, LPL, NC, and lifetime production. Moreover, the effects of Fa_Bal were also negative when considered in the model as a simple regression coefficient alone. In Irish Holsteins, a positive, but not significant, interaction of F with Fa_Bal was observed for survival from first to second lactation and 305-day production traits, while the effect in the simple regression model with Fa_Bal was significantly positive only for milk and protein yield [23].
The approach with Ahc assumes that alleles that have met more frequently in the past are less likely to have a detrimental effect than alleles that have never, or only a few times, been identical by descent (IBD) before [22]. Since the inbreeding coefficient indicates how often a randomly selected allele was IBD during pedigree segregation, an increasing Ahc could be associated with a beneficial effect on the phenotype [6,22]. In Dutch Holsteins, positive effects of Ahc were reported for first lactation protein yield [6], whereas the current study showed only negative effects of Ahc on lifetime and lifetime production traits. In addition, inclusion of Ahc alone or simultaneously with F resulted in similar estimates for the regression coefficients of Ahc. We also tested the interaction of Ahc and F using model 2, which revealed results comparable to the interaction of Fa_Bal and F. In general, results for Fa_Bal and Ahc were very similar in this study, with slightly higher estimates for Fa_Bal for some of the traits when included in the different models, which may indicate an equal meaning of these ancestral inbreeding measures, at least for the current pedigree structure of German Brown cows, where the correlation of Fa_Bal and Ahc was close to one [26,51].
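Pedigree-based ancestral coefficients such as Fa_Bal and Ahc are typically computed by gene-dropping simulation: founders receive unique allele labels, alleles segregate randomly down the pedigree, and IBD events are counted. As an illustration of this underlying machinery on a toy pedigree (illustrative names; here used only to recover the classical inbreeding coefficient F, not Ahc's per-ancestor bookkeeping):

```python
import random

def gene_drop_F(pedigree, target, n_sim=20000, seed=1):
    """Monte Carlo estimate of F by gene dropping.

    `pedigree` maps id -> (sire, dam), or None for founders, and is assumed
    ordered parents-before-offspring.  F is estimated as the fraction of
    simulations in which `target` receives two alleles identical by descent.
    """
    rng = random.Random(seed)
    ids = list(pedigree)
    ibd = 0
    for _ in range(n_sim):
        genos = {}
        next_allele = 0
        for i in ids:
            parents = pedigree[i]
            if parents is None:                 # founder: two unique alleles
                genos[i] = (next_allele, next_allele + 1)
                next_allele += 2
            else:                               # random allele from each parent
                s, d = parents
                genos[i] = (rng.choice(genos[s]), rng.choice(genos[d]))
        a, b = genos[target]
        ibd += (a == b)
    return ibd / n_sim

# Full-sib mating: X is the offspring of siblings C and D, so F(X) = 0.25.
ped = {"A": None, "B": None, "C": ("A", "B"), "D": ("A", "B"), "X": ("C", "D")}
print(gene_drop_F(ped, "X"))  # ≈ 0.25
```

Ancestral coefficients extend the same simulation by additionally recording, for each dropped allele, whether it was already IBD in one of the target's ancestors.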
According to Kalinowski et al. (2000) [21], successful purging would be present if FNew had a more detrimental effect on the trait than Fa_Kal. In contrast, in this study the effects of FNew were close to zero and not significant, while Fa_Kal had significantly negative effects on HL, LPL, NC, and lifetime production. These results indicate that the detrimental effects of inbreeding on longevity traits are mainly due to ancestral generations. In Irish Holsteins, a significantly negative effect of Fa_Kal was reported for survival to the second lactation, while FNew was not significant [23]. For 305-day production traits, FNew was responsible for greater losses than Fa_Kal in Irish Holsteins [23]. Also, in Dutch Holsteins, FNew was significantly negatively associated with first lactation yield [6]. No significant effects of Fa_Kal and FNew were observed in Canadian Holsteins for lactation yield, but for milk and protein yield the effects of FNew were unfavorable while those of Fa_Kal were favorable [24].
Our results for German Brown using different models to account for ancestral inbreeding were consistent and did not show favorable effects of inbreeding from more ancient generations on lifetime and lifetime performance traits. Therefore, we assume that purging effects are very unlikely. However, the informative value of the different models used strongly depends on the structure of the pedigree. Deeper pedigrees are likely to reveal more inbreeding, which could also affect the results in terms of inbreeding depression and purging. The depth of the pedigree in this study was comparable to other studies, as we have discussed previously [19].
There may be a few reasons why purging was not detected in the German Brown population for longevity traits, but instead negative genetic contributions of the ancestral generations were discovered. First, the efficiency of purging depends on the structure of inbreeding within the population and the rate at which inbreeding increases [8], implying differences in success due to different breeding and selection histories. The overall level of inbreeding and ancestral inbreeding in the current population was not high due to introgression of US Brown Swiss bulls; thus, purging might possibly occur in future generations, where a further increase in inbreeding has been observed [26]. Additional analysis of survival traits using birth year cohorts from 2002 to 2008 and records through 2018 for F_a_Kal and F_New gave estimates for Surv1 of 0.0998 ± 0.3412 and −0.1168 ± 0.0522 with p-values of 0.7698 and 0.0251, respectively, and for Surv3 estimates of 0.0159 ± 0.4229 and −0.11515 ± 0.0646 with p-values of 0.9699 and 0.07488, respectively. Mean values and standard deviations for F_a_Kal and F_New were 0.00183 ± 0.00264 and 0.01569 ± 0.01695, respectively. These preliminary data suggest that significant purging effects are not yet present, even though positive effects for F_a_Kal were observed. On the other hand, estimates for F (Surv1: −0.2790 ± 0.0438, p-value < 0.001 and Surv3: −0.5821 ± 0.0548, p-value < 0.001) and F_New indicated larger negative effects due to inbreeding depression. A similar picture emerged for effective lifetime performance for milk, fat, and protein yield in this data set, with positive non-significant estimates for F_a_Kal and negative non-significant estimates for F_New. Purging may not even counterbalance the negative effects of new inbreeding in these data from 2002 to 2008. Other opportunities for including survival in more recent data would be multiple-trait animal models that only regard survival up to the first four lactation numbers, but split the first or each lactation into three different periods to account for the distribution of reasons for culling in the different lactation periods. These multiple-trait animal models need to be tested to see whether they can be extended to cows surviving more than seven or nine lactations if records from the younger birth cohorts are incomplete [52][53][54].
The increase in inbreeding in the ancestral generations was mainly due to the repeated use of a limited number of bulls. In the German Brown population, introgression with US Brown Swiss bulls started in 1966, resulting in higher milk production at the cost of a steeper increase in inbreeding [26] and reduced lifetime [28]. Thus, selecting towards higher milk production favored the more intense use of US Brown Swiss genetics. Sufficient purging is characterized by the elimination of deleterious genes. Through the introduction of US Brown Swiss, more genes were introduced that negatively influenced longevity, and inbreeding depression did not decline through purging and the elimination of unfavorable genes associated with longer lifetime in more inbred animals. Thus, it is likely that the negative ancestral effects on longevity traits stem predominantly from ancestors with a higher breed proportion of US Brown Swiss.
Furthermore, the greatest success of purging has been found under high selection pressure [9]. In the German Brown, the productivity traits fat and protein yield and protein percentage carried an economic weight of 48% until 2015, resulting in high selection pressure on these traits. In contrast, the economic weights for functional traits such as longevity and fertility were only 16.1% and 8.6%, indicating less intensive selection. Estimation of breeding values for longevity started in 1995, so these traits had less time to express purging. In addition, purging depends on the genetic structure of the trait [7]. Lower heritabilities of functional traits compared to production traits make selection less successful than for productivity traits. To date, ancestral inbreeding effects on fertility were mostly not significant or at least less conclusive regarding purging [6,23,24]. Thus, assuming that both fertility and longevity are functional traits subject to lower selection pressure, our results are consistent with these previous findings.
Conclusions
Significant inbreeding depression was shown in German Brown cows for lifetime, lifetime production, and effective lifetime production traits. However, compared to the current level of inbreeding, the negative effects of inbreeding depression are counterbalanced by positive heterosis and recombination effects, leading to positive combined effects. We were not able to demonstrate purging, as none of the models accounting for ancestral inbreeding revealed positive effects on lifetime, lifetime production, survival, and main culling reasons for F_a_Kal, F_a_Bal, Ahc, and F×F_a_Bal. Inbreeding depression was predominantly caused by ancestors from more ancient generations rather than by inbreeding in newer generations. The intensive use of US Brown Swiss bulls in the first generations after immigration started may also have contributed to these outcomes. In addition, longevity traits were under less selection pressure at this time; thus, positive effects due to selection against inbreeding depression are not likely to appear in the data of the present study. It is therefore worth considering the dynamics of the different ancestral inbreeding coefficients in future generations. As ancestral inbreeding is increasing more strongly than new inbreeding in German Brown cows towards recent birth years, purging effects for longevity may appear in future generations. Our results indicate that a further decline in lifetime and lifetime production can be prevented if measures are implemented to slow down the increase in ancestral inbreeding and the further increase in new inbreeding through US Brown Swiss genes. At present, our data have some limitations in terms of the depth of the pedigree and the endpoint of the data (2001); future studies should possibly use censored data and survival analysis methods or multiple-trait animal models for survival to different parities and periods within the respective lactations to gain insights into the actual development of longevity traits.
A Cross-Cultural Study about Positive and Negative Emotions and Well-being in Infertile Women
Keywords: well-being, global/cognitive component, positive emotion, negative emotion, infertility

Cross-cultural studies have shown that different patterns of positive and negative emotional responses exist across different cultures. For instance, there is evidence to suggest that, in pleasant situations, Easterners have a dialectical emotionality and report more mixed emotions than Westerners, whereas there are no cultural differences in unpleasant and mixed situations [1]. There are also studies that indicate that the large emotional differences observed between Western and Asian cultures concern positive affect rather than negative affect [1,2]. Thus, Asian and Western cultures have more similarities in negative emotions than in positive emotions. In some previous research, it has been assumed that people in an individualistic-based Western culture adopt a contradictory unidirectionality perspective of happiness in which positive and negative emotions are regarded as opposite ends of a bipolar continuum. It is therefore assumed that people in Western cultures cannot feel both positive and negative emotions simultaneously and that you cannot be happy if you are experiencing unhappiness [3,4]. Such a view emphasizes the maximization of happiness and the minimization of unhappiness. In contrast, in Eastern cultures, the emphasis is on a dialectically experienced emotion, where there is a co-existence of positive and negative emotions [5]. In their research, Williams and Aaker [5] illustrate this notion by showing that collectivist-based Asian Americans prefer advertisements that evoke mixed emotions (e.g., happy and sad) more than individualistic-based European Americans.
Introduction
ACTA PSYCHOPATHOLOGICA ISSN 2469-6676

In a recent study of well-being and positive and negative emotions, Kormi-Nouri, Farahani, and Trost [6] compared Swedish university students, who were representative of an individualistic Western culture [7], and Iranian university students, who were representative of a collectivistic Asian culture [8,9]. These authors used a new well-being measurement that was designed by Diener et al. [10] to distinguish between cognitive/global well-being (flourishing) and emotional well-being (positive versus negative), but they found no difference between Swedish and Iranian participants in their flourishing scores. However, they found different emotional patterns in these two cultures. Whereas Swedish students showed more positive emotions, Iranian participants showed more negative emotions. Further, whereas positive affect and flourishing were positively correlated in the Swedish sample, they were negatively correlated in the Iranian sample. It was also found that, in the Swedish sample, the factor most predictive of flourishing was positive affect. However, in the Iranian sample, the most predictive factor was the balance affect (a combination of both positive and negative affects together). In line with previous research [11][12][13], it was concluded that there is a need to distinguish between the cognitive and emotional components of well-being, especially at the cultural level. Whereas culture has no impact on the cognitive component of subjective well-being, it can selectively influence different emotional components of subjective well-being. The present research was designed to follow up the study by Kormi-Nouri et al. [6] in the same two cultures (Sweden and Iran), but in a different population, namely infertile women, who are in an unpleasant situation and might experience a significant amount of stress and negative emotions.

Infertility is the inability of a sexually active, non-contraceptive couple to achieve pregnancy after at least one year of trying [14]. In primary infertility, pregnancy has never occurred. In secondary infertility, one or both members of the couple have previously conceived but are unable to conceive again after a full year of trying. The term infertility may be perceived clinically as a medical condition with no inclusion of psychological and social aspects [15]. However, there are studies indicating that infertility is experienced as a social and psychological phenomenon as well [16,17] and affects both social functioning [18,19] and psychological well-being [20]. Parenthood is a major transition in adult life for both men and women. Today, children are valued as a source of fulfillment and happiness [21]. The stress of the non-fulfilment of a wish for a child has been associated with emotional problems such as depression, anger, guilt, anxiety and feelings of worthlessness [20,22,23]. There are indeed findings reflecting a much higher prevalence and level of psychological distress in samples of infertility patients compared to normative samples [24,25]. Distress has been seen both as a cause of infertility [26,27] and as a consequence of infertility [21,28]. In a review of research about the consequences of infertility, Greil [29] reported that the majority of studies have demonstrated that infertile couples differ moderately from fertile norms on some indices such as depression and interpersonal sensitivity.

Infertile women, compared to infertile men, experience more depression and distress, feel less satisfied with life and happiness, often blame themselves for the problem, and seek more treatment for their infertility. Leiblum, Kemmann and Lane [30] reported that infertile women, compared to infertile men, had more depression before and after infertility treatment and rated IVF as being very stressful. Infertile women become the focus of infertility treatment regardless of what is causing the infertility [31]. For infertile women, pregnancy and motherhood are highly emphasized, and a traditional gender role is strongly identified [32]. Infertile women reported more psychological distress when they were under strong social pressure towards motherhood [32]. Childless women perceive their condition negatively depending on the negative attributions the public makes about their involuntary childlessness [33]. For infertile women, social sanctions and social control have been shown to be relevant to understanding the experience of involuntary childlessness [34].

Social and cultural factors such as norms, values and role expectations are considered important factors affecting the meaning of infertility among infertile individuals [15,20]. However, norms and standards can be valued differently in collectivistic and individualistic cultures [35,36]. In a collectivistic culture, social organizations such as family and community and the importance of the group are highly emphasized [37]. Family formation can increase the social and economic status of people in collectivistic cultures [38]. Voluntary childlessness is not socially acceptable in such a culture [19]. On the other hand, in an individualistic culture, there is a high value on the freedom and happiness of the individual, and the autonomous self is highly emphasized [39]. In individualistic cultures, family formation may be less valued, and choosing voluntary childlessness is more respected, as family formation is not as strong a norm as it is in collectivistic cultures [40].

Thus, stigma or being marginalized may have more negative social and psychological consequences especially for infertile women in collectivistic cultures, in which social life and family formation are the center of most human interactions [32,41]. Especially in collectivistic cultures, this may be partly due to still prevailing ideas that infertility is a woman's fault, or to the denial of the existence of male infertility [42]. This stigmatization can result in negative community effects (e.g., isolation and exclusion) and marriage effects for childless women in collectivistic cultures, whereas positive marriage effects have been reported in studies of infertile women in Western or individualistic cultures [43].

The aim of the present study was to examine Iranian and Swedish women's cognitive and emotional well-being while they underwent fertility treatment. The level of psychological distress caused by infertility may be affected by culturally shaped norms about family formation, leading to cultural differences in stigmatization. This may produce a cultural difference in well-being and its components, with infertile Iranian women being more stigmatized than Swedish women and displaying different emotional patterns of well-being.

The current study intended to investigate if there were any differences in the degree of positive and negative emotions in Iranian and Swedish women who were undergoing fertility treatment and how these emotions may have affected well-being differently based on culture. Based on previous research on cultural differences in the striving for emotional moderation or emotional maximization [44,6], a cultural difference would be expected in the present study, with the well-being of infertile Iranian women not being affected by negative emotions as in infertile Swedish women. On the contrary, research indicates that these cultural differences are often observed in "pleasant" situations and may not be present in "unpleasant" situations [1,2], which may result in small or no differences in emotions between Iranian and Swedish women who are undergoing fertility treatment.
Participants
Participants who were included in the study were women who had received some kind of infertility-related help at fertility clinics in Iran and Sweden. The study only involved women, as women are the main focus of fertility treatment. Because the Iranian culture is regarded as a collectivistic Asian culture, and Sweden is regarded as a highly individualistic Western culture, the use of an Iranian and a Swedish sample for comparisons on cultural dimensions such as collectivism and individualism was appropriate [6,7,45].
Sample characteristics: Demographic information for the two samples is presented in Table 1. It should be noted that seven participants from the Swedish sample were excluded, due to not having a diagnosis of primary infertility or having a previous history of hospitalization for psychiatric treatment. One case was excluded from the Iranian sample, due to not having a diagnosis of primary infertility.
The emotional distress of the subjects in the two samples was tested with the Hospital Anxiety and Depression Scale (HADS) [46]. The participants in both samples reported low sub-clinical levels of emotional distress (less than 11 on each subscale of depression and anxiety), but they scored higher than normative subjects in a non-clinical sample (mean (M) = 9.82, standard deviation (SD) = 5.98) [46]. The Iranian women exhibited distress scores that were 1.29 SD higher than the results from a non-clinical female sample, and the Swedish women exhibited scores that were 0.33 SD higher compared to normative data. As was expected, the Iranian participants reported higher levels of both depression (10.12 vs. 6.84) and anxiety (6.28 vs. 5.59) than the Swedish participants.
Criteria for inclusion and exclusion:
Inclusion criteria were a confirmed diagnosis of primary infertility (i.e., active attempts to achieve pregnancy without success, and no previous biological children) and the participation in fertility treatment at a fertility clinic. Because the study focused on the investigation of infertility-related emotional symptoms in childless women who wanted to have children, these inclusion criteria were established. There is research showing that this group experiences significant psychological distress compared to normative data [25].
Exclusion criteria for participation in the study were a level of education lower than high school and a previous history of hospitalization for psychiatric treatment. Because the study was based on data obtained from questionnaires, it was crucial that the participants fully understood all of the written instructions to correctly fill in the questionnaires. Subjects with a severe psychological disorder were excluded on the basis of risks of disruptive third variables regarding the relationship between infertility and emotional patterns.
Recruitment
In all, 212 participants were recruited from fertility clinics in Sweden and Iran. Staff at the clinics asked patients about their interest in participating in the study, and patients who were interested in participating received an information sheet with easily comprehensible information about the study. The information also stated that the patients' care at the clinic would not be affected by their choice to participate. Participation was anonymous and voluntarily. The participants received one movie ticket for responding to the survey.
Ethical considerations
The study was approved by ethical review boards both in Sweden and in Iran.
Data analysis
The homogeneity of variance between the Swedish and the Iranian data was analyzed by performing Levene's test. The assumption of homogeneity was supported on well-being, positive emotions, negative emotions and anxiety scales at a 0.05 significance level.
The assumption of homogeneity was not supported on distress or depression at a 0.05 significance level.
Normality was assessed using the Kolmogorov-Smirnov test of normality and by visual inspection of normal q-q plots and histograms, which showed a mostly negatively skewed distribution of data for both the Swedish and the Iranian samples. Normality transformations were performed by applying log transformations as well as square-root transformations to reversed scores [47]. Normality analyses were performed on both transformed and non-transformed data, in addition to Levene's test of homogeneity of variance regarding the Swedish and the Iranian data. Because the transformed and the non-transformed data did not noticeably differ, the results presented in this study are from the non-transformed data.
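The assumption checks described above can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the data are hypothetical, and the reversed-score transforms shown here (reverse so negative skew becomes positive, then apply log or square root) are one common way to implement the transformations named in the text.

```python
import numpy as np
from scipy import stats

def check_assumptions(swedish, iranian, alpha=0.05):
    """Levene's test, KS normality checks, and reversed-score transforms."""
    swedish = np.asarray(swedish, dtype=float)
    iranian = np.asarray(iranian, dtype=float)

    # Levene's test for homogeneity of variance between the two samples
    lev_stat, lev_p = stats.levene(swedish, iranian)

    # Kolmogorov-Smirnov test of normality on standardized scores
    def ks_normal(x):
        z = (x - x.mean()) / x.std(ddof=1)
        return stats.kstest(z, 'norm').pvalue

    # reversed-score transforms for negatively skewed data:
    # reversing makes the skew positive, which log/sqrt can then reduce
    def reversed_log(x):
        return np.log(x.max() + 1 - x)

    def reversed_sqrt(x):
        return np.sqrt(x.max() + 1 - x)

    return {
        'levene_p': lev_p,
        'homogeneous': lev_p > alpha,
        'ks_p_swedish': ks_normal(swedish),
        'ks_p_iranian': ks_normal(iranian),
        'transforms': (reversed_log, reversed_sqrt),
    }
```

Note that reversing flips the direction of the scale, so correlations computed on the transformed scores change sign, which is why comparing transformed and non-transformed analyses, as the authors did, is a sensible robustness check.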
Demographic questionnaire
The questionnaire provided information on background variables and included questions regarding ethnicity, relationship status, education, occupation, income, social support, infertility diagnosis, medical treatment and medical history.
Flourishing scale (FS)
The term flourishing refers to a subjective experience of life going well, with an emphasis on effective functioning in combination with feeling good. By adding the construct of flourishing to the measurement of well-being, Diener has enriched the concept of well-being to comprise more than mere emotions [48]. The FS [10] includes eight items that were designed to measure subjective well-being on the basis of different important areas of human life, such as relationships, engagement, competence, optimism, self-esteem, purpose and contribution to the well-being of others. The participants responded to eight different positively phrased statements on a 7-point Likert scale ("strongly disagree"; "disagree"; "slightly disagree"; "neither agree nor disagree"; "slightly agree"; "agree"; "strongly agree").
The FS strongly correlates with other scales on well-being and shows good psychometric characteristics. Cronbach's alpha of the scale is good at 0.87, and the temporal reliabilities are moderately good. A principal axis factor analysis showed that the scale is characterized by one single strong factor [10]. Swedish and Persian versions of the FS showed good reliability (α for the Swedish version: 0.87; α for the Persian version: 0.85) [6].
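The Cronbach's alpha values reported above are computed from the item covariance structure. A minimal sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), applied to a hypothetical respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

When items are perfectly correlated, alpha equals 1; values around 0.87, as reported for the FS, indicate high internal consistency.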
Scale of positive and negative experience (SPANE)
The SPANE [10] measures subjective emotions and consists of 12 items that are divided into scores for positive (six items) and negative (six items) emotions. Both the negative and the positive items are divided into three general items (e.g., negative, positive) and three specific items (e.g., sad, joyful). The inclusion of general items in the SPANE reduces the possibility of cultural bias due to cultural differences in specific expressions of emotions, which enables a better cultural comparison. The SPANE assesses a wide range of negative and positive experiences and emotions; the results converge well with other measures of emotions and well-being and are consistent across different cultures [10].
The 12 items are rated on a 5-point Likert scale, ranging from one ("very rarely or never") to five ("very often or always"), and the respondents are asked to base their answer on the amount of emotions experienced in the last month.
In the present study, the Cronbach's alpha of the scale was 0.87 (positive feelings) and 0.81 (negative feelings). Previous research also showed good reliability for the Swedish and Iranian versions of this scale, for both positive feeling (0.86 for the Iranian version and 0.82 for the Swedish version) and negative feeling (0.85 for the Iranian version and 0.84 for the Swedish version) [6].
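The scoring implied above can be sketched as follows. The item labels are those commonly listed for the published SPANE (Diener et al., 2010) rather than taken from this article, so treat them as an assumption; the subscale ranges follow directly from six items rated 1-5.

```python
# item labels as commonly published for the SPANE (assumed, not from this paper)
POSITIVE_ITEMS = ['positive', 'good', 'pleasant', 'happy', 'joyful', 'contented']
NEGATIVE_ITEMS = ['negative', 'bad', 'unpleasant', 'sad', 'afraid', 'angry']

def score_spane(responses):
    """responses: dict mapping item label -> rating on the 1-5 scale."""
    p = sum(responses[i] for i in POSITIVE_ITEMS)   # SPANE-P, range 6-30
    n = sum(responses[i] for i in NEGATIVE_ITEMS)   # SPANE-N, range 6-30
    # the balance affect analyzed in this study is the difference P - N
    return {'SPANE-P': p, 'SPANE-N': n, 'SPANE-B': p - n}   # balance: -24..24
```

The balance score (SPANE-B) is the quantity the study's "balance affect" analyses build on: it rises with positive affect and falls with negative affect.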
Results
The descriptive findings are shown in Table 2.
A multivariate analysis for national effect was conducted, and the results are shown in Table 3. As shown in Table 3, there were significant differences between dependent variables (positive, negative and balance affects and also flourishing) when the data were combined with respect to the two cultures. A univariate analysis of variance test was also conducted, and nationality was considered as an independent variable of interest. The results are shown in Table 4.
The results in Table 4 show that there was no significant difference between Iranian and Swedish participants in terms of flourishing and balance affect (the difference between positive and negative emotion), but there were significant differences in both positive and negative affects separately.
It should be mentioned that the invariance hypothesis was examined by Levene's test for the equality of variances rather than Box's M test on the flourishing scale and the SPANE scale, because some cells showed fewer than two nonsingular cell covariance matrices. The homogeneity of covariance across the two groups is shown in Table 5. In the case of flourishing, the zero-order correlations of positive, negative and balance affects for the Iranian and Swedish samples are shown in Table 6.
As shown in Table 6, the directions of the relationships between positive, negative, and balance affect with flourishing did not differ between the two groups, although the relationship was stronger for the Swedish sample.
In order to test whether the influence of positive affect, negative affect and balance affect on flourishing differed by culture, two stepwise regression models were tested. It was found that the most predictive affect with regard to flourishing was the balance affect, in both the Swedish sample (b = 0.69, SE = 0.09, p < 0.0001) and the Iranian sample (b = 0.31, SE = 0.13, p < 0.0001). For the two groups, an increase in balance affect, a decrease in negative affect and an increase in positive affect were associated with greater flourishing. Tables 7 and 8 show the stepwise regression analyses in the Iranian and Swedish samples respectively.
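A forward stepwise procedure of the kind used here can be sketched with synthetic data. Note the simplification: this version adds the predictor that most improves adjusted R-squared and stops when no addition helps, whereas the authors' software likely used p-value entry criteria; the data and variable names below are hypothetical.

```python
import numpy as np

def adjusted_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on X (intercept added)."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    p = Xc.shape[1] - 1                      # number of predictors
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def forward_stepwise(y, predictors):
    """predictors: dict name -> 1-D array. Greedily add the predictor
    giving the largest gain in adjusted R^2; stop when nothing helps."""
    selected, best = [], -np.inf
    while True:
        gains = {}
        for name in predictors:
            if name in selected:
                continue
            cols = [predictors[p] for p in selected + [name]]
            gains[name] = adjusted_r2(y, np.column_stack(cols))
        if not gains or max(gains.values()) <= best:
            break
        name = max(gains, key=gains.get)
        selected.append(name)
        best = gains[name]
    return selected, best
```

With flourishing driven mainly by the balance affect, as reported for both samples, such a procedure enters the balance variable first.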
As discussed by Stevens [49], multicollinearity can be a problem for multiple regression for at least three reasons: a) it limits the size of the multiple correlation, b) it confounds the results because of high intercorrelations between the independent variables, and c) it increases the variance of the regression coefficients and results in a more unstable regression equation. Hence, the multicollinearity issue was tested. The tolerance levels found for both samples ruled out the problem of multicollinearity for our results.
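Tolerance is one minus the R-squared obtained when each predictor is regressed on the remaining predictors (the variance inflation factor, VIF, is its reciprocal); values near 1 indicate little multicollinearity, while values near 0 flag a problem. A numpy sketch with made-up predictors:

```python
import numpy as np

def tolerance(predictors):
    """Tolerance = 1 - R^2 of each predictor regressed on the others."""
    names = list(predictors)
    X = np.column_stack([predictors[n] for n in names])
    out = {}
    for j, name in enumerate(names):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        Xc = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[name] = 1 - r2          # VIF would be 1 / out[name]
    return out
```

A common rule of thumb treats tolerance below 0.1 (VIF above 10) as a multicollinearity warning, which is the kind of screening this paragraph describes.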
Discussion
In line with the findings of Diener et al. [10], Kormi-Nouri et al. [6] found different patterns of cognitive and emotional well-being at the cultural level. Whereas there was no difference between Swedish university students (as members of an individualistic culture) and Iranian university students (as members of a collectivistic culture) concerning the cognitive component of well-being (flourishing), the subjects differed concerning the emotional components of well-being (i.e., positive and negative affect). Swedish students reported higher levels of positive emotions, and positive emotions predicted their flourishing to a larger extent. However, in Iranian students, there were more negative emotions, and the negative emotions more strongly predicted their flourishing. The main aim of this study was to investigate whether similar cultural patterns could also be observed in infertile women, thus extending earlier findings to subjects in more unpleasant and stressful situations.
The most important finding of the present study, in line with the findings of the Kormi-Nouri et al. [6] study, was that there was no difference between the two cultures concerning the cognitive components of well-being. That is, once again, Swedish and Iranian participants, who belong to individualistic and collectivistic cultures, respectively, were similar with respect to the general evaluation of their life satisfaction. This similarity was therefore not affected by being in an unpleasant and stressful situation. Infertility has usually been considered a powerful stressor that involves emotional changes [50], and infertile women experience strong psychological distress and negative feelings related to infertility [32]. There is also research showing that social pressure towards family formation and having a child [35,38] and stigmatization in infertility [32,41] are observed to a greater extent in collectivistic cultures than in individualistic cultures. However, the results of the present study showed that this unpleasant and stressful situation had no effect on flourishing at a cultural level. Interestingly, the scores of flourishing for the infertile women in the present study and for university students in the Kormi-Nouri et al. [6] study were comparable, and both studies showed high general life satisfaction in these two cultures.
However, like the Kormi-Nouri et al. [6] study, cultural differences were observable in the emotional components of well-being, namely positive and negative emotions. Infertile Swedish women reported higher levels of positive affect than infertile Iranian women. However, unlike the university student population in the Kormi-Nouri et al. [6] study, infertile Swedish women in this sample also reported higher levels of negative emotions than infertile Iranian women. Moreover, in both infertile groups, the same pattern of prediction was observed: the most predictive affect with regard to flourishing was the balance affect, and negative and positive effects were in the second and third places, respectively. Thus, it appears that, under a stressful and unpleasant situation like infertility, negative emotions act differently in these two cultures: they become more noticeable in the Swedish population than in the Iranian population.
A comparison between the two studies showed that while the balance affect was the most predictive variable for flourishing in the Iranian group in the Kormi-Nouri et al. [6] study, this was the case for both cultural groups in the present study. Additionally, whereas the results of the Swedish groups in terms of positive and negative emotions were comparable in these two studies, the results were different for Iranian groups: infertile Iranian women, compared to Iranian university students, scored higher in positive emotions but scored lower in negative emotions. That is, infertility as a stressful and negative situation can change the emotional pattern at a cultural level. First, in a stressful and unpleasant situation such as infertility, the balance affect (where both positive and negative emotions are taken into account) becomes an important factor even for individuals in an individualistic culture like Sweden, although this factor was still important for individuals in a collectivistic culture like Iran. Thus, it is important to take into consideration the combination of these two types of emotions especially under stressful and negative situations. Second, in contrast with our expectation, infertility was not associated with a more negative outcome in Iran than in Sweden. There are studies that show that infertility (especially for women) is more stressful and is more negatively experienced in a collectivistic culture than in an individualistic culture [38,41]. Thus, the expectation was to see more negative emotions and/or fewer positive emotions in infertile Iranian women than in infertile Swedish women. However, the between-studies comparison showed the opposite results. Compared to Iranian university students, infertile Iranian women experienced more positive emotions and, even more notably, less negative emotions. 
In addition, the within-study comparison that is presented in the present study showed that infertile Swedish women experienced more emotions (both positive and negative) compared to their counterparts in the Iranian group. This result can be explained by findings showing that social support (especially from family and friends) is more observable in a collectivistic culture than in an individualistic culture, and social support is a stronger predictor of well-being in collectivistic cultures than in individualistic cultures [51,52,2]. In an infertility situation, women might be in need of more emotional support, which can be provided more by family and friends and not by medical care services alone. Such women may express more negative feelings with significant others and receive more attention from close family members and friends, which may influence their well-being. More specifically, when infertile women are undergoing in vitro fertilization (IVF) treatment, they may experience additional stress and emotional disappointments because IVF treatments are highly technological and can be difficult both physically and emotionally [53,54]. To examine the differential effects of social support in the two cultures, we analyzed data related to social support that had been collected in the demographic information for the present study.
Although the results demonstrated no significant differences between Iranian and Swedish participants with respect to social support received from their family, friends and significant others, social support had different meanings for the Iranian and Swedish women. Social support (in general) significantly predicted distress (i.e., more social support was associated with less distress) in the Iranian subjects and explained a significant proportion of the variance in their well-being. However, in the Swedish subjects, social support did not significantly predict well-being or distress. The finding that social support is a buffer against distress and a mechanism that influences well-being in a collectivistic context indicates that social support can have a beneficial effect on a stressful condition such as infertility, and it therefore should be included in the treatment of infertility in such contexts. Further research is needed on how to implement such elements in psychological treatment; there are examples in the literature of psychological interventions that include training in validation for partners [55,56]. Similar interventions have been implemented in medical contexts, in which people who were suffering from medical conditions such as long-term pain showed more favorable emotional outcomes when their partners received training in validation [57]. Hence, training partners in emotional communication and possibly implementing communication training to an extended social network and even medical professionals might be helpful for infertile women and might influence their sense of social support.
Regardless of culture, the present study indicates that infertility is a stressor that results in psychological suffering. This finding emphasizes the importance of psychological interventions as complements to the conventional medical treatment of infertility in order to minimize infertility-related distress and prevent the discontinuation of treatment due to treatment-induced strain.
There were some limitations in the present study. First, the measurement of subjective well-being is linked to several difficulties. For example, the level of satisfaction with one's life in individualistic cultures is, to a greater extent, determined by the individual's emotions and moods, while in collectivistic cultures, the level of satisfaction with one's life is determined by the individual's social life [58,59]. Since most well-being measures are designed in individualistic Western societies, the meaning of well-being in other cultures may not be properly captured [60]. Additionally, there may be technical biases due to culture-specific social norms about a condition or circumstance, such as infertility, that may affect the individual's responses on questions regarding this condition and the individual's well-being [7,14]. These issues were considered in the process of choosing instruments to measure well-being and emotions in the present study.
Second, there is a need for a better control group for infertile participants other than the university participants included in the Kormi-Nouri et al. [6] study. Although in the aforementioned study, the majority of students were females, and no significant gender differences were found, there were still age differences between these two studies, and the subjects in the student population were not asked about their fertility status. Thus, the comparison of these two studies should be considered with caution. In spite of these limitations, this work provides an important message that is worth developing further: infertility is indeed related to psychological distress, and the findings help to identify approaches that we can and should take in terms of supporting women who are suffering from the negative emotionality related to infertility.
Restricted cement augmentation in unstable geriatric midthoracic fractures treated by long-segmental posterior stabilization leads to a comparable construct stability
The goal of this study is to compare the construct stability of long segmental dorsal stabilization in unstable midthoracic osteoporotic fractures with complete pedicle screw cement augmentation (ComPSCA) versus restricted pedicle screw cement augmentation (ResPSCA) of the most cranial and caudal pedicle screws under cyclic loading. Twelve fresh frozen human cadaveric specimens (Th4–Th10) from individuals aged 65 years and older were tested in a biomechanical cadaver study. All specimens received a DEXA scan and computer tomography (CT) scan prior to testing. All specimens were matched into pairs. These pairs were randomized into the ComPSCA group and ResPSCA group. An unstable Th7 fracture was simulated. Periodic bending in flexion direction with a torque of 2.5 Nm and 25,000 cycles was applied. Markers were applied to the vertebral bodies to measure segmental movement. After testing, a CT scan of all specimens was performed. The mean age of the specimens was 87.8 years (range 74–101). The mean T-score was − 3.6 (range − 1.2 to − 5.3). Implant failure was visible in three specimens, two of the ComPSCA group and one of the ResPSCA group, affecting only one pedicle screw in each case. Slightly higher segmental movement could be evaluated in these three specimens. No further statistically significant differences were observed between the study groups. The construct stability under cyclic loading in flexion direction of long segmental posterior stabilization of an unstable osteoporotic midthoracic fracture using ResPSCA seems to be comparable to ComPSCA.
Table 1. Comparison groups with donor characteristics. ResPSCA: cement augmentation of pedicle screws at the cranial (Th5) and caudal (Th9) levels only; ComPSCA: cement augmentation of all pedicle screws; Mb Bech: Morbus Bechterew; Th6 Fract: consolidated fracture of Th6; SD: standard deviation. *For a pairwise comparison between the groups, specimen pairs were assigned the same specimen number. #Statistical evaluation of mean value differences between the groups, with p values < 0.05 stating a significant difference.

www.nature.com/scientificreports/

The specimens were compressed axially and eccentrically by 20 mm. In a subsequent CT evaluation, no screw loosening and no damage to the spinal column structure was found 5 . All specimens were wrapped in plastic foil, cooled and shock frozen at − 80 °C to minimize ice crystal growth 6 .
For the current study, the specimens were then gently thawed. The temperature was gradually increased, for at least 2 days, to − 20 °C, then for one day to − 2 °C, then transitioned to room temperature within 16 h prior to testing. This is intended to reduce the temperature gradient within the specimen during thawing and to protect the tissues.
Experimental procedure. The non-instrumented vertebrae Th4 and Th10 were embedded with a polyurethane casting resin (RenCast; Huntsman Advanced Materials, Basel, Switzerland). Additional screws were inserted into the vertebral bodies to improve the bond between the bone and embedding. The vertebral endplate of Th7 was positioned horizontally to ensure an upright alignment of the spine.
The specimens were clamped in a test stand developed in-house (Fig. 1a). The major component is a swivel arm that is driven by a motor, generating a defined torque. The specimens were fixed with the lower embedding on a slide, while the upper embedding was connected to the swivel arm. The rotation axis of the swivel arm was set to the center of the fracture gap in Th7 (Fig. 1b). In order to generate torque as straight as possible into the spinal column, the specimen was not fully constrained (Fig. 1a). The slide allowed lateral movements while forward and backward movements were suppressed. The upper embedding was connected via a bearing rod to a linear bearing in the swivel arm. This enabled rotation and axial compensatory movements of the spinal column. The linear bearing was, in turn, pivotally mounted in the swivel arm. Thereby, mainly torque in the flexion/extension direction was introduced, whereas pure transverse forces were minimized.
Markers with speckle patterns were pinned to the instrumented vertebral bodies (Fig. 1). The pins were anchored in the vertebral bodies. Care was taken to ensure that they were placed far away from the screw tips and the surrounding cement; the pins were introduced after carefully evaluating the CT scan taken after the compressive testing. Markers were also attached to the swivel arm, as well as at an independent reference point.
The specimens were periodically bent in the direction of flexion. A torque of 2.5 Nm was applied, as recommended in osteoporotic thoracic spines 7 . A total of 25,000 load cycles were applied, which corresponds to the expected motion within the first 3-4 weeks after surgery in a geriatric patient population 8 . Tests were carried out at a frequency of 1.2 Hz. During a load cycle, the load was applied in the first half and released in the second half. The specimens were kept moist throughout the testing period, being wrapped in moist gauzes that were regularly moistened 7 . The rotation of the swivel arm was measured with an angle sensor (Incremental encoder 5821, Fritz Kübler GmbH, Germany). Furthermore, the positions of the markers were recorded with a digital image correlation system with a three-camera setup (Q400, LIMESS Messtechnik und Software GmbH, Krefeld, Germany) 9 . These measurements were taken at the beginning (10 cycles), every 500 cycles and at the end (24,990 cycles) for continuous monitoring. Two cycles were recorded with a frame rate of 15 Hz for each individual measurement time-point.
Evaluation.
After cyclic loading, CT was performed in order to detect any signs of implant failure, screw loosening or subsequent vertebral fractures. These were evaluated independently by two of the authors, one spine surgeon (U.J.S.) and one radiologist (M.R.).
As the markers are anchored into vertebral bodies, it is assumed that they represent the movement of the vertebral bodies during loading 9 . The marker positions measured with the digital image correlation system were correlated and exported into a coordinate system corresponding to a person standing upright. An evaluation routine was developed to calculate the relative movement between two markers. Since torque was introduced, the evaluation was limited to the relative rotation of the vertebral bodies. In order to calculate these rotational components about a respective axis of the coordinate system, one vector defined by two speckle pattern points on each marker, respectively, was regarded. When selecting these points, it was ensured that the resulting vector was preferably perpendicular to the respective rotation axis prior to loading. For all three axes of the coordinate system, the projection of the respective vector into the plane perpendicular to the respective axis was regarded.
The angles between two vector projections of different markers were calculated for each time step. Thus, the relative rotation about the regarded axis could be calculated for any pair of markers. The calculation was done using MATLAB (MathWorks and Simulink, USA). The relative rotations between the swivel arm and the reference marker were compared with the data from the angle sensor to check the continuity of the measurements (Supplement). The relative rotations between the adjacent vertebrae Th5/Th6 and Th8/Th9 and between the vertebra pairs Th5/Th9 and Th6/Th8 were evaluated. For each time interval of a series of measurements, the peak-to-peak amplitude and the zero offset to the rest position were determined. In the course of the measurement, the part of the movement characterized by the peak-to-peak amplitude was regarded as reversible. A non-reversible part was indicated by the difference between the zero offset and the rest position of the first time interval. This was subsequently defined as permanent deflection (Supplement). The determined permanent deflections and peak-to-peak amplitudes were considered separately and examined in the course of the 25,000 cycles.
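The evaluation steps above — projecting a marker vector into the plane perpendicular to the regarded axis, taking the angle between two such projections, and splitting each measurement interval into a reversible (peak-to-peak) and a non-reversible (offset) part — can be sketched as follows. The original routine was written in MATLAB; this Python version is only an illustrative reimplementation, and the function names are our own.

```python
import numpy as np

def relative_rotation_deg(v_ref, v_mov, axis):
    """Angle (degrees) between the projections of two marker vectors
    onto the plane perpendicular to the given rotation axis."""
    axis = axis / np.linalg.norm(axis)
    # Project both vectors into the plane perpendicular to the axis
    p_ref = v_ref - np.dot(v_ref, axis) * axis
    p_mov = v_mov - np.dot(v_mov, axis) * axis
    norm = np.linalg.norm(p_ref) * np.linalg.norm(p_mov)
    # Signed angle about the axis between the projected vectors
    cos_a = np.dot(p_ref, p_mov) / norm
    sin_a = np.dot(np.cross(p_ref, p_mov), axis) / norm
    return np.degrees(np.arctan2(sin_a, cos_a))

def peak_to_peak_and_offset(angles, rest_angle):
    """Peak-to-peak amplitude (reversible part) and zero offset relative
    to the rest position (non-reversible part) for one time interval."""
    amplitude = np.max(angles) - np.min(angles)
    offset = np.min(angles) - rest_angle  # unloaded half of the cycle
    return amplitude, offset
```

The permanent deflection of a later interval would then be its offset minus the offset of the first interval, in line with the definition given in the Supplement.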
The statistical analysis was performed with SPSS 24.0 (IBM, USA). The Shapiro-Wilk test was used to verify normal distribution. Mean differences were checked with the Student t-test for normally distributed data pairs, otherwise the Mann-Whitney test was used. A value of p < 0.05 was considered significant.
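The test-selection rule described above (normality check first, then the parametric or non-parametric test) was run in SPSS; a minimal Python/SciPy sketch of the same logic is given below. It is illustrative only, and the function name is our own.

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Shapiro-Wilk normality check on both samples; Student t-test if
    both appear normally distributed, Mann-Whitney U test otherwise."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        _, p = stats.ttest_ind(a, b)
        name = "t-test"
    else:
        _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        name = "Mann-Whitney"
    # A value of p < alpha is considered significant
    return name, p, p < alpha
```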
Results
Evaluation of the CT images showed loosening of pedicle screws in three specimens, including one specimen of the study group (Fig. 2a-c). Thereby, a cut-out of the right pedicle screw in Th8 and some signs of loosening of the right augmented pedicle screw in Th9 of specimen ResPSCA 1 were visible (Fig. 2b). In the control group, screw loosening was observed in two specimens: cut-outs of the right pedicle screws in Th9 were seen in ComPSCA 2 and ComPSCA 5 (Fig. 2c).
In all specimens, mainly relative rotations around the transverse axis were observed, on which the assessment focuses. The marker in Th5 of the specimen ComPSCA 2 protruded slightly into the disc. Based on a potential effect on the measurements, this marker was not taken into account in the assessment. During loading, no pronounced periodic movement between the adjacent vertebral bodies was observed. In the course of the measurements over all specimens, no abrupt changes were detected, which would have indicated premature failure. For this reason, only cycles 10, 5,000, 10,000, 15,000, 20,000 and 24,990 were evaluated.
Figure 2. CT scans after cyclic loading, illustrating one of five cases with ResPSCA without any signs of screw loosening or implant failure (a); one specimen with a cut-out of the right pedicle screw in Th8 (big arrow) and signs of loosening of the right cement-augmented pedicle screw in Th9 (b, small arrows). In (c), one of two specimens is shown with a cut-out of the right cement-augmented pedicle screw of Th9 after ComPSCA (arrow).

In Fig. 3, box plots of the peak-to-peak amplitudes between Th6 and Th8, at the beginning and end of the measurements, are shown for both study groups. Statistically, there were no differences in the mean values of the peak-to-peak amplitudes between the beginning and the end of testing within the groups (p = 0.67 for ResPSCA, p = 0.83 for ComPSCA) or between the groups (p = 0.73 for cycle 10, p = 0.53 for cycle 24,990).

Figure 4 compares the mean values of the calculated permanent deflections and peak-to-peak amplitudes for both test groups with complete (ComPSCA) and restricted cement augmentation (ResPSCA). For each of the vertebral body pairs, the course of the measured values between the two comparison groups appeared to be qualitatively and quantitatively similar. This finding is supported by the fact that, for each data pair between the ResPSCA and ComPSCA groups, the mean values were examined and no statistically significant differences were found (Table 2).

Figure 5 compares the two test groups. In each case, the permanent deflections and peak-to-peak amplitudes of the comparison pairs Th5/Th6, Th6/Th8 and Th8/Th9 are considered. In most cases, the permanent deflections of Th5/Th6 and Th8/Th9 were small and comparatively smaller than those between Th6/Th8, with the exception of ResPSCA 1, ComPSCA 2 and ComPSCA 5, respectively. In all three of those specimens, implant failure was visible.
The peak-to-peak amplitudes of Th5/Th6 and Th8/Th9 were significantly smaller than those of Th6/Th8, but the differences were less obvious in the specimens ResPSCA 1, ResPSCA 6, ComPSCA 4, and ComPSCA 5.
Discussion
The most important finding of this article is the comparable construct stability between ComPSCA and ResPSCA with two cases of cut-outs in the ComPSCA group and only one in the ResPSCA group under cyclic testing, despite the fact that biomechanical testing under axial loading was done previously in all specimens. The dynamic testing results confirm these three cases of implant failure. Hereby, the orientation of two vertebrae has changed permanently during the course of cyclic loading, which can be interpreted as a sign of screw loosening. However, based on the fact that only a one-sided screw cut-out was seen in all three cases, with signs of implant failure and macroscopically uneventful contralateral screw positioning, no higher grades of instability can be expected. This is in accordance with our results, with consistent but only subtle differences in segmental movement between the three specimens with implant failure in comparison to the others.
Otherwise, the peak-to-peak amplitudes of movement were in accordance with the expected results. Minimal to low peak-to-peak amplitudes were recorded in the stabilized healthy segments Th5/Th6 and Th8/Th9. In contrast, moderate to high peak-to-peak amplitudes were seen between Th6/Th8, which represents the stabilized unstable fracture region. Generally, the ranges of peak-to-peak amplitudes between the specimens were large, without any significant differences between the study groups. This is not very surprising considering the rather small study groups, the big range of donor ages, and the morphological differences between the spines. However, both study groups were matched regarding patient age, bone density and gender in order to minimize the differences between the groups. Interestingly, two of the specimens with implant failure were highly osteoporotic, with T-scores of less than −4. The third specimen had spondylitis ankylosans. Several authors recommend long segmental stabilization with pedicle screw implantation three levels above and below the fracture in patients with spondylitis ankylosans 10,11 . This can partially explain the implant failure. However, all implant failures happened to be below the fracture. This is somewhat surprising, as in daily practice screw cut-outs seem to occur more frequently in the instrumented vertebral bodies above the fracture, in correspondence with the data reported by Banno et al. 12 . In contrast, other studies observed higher rates of screw loosening at the lowest level of instrumentation 13 . Generally, the huge majority of implant failures occur at the lowest or highest level of instrumentation 12,13 . Additionally, all cut-outs were one-sided. This can be explained by the fact that the cascade of implant failure had just begun. This might end in cut-outs of both pedicle screws, leading to higher instabilities in the further course.
Figure 4. Comparison of the mean values of the test groups with complete (ComPSCA) and restricted cement augmentation (ResPSCA) with regard to permanent deflection (above) and peak-to-peak amplitude (below). No statistically significant differences were found when comparing any pair of data for the ResPSCA and ComPSCA groups. To make the error bars more visible, the dots have been slightly shifted; however, the measured values refer to the cycle indicated on the abscissa.

Generally, specimens tended to adopt a proceeding kyphotic malposition during the course of testing due to cyclic loading of 25,000 cycles predominantly in the flexion direction. The permanent deflection was expected based on cyclic loading without the protective interactions of the muscles and rib cage, due to permanent strain on the connective tissues. Generally, 25,000 cycles represent the average load during all-day activities over a period of 3-4 weeks for elderly people 8 . This number of cycles was chosen to simulate this very important period of bony healing. In correspondence to that, increased in vivo stiffness has been observed to begin 3 weeks after osteotomy in an osteoporotic sheep model 14 . In addition, fatigue tests should be conducted in follow-up studies to evaluate the long-term behavior of the stabilization. Thus, the load acting on the material can be supposed to be higher as compared to clinical practice. The selected bending moment of 2.5 Nm is based on a literature recommendation for range of motion tests on osteoporotic thoracic spines as a maximal load in order not to destroy tissues 7 . In vivo tests of the more heavily loaded lumbar spine measured 3.5 (± 1.5) Nm when bending the upper body and 4.2 (± 1.7) Nm when lifting a weight from the floor 15 . On the one hand, significantly lower loads are assumed in the area of the middle thoracic spine.
On the other hand, upper body flexion and weight lifting are extreme loads that should be avoided postoperatively. By performing cyclic testing over an estimated period of 3 to 4 weeks and applying high cyclic loads, a model was chosen that simulates an extreme situation without any stabilizing effect that would be expected in living patients as a part of the fracture healing process. When evaluating the relative movement between the individual vertebral bodies, indications of screw loosening were found, but there were no clear patterns. An indirect measuring method was chosen, which allows for continuous observation. In order to measure screw movement in the vertebral body directly, markers would have to be attached to the screw tip or shaft. This would require the removal of bone material, which would have a lasting effect on screw retention. This was not the intention of the study, but should be investigated in subsequent studies.
Table 2. Comparison of the mean values of the test groups with complete (ComPSCA) and restricted cement augmentation (ResPSCA) with regard to permanent deflection and peak-to-peak amplitude. ResPSCA: pedicle screws at the most cranial (Th5) and most caudal (Th9) levels are cement augmented; ComPSCA: all pedicle screws are cement augmented. *Measured values given as mean value ± standard deviation (in degrees). #Statistical analysis performed, stating a significant difference between mean values for compared groups at p < 0.05.

However, the study has several limitations. First of all, all specimens were previously tested in a load-to-failure manner by axial compression. Thereby, implant failure, particularly screw cut-out or screw loosening, could be excluded by CT examination after testing 5 . A large part of the deformation was elastically stored in the rod system through the fracture gap. Despite the fact that it is not possible to definitely exclude minor lesions, only a minority of specimens showed signs of implant failure. Generally, all specimens had a similar load history and were appropriate for a comparative study. Secondly, another freezing and thawing cycle can influence the mechanical properties of soft tissue negatively 16 . However, the influence on the mechanical properties of bone tissue seems not to be relevant 17,18 . Furthermore, only minor effects on the range of motion of functional spine units have been observed 19 . In a further study, several freezing and thawing cycles were examined, and no significant alterations in the range of motion could be seen after the initial freezing during further freeze-thaw cycles 20 . In addition, the samples were frozen in a tissue-friendly manner 6 . As the samples have the same storage history, comparative studies are permissible. In addition, the study focuses on the screw anchorage in the bone. The relevant vertebral bodies are rigidly instrumented.
The freely movable segments, on which alterations of the intervertebral discs and ligaments would have a greater impact, were not the focus of this study. For the reasons mentioned above, a comparative study with the specimens is permissible, even though they have already undergone initial testing. Since all specimens were always treated in the same way, comparability is ensured. In addition, the usual recommendations were followed for storage, test duration, moisture retention, load rates, etc. 7,21,22 . Additionally, the cyclic loading was performed in flexion only. In contrast, human spines are subjected to multiple different loadings in different directions, all of which contribute to the development of implant failure. Thereby, the midthoracic spine is particularly susceptible under flexion, with lower flexion strength than compressive strength 23 . Furthermore, it was not possible to generate pure torque only. However, the test set-up applies a uniform torque in the direction of flexion in a reproducible manner, which ensures comparability. Additionally, our sample size was small (six spines in each group) and underpowered. A post-hoc analysis has shown that at least 80 specimens per group would be necessary to reach a power of 80%. However, compared with related publications, our study had a similar number of specimens per group [24][25][26][27] and complies with the recommendations for in vitro testing with human donor material 20 . Thereby, the analysis of group differences can be misleading based on the low power. Nevertheless, there were two implant failures visible in the ComPSCA group and only one in the ResPSCA group. Additionally, matching of the groups was performed according to the T-score, age, and gender of the specimens. Next, the anatomic model represents a simplified model that does not consider the rib cage (leading to a decrease in stiffness), the muscles, and the physiological body weight acting on the midthoracic cage 28,29 .
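The post-hoc power statement above can be reproduced in outline with a standard two-sided, two-sample t-test power calculation. The effect size below (Cohen's d = 0.45) is an assumption chosen for illustration — the actual post-hoc value is not reported in this passage — so the resulting numbers are indicative only.

```python
import numpy as np
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test for effect size d
    (Cohen's d) with n_per_group specimens in each group."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Tail probabilities of the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

d = 0.45  # assumed effect size, for illustration only

# Achieved power with the study's six specimens per group
achieved = power_two_sample_t(d, 6)

# Smallest group size reaching 80% power
n_required = next(n for n in range(2, 500) if power_two_sample_t(d, n) >= 0.8)
```

With this assumed effect size the required group size lands near 80 specimens per group, while the achieved power at six specimens per group stays well below 20%, which is consistent with the underpowered-sample caveat above.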
Last but not least, we did not include a non-cemented group in order to prove that cement-augmented pedicle screws are superior in our testing scenario. This was done based on the moderate to good biomechanical evidence of the superiority of cement-augmented screw hold in osteoporotic bone 30,31 . Based on this evidence and the clinical experience of the last decade, the authors hardly ever perform posterior stabilization without cement-augmented pedicle screws in osteoporotic vertebral body fractures. Generally, only clinical studies are conclusive for the evaluation of screw loosening in everyday life. Therefore, clinical studies are warranted to compare implant failure and reduction loss between restricted and complete pedicle screw augmentation in long segmental posterior stabilization.
Conclusion
No statistically significant differences in both implant failure rate and peak-to-peak amplitudes of movement between the instrumented vertebral bodies could be seen between the ResPSCA and ComPSCA groups under cyclic loading. Thus, the construct stability of long segmental posterior stabilization of an unstable osteoporotic midthoracic fracture using ResPSCA seems to be comparable to ComPSCA.
Emerging Roles of Immune Cells in Postoperative Cognitive Dysfunction
Postoperative cognitive dysfunction (POCD), a long-lasting cognitive decline after surgery, is currently a major clinical problem with no clear pathophysiological mechanism or effective therapy. Accumulating evidence suggests that neuroinflammation plays a critical role in POCD. After surgery, alarmins are leaked from the injury sites and proinflammatory cytokines are increased in the peripheral circulation. Neurons in the hippocampus, which is responsible for learning and memory, can be damaged by cytokines transmitted to the brain parenchyma. Microglia, bone marrow-derived macrophages, mast cells, and T cells in the central nervous system (CNS) can be activated to secrete more cytokines, further aggravating neuroinflammation after surgery. Conversely, blocking the inflammation network between these immune cells and related cytokines alleviates POCD in experimental animals. Thus, a deeper understanding of the roles of immune cells and the crosstalk between them in POCD may uncover promising therapeutic targets for POCD treatment and prevention. Here, we reviewed several major immune cells and discussed their functional roles in POCD.
Introduction
Postoperative cognitive dysfunction (POCD) refers to a long-lasting cognitive decline after surgery, characterized by impaired concentration, memory, and learning, which can be detected by a battery of neuropsychological tests [1]. The incidence of POCD is 7 to 26% after major noncardiac surgery and even higher in patients older than 60 years [1,2]. POCD not only diminishes the patient's quality of life and imposes a serious burden on healthcare costs but also increases mortality [3]. Although several risk factors for POCD have been identified, the pathophysiological mechanism underlying POCD remains unclear and no effective therapies have been developed to date.
A large number of studies conducted in patients have revealed that POCD is associated with elevated levels of plasma inflammatory cytokines, including tumor necrosis factor-(TNF-) α and interleukin-(IL-) 6 [4][5][6][7]. IL-1β and IL-6 levels in the cerebrospinal fluid (CSF) of patients with POCD are higher than those of patients with normal cognitive function after surgery [8,9]. The learning and memory function was also impaired by surgery and anesthesia in experimental animals, accompanied by the upregulation of proinflammatory cytokine levels in both the blood and the brain [10][11][12][13]. Neuroinflammation, particularly in the hippocampus, has been proved to be one of the main causes of POCD [14][15][16][17]. The activation of microglia and other blood-derived immune cells orchestrates neuroinflammation and subsequent neuronal damage [14][15][16][17]. In this review, we discuss the main types of immune cells involved in POCD and their possible roles. We describe their functions in neuroinflammation, put forth a possible mechanism of their involvement in POCD, and point out the fields that need further exploration.
Microglia.
Microglia arise during primitive hematopoiesis: primitive macrophage progenitors in the yolk sac colonize the CNS and differentiate into mature microglia, confined behind the blood-brain barrier (BBB) [19]. Unlike other tissue macrophages, such as Kupffer cells in the hepatic sinusoids, which need to be renewed from bone marrow progenitors, microglia are capable of local expansion and maintenance throughout life without reconstitution from the bone marrow [20].
In healthy brains, microglia are ramified and in a resting state, monitoring the local microenvironment and detecting CNS damage [21]. Danger signals, including pathogen invasion, injury, and abnormal protein accumulation, can trigger microglia transformation into an activated amoeboid shape. Activated microglia are both neuroprotective and neurotoxic. Studies in adult and neonatal hypoxic-ischemic injury models have shown that a complete blockade of microglial activity exacerbates brain damage [22,23]. However, activated microglia can also produce excessive proinflammatory cytokines, leading to neuronal dysfunction and death. Several neurodegenerative diseases, including Alzheimer's, Huntington's, and Parkinson's diseases, have been proved to be associated with the hyperactivation of microglia [24][25][26]. As a specific type of macrophages, activated microglia can have one of the two different phenotypes: classically activated M1 and alternatively activated M2 microglia. M1 microglia promote inflammation by secreting proinflammatory cytokines such as IL-1α, IL-1β, and TNF. M2 microglia elicit neuroprotective effects through the release of vascular endothelial growth factor and extracellular matrix proteins [27]. In Alzheimer's disease (AD), amyloid β (Aβ) sensitizes microglia to subsequent cytokine stimulation and M1 activation [28], whereas the induction of the M2 polarization of microglia by drugs or adeno-associated viral vectors can reduce Aβ deposition and relieve AD symptoms [29,30]. In other neurological diseases, such as Parkinson's disease (PD), chronic cerebral hypoperfusion, traumatic brain injury, and hepatic encephalopathy, the priming of microglial polarization towards the M1 phenotype plays a pivotal role in neuroinflammation [31][32][33][34].
After peripheral surgery, an immune challenge is transmitted to the brain via multiple humoral and neural routes. The integrity of the BBB can be disrupted by a systemic inflammatory response or anesthesia during and after surgery [11,35,36]. Adenosine triphosphate (ATP), alarmins, and cytokines, which are leaked from an injury site or increase in response to systemic inflammation, enter the brain and activate microglia [11,[36][37][38]. Activated microglia may impair learning and memory via the release of proinflammatory cytokines, among which IL-1β and TNF-α are particularly important [38,39]. Mild repeated stress or systemic endotoxin challenge can trigger microglia to secrete IL-1β and TNF-α [38,[40][41][42]. After surgery, aged rats and mice demonstrated significant deficits in memory and learning, concurrent with the activation of microglia and increased expression of TNF-α and IL-1β in the hippocampus [43,44]. Preemptively depleting microglia reduced surgery-induced hippocampal inflammatory cytokine secretion and attenuated the cognitive decline in mice [14]. IL-1β and TNF-α can cause neuronal cell death, reduction of acetylcholine release, and attenuation of glutamatergic transmission, resulting in learning and memory deficits [38,[40][41][42]. Neuroinflammation and POCD were mitigated in IL-1R knockout mice or mice pretreated with an IL-1 receptor (IL-1R) antagonist compared with control mice [10]. Furthermore, microglia can be activated by peripheral TNF-α [38]. Preemptive treatment of anti-TNF antibody is able to limit the release of IL-1 in the hippocampus and prevent cognitive decline in a mouse model of POCD [11]. Therefore, microglia may respond to peripheral TNF-α, secrete more TNF-α and IL-1β in the hippocampus, and amplify neuroinflammation in POCD. 
Additionally, a study also reported reduced infiltration of bone marrow-derived monocytes into the hippocampus after microglial depletion, suggesting crosstalk between microglia and bone marrowderived macrophages (BMDMs) in POCD [14].
No studies to date have reported the polarization of microglia in POCD. However, the main cytokines secreted by activated microglia in POCD are IL-1 and TNF-α [14,43,44], suggesting the predominance of the M1 state of microglia in POCD. Furthermore, the M2 response of microglia was impaired after brain ischemia in aged mice [45]. Because older patients are particularly susceptible to POCD, we speculate that the M1 phenotype of microglia plays a central role in neuroinflammation in POCD. Pharmacological approaches that have been successfully used to modulate microglia polarization in other neurological diseases may hold promise for developing POCD treatments [32,34].
In a synthesis of the existing microglia and POCD research, we can draw a picture of how microglia may orchestrate postoperative neuroinflammation in POCD. As the resident immune cells of the brain parenchyma, microglia are activated by proteins and other signals leaked from the injury sites. The cytokines secreted from the microglia can directly damage neurons and also recruit more immune cells from the blood penetrating into the brain parenchyma, further accelerating neuronal injury.
Bone Marrow-Derived Macrophages.
Macrophages are present in virtually all tissues. They differentiate from circulating peripheral-blood mononuclear cells, which migrate into tissues constitutively or in response to inflammation [46]. In a healthy CNS, BMDMs are divided into three classes according to their location: choroid plexus, meningeal, and perivascular macrophages [20]. These macrophages are exterior to the brain parenchyma, and their population homeostasis is achieved by replacement from blood-born monocytes. In disease states, BMDMs respond to inflammation and migrate into the brain parenchyma from the circulation.
BMDMs are a major component of the inflammatory immune response to CNS diseases. Similar to microglia, BMDMs have a proinflammatory M1 phenotype and an anti-inflammatory M2 phenotype. M2 macrophages can be beneficial for the healing of sterilized wounds, clearing necrotic debris or abnormal proteins. In a spinal cord injury model, macrophages played an anti-inflammatory role during recovery [47]. Furthermore, numerous studies have suggested that BMDMs can infiltrate the brain, reduce the Aβ plaque burden, and alleviate the cognitive decline in AD [48,49]. In a clinical study, transplantation of autologous M2 macrophages significantly improved motor and cognitive activities in patients with severe cerebral palsy [50]. Other reports, however, have indicated that macrophages mainly play a detrimental role in CNS pathology. Penetration of macrophages into the brain impaired spatial learning and memory after traumatic brain injury in mice [51,52]. In a model of intracerebral hemorrhage, mice exhibited improved motor function after the depletion of inflammatory monocytes [53]. In addition, circulating monocytes or macrophages have been implicated in the exacerbation and relapses of experimental autoimmune encephalitis (EAE) in mice [54,55].
BMDMs were found in the hippocampi of mice with POCD [56]. Depletion of BMDMs attenuated surgery-induced increases of the IL-6 levels in serum and the hippocampus, reduced hippocampal macrophage infiltration, and prevented surgery-induced memory dysfunction [15]. Inhibiting the proinflammatory signaling pathway in BMDMs or preserving the integrity of the BBB can also reduce the infiltration of BMDMs in the hippocampus and prevent POCD [56]. Furthermore, mice deficient in IL-6 exhibited less IL-1β and TNF-α expression in the hippocampus and better working memory [57]. These findings indicate that, with the BBB integrity disrupted, BMDMs infiltrate into the hippocampus and secrete proinflammatory cytokines, exacerbating neuroinflammation in POCD.
The depletion of microglia has also been shown to prevent BMDMs infiltrating the hippocampus without impairing the capacity of monocytes to penetrate into the brain [14]. Monocyte chemotactic protein-1 (MCP-1), also known as CCL2, is a major chemoattractant to recruit BMDMs [58]. Postoperative hippocampal MCP-1 levels were reduced by the depletion of microglia [14] but not BMDMs [15], indicating that microglia are the major source of secreted MCP-1. Taken together, these studies show that microglia attract BMDMs into the brain via MCP-1 secretion after injury.
High-mobility group box 1 protein (HMGB1), a ubiquitous nucleosomal protein, is passively released into the circulation from damaged necrotic cells, and circulating HMGB1 levels increase after surgery [36,59]. Blocking the HMGB1 function with a monoclonal antibody reduced the hippocampal expression of MCP-1 and postoperative memory decline in mice [60]. Furthermore, the depletion of BMDMs prevented an HMGB1-mediated memory decline after surgery [60]. Together with the previous studies, these results indicate that HMGB1 may stimulate hippocampal microglia to secrete MCP-1, enabling monocyte recruitment. Similar to HMGB1, many cytokines can simulate microglia. In a model of peripheral organ inflammation, microglia were stimulated by peripheral TNF-α and attracted circulating monocytes into the brain [61]. Moreover, plasma TNF-α levels were upregulated early after aseptic surgery, and a blockade of TNF-α prevented POCD in mice [11]. However, whether the TNF-α/microglia/BMDM pathway is essential in the pathogenesis of POCD is still unknown.
In summary, the activation of microglia and BMDM recruitment play important roles in POCD. However, the relationship between microglia and BMDMs in POCD needs further investigation. The possibility of BMDM infiltration into the CNS after surgery through other microglia-independent pathways also needs exploration.
Mast Cells.
Mast cells (MCs) are myeloid cells originating from CD34 + /CD117 + pluripotent progenitor cells [62]. MCs contain many cytoplasmic granules, which store a number of preformed mediators, including histamine, heparin, serotonin, chymase, tryptase, prostaglandins, and leukotrienes. MCs are best known for their roles in allergic disease and host defense. Crosslinking immunoglobulin E (IgE) receptors of MCs triggers the release of many allergic and inflammatory mediators [63]. MCs are abundant within tissues exposed to the external environment, such as the skin, gut, and the respiratory tract. MCs are also present in the CNS, mainly located in the perivascular spaces and along the leptomeninges [64,65]. Upon activation, MCs can release the mediators and infiltrate into the brain parenchyma, participating in the pathophysiological processes of various neurological diseases.
It is well established that MCs contribute to general vascular permeability through the production of vasodilators, such as histamine and serotonin. Ample evidence also exists that the vasodilatory and proinflammatory mediators released by MCs contribute to the impairment of the BBB integrity (reviewed in [66]). For instance, histamine can open the tight junctions between the endothelia in the BBB [67]. Proteinases secreted by MCs, including tryptase and gelatinase, can degrade protein constituents of the neurovascular matrix, thus damaging the BBB [67]. In recent decades, studies have demonstrated that MCs play critical roles in the disruption of the BBB and associated neurological diseases. Acute stress increased the permeability of BBB through the activation of MCs [68]. Furthermore, compared with wildtype mice, MC-deficient mice showed decreased BBB permeability, reduced T cell infiltration, and, consequently, less severe EAE [69]. In addition, in a mouse model of brain ischemia, animals that were deficient in MCs or treated with the MC stabilizer Cromolyn exhibited improved BBB integrity and reduced brain edema [70].
Studies have suggested that MCs are the predominant cells that initiate glial activation. In a model of perinatal hypoxia-ischemia, MCs were found to be the "first responders," with their activation preceding that of microglia [71]. In addition, the clinical conditions of depression and mild neurocognitive disorders are closely related to the malfunction of the MC-glia crosstalk [72]. Microglia express a large variety of proteins/receptors that can be activated by MC-secreted mediators. For instance, tryptase can trigger microglia activation through the proteinase-activated receptor 2 (PAR2) [73]. Furthermore, microglia express all four histamine receptors (HRs) and can be activated by MCs via HRs [74,75]. Astrocytes also express PAR2 and HRs and can be activated by MCs [76,77]. The interactions between MCs and glial cells are not restricted to the receptors mentioned above (reviewed in [78]), and accumulating evidence indicates that MCs and glial cells work in concert to promote neuroinflammation [78]. While numerous studies in rodents have explored the role of MCs in neurological diseases, relatively few have focused on MC function in POCD. Surgery was found to induce MC degranulation in mice [79]. Rats treated with the MC stabilizer Cromolyn showed less severe cognitive deficits after surgery, accompanied by increased BBB stability [16] and reduced microglia and astrocyte activation [79,80]. Therefore, via disrupting BBB and activating microglia, MCs promote neuroinflammation in POCD. In the studies of MCs in POCD, Cromolyn was administered intracerebroventricularly [16,79,80]; the therapeutic efficacy of Cromolyn administered via other routes remains to be established. Masitinib, an oral selective tyrosine-kinase inhibitor, can effectively inhibit the survival, migration, and activity of MCs. In a clinical trial, masitinib slowed the cognitive decline in patients with AD [81]. 
The effectiveness of masitinib in the treatment and prevention of POCD also needs further investigation.
T Cells.
The thymus-derived T cells constitute key players in antigen-specific immune responses. T cells are divided into three main functional subsets: CD8 cells, also known as cytotoxic T cells; helper CD4 cells (Th cells); and regulatory CD4 cells (Treg cells). In healthy noninflamed CSF, 90% of the total cells are T cells, predominantly CD4 cells [82]. In a pathological state, T cells can penetrate into the brain parenchyma. Multiple studies have shown the importance of T cells in autoimmune and virus infectious neurological diseases, such as multiple sclerosis and herpes simplex virus encephalitis [83]. Recently, the roles of T cells in neurodegenerative diseases have also received much attention. The activation of Th cells enhances the loss of dopaminergic neurons in a mouse model of PD [84], while Treg cells provide neuroprotection through the attenuation of microglial activation in this disorder [85].
There is no direct evidence of T cells participating in the pathological process of POCD. One study demonstrated that surgery-induced cognitive impairment in mice was accompanied by upregulation of IL-17 and downregulation of IL-10 expression, mainly in Th17 (a subset of Th cells) and Treg cells, respectively [17]. This study proposed the possibility that a T cell-subtype imbalance may contribute to POCD. More evidence is needed to uncover the role of T cells in POCD.
Conclusion
While a plethora of studies have suggested that immune cells trigger neuroinflammation in response to surgery leading to POCD, the neurobiological basis of POCD remains unknown (Figure 1). As the major resident immune cells in the CNS, microglia are activated by proteins released from the injury sites and circulating cytokines upregulated by surgery. The activation of microglia results in neuronal damage via the release of proinflammatory cytokines. Circulating BMDMs are recruited into the brain in response to surgery, a process that may be initiated by microglia-secreted MCP-1. The degranulation of MCs contributes to BBB disruption and the activation of microglia, further aggravating POCD. T cells may also be involved in POCD.
These immune cells interact with one another in the pathogenesis of POCD. Different elements of the resulting network of neuroinflammation may serve as targets in the prevention and treatment of POCD. First, cytokines leaking from the injury site are the primary trigger of the immune response in the CNS. Thus, approaches that inhibit cytokine release may prevent POCD. Second, microglia occupy the central position of the inflammatory network; hence, drugs that stabilize microglia or promote their transition to the M2 state may have beneficial effects. Third, other circulating immune cells penetrating into the brain parenchyma and secreting inflammatory cytokines exacerbate neuroinflammation. Therefore, therapies that reduce cytokine secretion by these immune cells may also be effective for treating POCD. Studies in rodents using blocking antibodies and other agents interfering with the neuroinflammation network have provided proof of concept for these strategies as POCD treatments [10,11,15,17,56,60,79,80]. However, their feasibility in humans still needs to be validated. Further research on the mechanisms of immune cell involvement in POCD is urgently required to identify other potential targets for POCD treatment and prophylaxis.
Coordinated Mitigation Control for Wideband Harmonic of the Photovoltaic Grid-Connected Inverter
Under the current trend of power electronics in energy systems, a high percentage of renewable energy delivers clean power to the grid through grid-connected inverters. The pulse-width modulation (PWM) technique introduces high-order harmonics near the switching frequency, so LCL filters with low-pass characteristics have become the common choice for grid-connected inverters. However, the low-order harmonics caused by nonideal switching characteristics are difficult to filter out, and the new resonance point introduced by the LCL filter poses a safety problem for energy systems. First, the generation mechanism of the 6 k ± 1 order harmonics and the high-frequency resonance of a PV grid-connected inverter is analyzed. Then, a virtual resistor is constructed by the active damping method to absorb the resonant component. Meanwhile, this paper also presents an adaptive modulation voltage compensation method to decrease the low-order harmonics. Finally, measured data from user photovoltaic (PV) systems and multiple comparative simulations verify these theories. Simulation results show that the proposed coordinated control algorithm reduces the peak of the resonance point, and the rate of low-order harmonic mitigation exceeds 50%. The proposed method is suitable for various operating conditions.
Introduction
With the depletion of traditional energy sources and the rapid development of power electronics technology, distributed renewable energy sources are being widely connected to the distribution system [1]. Besides improving the energy supply structure, the high proportion of power electronics also brings serious harmonic pollution to the distribution system, seriously harming its safety and stability [2]. Renewable energy sources, including photovoltaics and wind turbines, connect to the grid through grid-connected inverters. On the one hand, the switching frequency of power electronics varies from several hundred Hz to several tens of kHz, so significant high-order harmonics are carried in the supply current generated by such power sources [3,4]. On the other hand, power electronic devices show a decentralization trend in the distribution network, and low-order harmonic components and the risk of parallel resonance exist simultaneously [5,6]. The continued growth of harmonic content causes problems such as resonance, tripping of distributed generation, energy supply shortages, and increased equipment losses in the distribution system [7][8][9]. These typical power quality problems have attracted widespread attention from scholars all over the world [10][11][12].
The short-circuit ratio (SCR) of power systems is reduced to a certain extent by the increasing number of distributed PVs, and the system stability problems caused by harmonics under weak-grid conditions can no longer be ignored [13][14][15]. LCL filters, with their small size and high-frequency attenuation capability, are widely used to mitigate the poor power quality of grid-connected inverters effectively. As a high-order filter, however, the LCL filter has inherent resonance points [16]. Research on resonance suppression of grid-connected inverters falls into the following two categories.
The first type is the passive suppression method. Adding impedance at the resonance frequency can prevent the resonance of the same frequency from occurring again. This method is easy to implement and can suppress resonance peaks well by incorporating passive components in the filter without any modification of the control strategy, so it is widely used in engineering practice [17]. However, passive damping changes the system's overall amplitude and frequency characteristics and generates additional resonance points.
The second type is the active suppression method, which establishes an accurate inverter model to locate the resonance point and reduces the resonance amplitude by increasing the damping at the corresponding frequency. The literature [18,19] develops the Norton model of LCL-type grid-connected inverters and analyzes the influence of the system and control structures on the resonance characteristics. The literature [20] adds specific active damping based on an accurate calculation of the resonance points to effectively reduce the peak resonance of grid-connected inverters. Compared with the passive approach, the active approach is more economical and has proven robust in engineering. A similar method, which suppresses the high-order harmonics without any hardware changes, is also used in this paper.
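As a minimal sketch of this capacitor-current-feedback style of active damping (plain Python; the gain `r_v` and the sample values are hypothetical illustrations, not parameters from the cited works), subtracting a scaled capacitor current from the modulation reference behaves like a virtual resistor that damps the filter resonance:

```python
def damped_modulation(u_ref, i_cap, r_v):
    """Capacitor-current feedback: subtracting r_v * i_cap from the modulation
    reference acts like a virtual resistor that damps the LCL resonance."""
    return u_ref - r_v * i_cap

if __name__ == "__main__":
    r_v = 4.0  # hypothetical virtual resistance, ohms
    # (modulation reference [V], sampled capacitor current [A]) pairs
    samples = [(310.0, 0.0), (305.0, 1.5), (300.0, -2.0)]
    for u_ref, i_cap in samples:
        u_m = damped_modulation(u_ref, i_cap, r_v)
        print(f"u_ref={u_ref:.1f} V, i_C={i_cap:+.1f} A -> u_m={u_m:.1f} V")
```

In a real controller this correction is applied every switching period before the SPWM stage, and `r_v` trades resonance damping against dynamic response.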
High-frequency resonance mitigation enhances the control system's stability [21]. The system's overall frequency domain characteristics can be further improved by suppressing the low-frequency harmonics caused by the dead zone. The suppression methods for the 6 k ± 1 order harmonics can be divided into three categories. The first type reconstructs the modulating voltage reference to change the IGBT conduction time, thus reducing the harmonic current caused by the dead zone delay; this is also one of the most widely used harmonic suppression methods. In [22], the ripple caused by the 6 k ± 1 order phase current harmonics is analyzed. In [23], the output voltage error and the dynamics of the control system are investigated separately. The literature [24] proposed a square-wave compensation method for steady-state application, with which low-order harmonics can be reduced under most operating conditions. However, the error voltage variation under low-current operating conditions is not considered, which may cause overcompensation. In [25], the trapezoidal method is introduced to improve the adaptability of the compensation voltage model based on phase currents. The amplitude of the compensation voltage becomes smaller and more effective near zero current due to the slope change of the trapezoidal waveform. The accuracy of the compensated voltage model determines the validity of such methods; otherwise, the power quality will deteriorate due to overcompensation.
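To illustrate the square-wave versus trapezoidal schemes discussed above, the sketch below (plain Python; `v_comp` and the current threshold `i_th` are hypothetical parameters, not values from the cited works) computes a per-phase compensation voltage from the phase current. The square wave flips sign at zero current, while the trapezoid ramps through the near-zero region, which is what reduces overcompensation at low current:

```python
def square_comp(i, v_comp):
    """Square-wave dead-time compensation: full +/-v_comp by current polarity."""
    if i > 0:
        return v_comp
    if i < 0:
        return -v_comp
    return 0.0

def trapezoid_comp(i, v_comp, i_th):
    """Trapezoidal compensation: linear ramp for |i| < i_th, clamped elsewhere."""
    scale = max(-1.0, min(1.0, i / i_th))
    return v_comp * scale

if __name__ == "__main__":
    v_comp, i_th = 2.5, 0.5  # hypothetical: volts, amps
    for i in (-2.0, -0.2, 0.1, 1.0):
        print(f"i = {i:+.1f} A -> square {square_comp(i, v_comp):+.2f} V, "
              f"trapezoid {trapezoid_comp(i, v_comp, i_th):+.2f} V")
```

Away from zero the two schemes coincide; the difference only appears inside the +/- i_th band where the dead-zone error voltage itself shrinks.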
Many scholars have adopted the second category, current injection, to obtain the nonideal voltage error under various operating conditions. This method achieves better suppression of current harmonics by improving the compensation voltage accuracy. In [26], a look-up table (LUT) method is proposed to enhance compensation accuracy, but this method has specific requirements for controller storage space. In [27], a fitting-function scheme is introduced to construct compensating voltage commands, but its universality is hard to guarantee.
In addition, notch-filter algorithms, observers, and controller parameter optimization can also suppress the corresponding harmonic components [28][29][30][31][32]. In [33], a state space matrix explains the participation degree of the control loops, and a controller design method is proposed considering the current harmonic distortion rate. Virtual impedance is another effective filtering method; the virtual impedance constructed in [34] effectively reduces the fifth and seventh harmonics, but such methods have high requirements for frequency component extraction.
In summary, scholars have produced extensive achievements in resonance and harmonic analysis and suppression, separately. However, the resonance point is still difficult to capture, and the dead zone and other nonideal switching states are complex and variable. Suppressing high-order or low-order harmonics alone therefore cannot cover the wideband characteristics, and research combining the two aspects remains a gap for large-scale engineering application. Therefore, this paper proposes a resonance and dead zone compensation cooperative control strategy for LCL inverters, with the following contributions: (1) First, this paper derives the control transfer function of the closed-loop system and analyzes the key reason for the resonance of LCL inverters by plotting a Bode diagram. In addition, the paper explains the mechanism by which the dead zone generates low-order harmonic current by analyzing the switching state of the modulation voltage. (2) Then, a high-frequency resonance mitigation strategy is proposed to improve the power quality of the PVs. Feeding the extracted capacitor current into the modulated voltage output can effectively solve the PV resonance problem. (3) Finally, considering the influence of parasitic capacitor charging and discharging on the dead zone voltage error under different operating conditions, an adaptive compensation strategy is proposed to suit multiple working conditions based on the measured data.
The layout of this paper is organized as follows. Section 2 introduces the topology and control structure of the PV grid-connected inverter, including the analysis of resonance and harmonic generation mechanisms. Section 3 shows the active damping control strategy. Section 4 presents the measured PV data and proposes an adaptive dead zone voltage compensation method. Section 5 contains a study of comparative simulation experiments. Section 6 is the summary of the conclusions.
Topology and Control Structure
Figure 1 shows the typical topology of the PV grid-connected inverter. The DC side comprises photovoltaic panels, boost circuits, and the DC bus capacitance. Maximum power point tracking (MPPT) technology ensures that the renewable sources export peak power. The grid-connected inverter usually uses PQ or DC voltage control, turning the DC energy into clean AC signals. The inverter's output port is linked to the point of common coupling (PCC) via the line impedance. Since the switching process of the IGBTs causes high-frequency harmonic components, an LCL filter is needed to attenuate the harmonics due to the switching characteristics. The power network is represented by a three-phase infinite source whose RMS value is 220 V. Figure 1. PV grid-connected system topology.
Generally, the inverter controller can be designed by deriving the transfer function of the closed-loop control system, so that precise control of the PV output current can be achieved. Figure 2 shows the equivalent closed-loop control transfer function. The outer voltage control loop uses a proportional-integral (PI) controller to eliminate the static error for accurate tracking of the target bus voltage, and its output serves as the reference signal for the inner current controller. A PI controller is also used in the current loop, generating the IGBT switching signals through sinusoidal pulse width modulation (SPWM).
where Kpv and Kiv are the ratio and integral coefficient of the voltage loop controller, and Kpc and Kic are the ratio and integral coefficient of the current loop controller.
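As a rough illustration of the dual-loop structure, a discrete PI of the form G(s) = Kp + Ki/s can be sketched as below. The class name, gains, and backward-Euler discretization are our own choices for illustration, not taken from the paper:

```python
class PI:
    """Discrete PI controller, G(s) = Kp + Ki/s, backward-Euler integration."""
    def __init__(self, kp, ki, ts, limit=None):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.limit = limit          # optional output saturation (e.g., modulation limit)
        self.integral = 0.0

    def step(self, error):
        self.integral += self.ki * error * self.ts
        out = self.kp * error + self.integral
        if self.limit is not None:
            out = max(-self.limit, min(self.limit, out))
        return out

# Outer voltage loop output serves as the current reference,
# which the inner current loop turns into a modulation voltage.
v_loop = PI(kp=0.5, ki=20.0, ts=1e-4)
c_loop = PI(kp=10.0, ki=500.0, ts=1e-4)
i_ref = v_loop.step(700.0 - 695.0)   # DC bus voltage error -> current reference
u_mod = c_loop.step(i_ref - 2.0)     # current error -> modulation voltage
```

In a real inverter the current-loop output would then be fed to the SPWM modulator; here it simply demonstrates the cascade of the two PI stages.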
Analysis of Resonance Generation Mechanism of LCL Filter
The LCL filter can effectively eliminate high-order harmonic components in the inverter output current, but it is not an ideal low-pass filter. The current loop usually has a very high control bandwidth, so the current can be considered to follow its reference ideally when analyzing the frequency-domain characteristics of the LCL filter. Meanwhile, assuming that the voltage outer-loop parameters are reasonable and the DC-side voltage is therefore stable, Figure 3 describes the equivalent transfer function between the inverter's output current and voltage, where C denotes the filter capacitor, L1 and L2 represent the filter inductors on both sides of the capacitor, u is the output voltage, and ig is the output current supplied to the grid by the inverter. The LCL filter can be formulated as

ig(s)/u(s) = 1/(L1 L2 C s^3 + (L1 + L2) s) (2)

where L1 and L2 are the filter inductances, and C is the filter capacitance. By setting L1 = 2.0 mH, L2 = 0.5 mH, and C = 100 µF, the frequency-domain characteristics of the LCL filter are shown in Figure 4. It can be seen that a resonant point exists near 795 Hz, at which, even if the harmonic current amplitude at that frequency is small, the control stability of the current loop becomes very poor because of the high gain of the LCL filter.
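The undamped resonance frequency follows directly from the denominator of Equation (2): f_res = (1/2π)·sqrt((L1 + L2)/(L1·L2·C)). A quick numerical check with the parameters given above:

```python
import math

def lcl_resonance_hz(l1, l2, c):
    """Resonance frequency of an undamped LCL filter (from Equation (2))."""
    return math.sqrt((l1 + l2) / (l1 * l2 * c)) / (2 * math.pi)

# Parameters given in the text: L1 = 2.0 mH, L2 = 0.5 mH, C = 100 uF
print(round(lcl_resonance_hz(2.0e-3, 0.5e-3, 100e-6)))  # 796
```

This ≈796 Hz agrees with the 795 Hz resonance reported for the first parameter group in the simulation section; the 1160 Hz point reported there corresponds to the other, smaller-capacitance group in Table 1.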
Compared with Equation (2) and the transfer function of an ideal low-pass filter, it is not hard to see that the third-order pole introduced by the filter capacitor C is responsible for the LCL filter's nonideal frequency-domain characteristics. Therefore, reducing the effect of the high-order pole, or constructing other closed-loop poles by changing the transfer function, is an effective way to suppress the resonance of the LCL filter. There are several locations in the dual-loop control where additional control branches can be added. A method that changes the location of the zeros and poles of the control system's characteristic equation is presented in the next section.
Analysis of Dead Zone Influence

Due to the influence of conductivity modulation and current tailing, a dead-zone delay is usually added to the IGBT signals to avoid short circuits. As a result, the inverter's output voltage contains low-order odd harmonic components, which increase the distortion of the grid-connected current and directly affect the power quality. Figure 5 shows a state schematic of the A-phase IGBTs turning on and off in one control cycle.

From Figure 5a,b, neglecting the IGBT conduction voltage drop and line losses, when Ia > 0 and Q1 is on, the A-phase output current passes through Q1 and the midpoint voltage is VA = Udc. When Q1 turns off but Q2 has not yet turned on, the output current passes through the antiparallel diode and the midpoint voltage is VA = 0. Since the bridge-arm output current does not pass through Q2, the dead zone of Q2 does not affect the midpoint voltage. Therefore, the error voltage of the inverter is positive during the half-cycle in which the upper bridge arm conducts.
Similarly, in Figure 5c,d, if Ia < 0, the dead zone of Q1 likewise does not affect the grid-connected voltage. When Ia < 0 and Q2 is on, the midpoint voltage is VA = −Udc. When Q2 turns off but Q1 has not yet turned on, VA = 0. The error voltage of the inverter is negative during the half-cycle in which the lower bridge arm conducts.
During an entire control cycle, each IGBT turns on and off once. The inverter modulation voltage therefore contains an error pulse whose amplitude is Udc and whose length is Td. The direction of the PV output current determines the sign of the error voltage. Thus, the error voltage in any phase caused by the dead zone can be modeled as a square wave; its shape over a whole control cycle is shown in Figure 6, where Ts is the control and sampling period, Ton is the ideal switch-on time during the current PWM control cycle, Td is the dead time, and Udc is the DC bus voltage of the PV inverter.
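Under the square-wave model of Figure 6, the per-cycle average of the dead-zone error voltage depends only on the current direction and the duty ratio Td/Ts. The averaging below is standard dead-time analysis, not an equation quoted from the paper:

```python
def dead_zone_error_avg(i_phase, u_dc, t_d, t_s):
    """Cycle-averaged dead-zone error voltage for one phase.

    The error is a square pulse of amplitude +Udc (upper-arm half-cycle,
    i > 0) or -Udc (lower-arm half-cycle, i < 0) lasting Td out of Ts.
    """
    if i_phase == 0:
        return 0.0
    sign = 1.0 if i_phase > 0 else -1.0
    return sign * u_dc * t_d / t_s

# Example: Udc = 700 V, Td = 3 us, Ts = 100 us -> ~21 V average error
print(round(dead_zone_error_avg(10.0, 700.0, 3e-6, 100e-6), 6))
```

Even a few volts of averaged error on a 220 V fundamental is enough to produce the visible low-order distortion discussed above, which motivates the compensation method of Section 4.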
Resonance Suppression Method of LCL Filter

This section uses the capacitor current to construct a feedforward control branch. This measure redefines the equivalent transfer function of the LCL filter: the feedforward loop adds active damping at the high-order resonant frequency, reducing the amplitude gain at the resonance point without degrading the control system's dynamic characteristics or stability.

Active Damping Feedforward Control Method

Due to the LCL filter, a conventional inner current loop alone is not adequate to maintain the stability of the control system. An additional loop with capacitor-current feedforward is therefore used to improve the ability to suppress resonance. Because it is difficult to quantify the influence on the subsequent control loop after integration through the PI controller, the feedforward signal is added directly to the modulation voltage output. Figure 7 shows the active damping feedforward control method, where Kc is the capacitor-current feedforward gain.

Here GPWM(s) is the product of the PWM gain and the control and computation delay; its transfer function is given in Equation (3), and the pure delay link can be approximated with the third-order Pade formula.
where Kpwm = 1 represents the PWM modulation gain. The equivalent transfer function of the LCL filter after capacitor-current feedforward is shown in Figure 8. The transfer function of the LCL filter can then be rewritten as Equation (4); compared with Equation (2), the loop constructed by capacitor-current feedforward adds a second-order pole to the closed-loop characteristic equation, which changes the distribution of the LCL filter's closed-loop poles.

Assuming that the newly constructed pole is equivalent to a resistance in parallel with the filter capacitor, the equivalent topology is shown in Figure 9.

Figure 9. Active damping equivalent model.
The equation for the grid-connected current and the inverter's output voltage can be constructed with Kirchhoff's voltage law as shown in Equation (5).
Combining Equations (4) and (5), the active damping resistance can be expressed in terms of Kc and GPWM(s), as shown in Equation (6).
The harmonics at the resonant frequency pass through the active shunt resistance, so the grid-side current quality can be improved significantly. The response at other frequencies is not affected, because the virtual resistance only acts on the frequency segment near the resonance point; this conclusion is verified through the Bode diagram in the next section. Meanwhile, the proposed active resistance avoids the losses of a physical damping resistance, contributing to the economical operation of the power distribution network.
Analysis of Frequency Domain Characteristics of Active Damping Method
Two groups of filter inductance and capacitance parameters, listed in Table 1, are used to analyze the frequency-domain characteristics of the active damping method. The Bode diagram of the transfer function between output current and voltage is obtained by substituting these parameters into Equations (2) and (4). Figure 10 shows the LCL filter's frequency characteristics from 10 Hz to 10 kHz.
From these diagrams, if the inductance parameters are unchanged, the filter capacitance determines the resonance point of the PV inverter: a larger capacitance gives a lower resonance frequency, while a smaller capacitance gives a higher one. The two parameter groups show that the active resistance adaptively reduces the amplitude gain at the resonant frequency, no matter how the resonance point changes. As Kc increases, the amplitude gain at the resonant point decreases. When Kc is very large, the side effect is a small attenuation of the LCL amplitude gain elsewhere, but this attenuation is negligible. Since the method has little impact on the response at other frequencies, the LCL filter with active resistance can be regarded as an ideal low-pass filter.
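These trends can be checked numerically. Assuming GPWM ≈ 1 and that the feedforward contributes a Kc·L2·C·s² term to the denominator, which is the usual form for capacitor-current active damping (Equation (4) itself is not reproduced above, so this denominator is our assumption), the gain at and away from the resonance point behaves as follows:

```python
import math

L1, L2, C = 2.0e-3, 0.5e-3, 100e-6  # group-1 filter parameters from the text

def gain(omega, kc):
    """|ig/u| of the LCL filter with capacitor-current feedforward gain kc.

    Assumed denominator: L1*L2*C*s^3 + kc*L2*C*s^2 + (L1+L2)*s,
    with GPWM(s) ~ 1 (delay neglected for this sketch).
    """
    s = 1j * omega
    den = L1 * L2 * C * s**3 + kc * L2 * C * s**2 + (L1 + L2) * s
    return abs(1.0 / den)

w_res = math.sqrt((L1 + L2) / (L1 * L2 * C))  # resonance, ~2*pi*796 rad/s
for kc in (1.0, 5.0, 20.0):
    print(kc, gain(w_res, kc))        # gain at resonance falls as kc rises
print(gain(2 * math.pi * 50, 20.0))   # 50 Hz gain is barely affected
```

With this form, the gain at resonance is exactly 1/(Kc·L2·C·ω_res²), so the virtual resistance caps the previously unbounded resonant peak while leaving the fundamental-frequency response essentially unchanged.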
Dead Zone Compensation Method of PV Inverter
A voltage-adaptive compensation method for low-order harmonics is introduced in this section. A segmented error-modeling method is used to quantify the compensation values for different current operating conditions. By sampling the IGBT switching state and distributing the error voltage evenly over the modulation signal, the nonideal waveform caused by the dead zone can be greatly reduced, so the harmonic current of the PV inverter is effectively suppressed. No hardware changes or modifications of the switching states are required.
Modeling of the Compensation Voltage
The output current amplitude of the photovoltaic inverter is affected by irradiance, temperature, and other factors, so the required modulation voltage varies with the current amplitude, and the harmonic suppression technique must adapt to environmental changes. This section analyzes the harmonic emission level and the error-voltage variation under different working conditions, divides the segmentation intervals using measured data, and proposes a segmented adaptive harmonic suppression method. Figure 11 shows the measured grid voltage and A-phase current waveforms of a user's PV grid-connected inverter in Taian, Shandong Province, on 10 August 2022. The data record the distribution range of the photovoltaic current over a whole day. Data acquisition was performed with an NI data acquisition board, recording 25,600 data points every second.

Figure 11. Photovoltaic data curve in one day.

As the figure shows, the photovoltaic system works from 5:30 a.m. to 6:00 p.m., and its grid-connected current mainly fluctuates between 13 A and 41 A. Within this current range, the current-tailing effect lasts only a very short time, and the parasitic capacitor charges or discharges very quickly, so the voltage error can be approximated as a square wave with a width of Td. An important objective of this paper is to make the dead-zone error-voltage model adapt to more operating conditions, especially small-current conditions in bad weather. Figure 12 shows the influence of current tailing on the error voltage under different current conditions.

From these figures, the actual nonideal voltage is no longer a fixed error once the effect of current tailing is considered; the actual dead zone under Iac is exactly fifty percent of the set value. The error voltage is positively related to the grid-connected current of the PV inverter: a larger current amplitude makes the error voltage closer to the ideal square wave, while a smaller current amplitude makes it closer to zero.
Therefore, assuming a linear relationship between the discharge speed of the parasitic capacitance and the magnitude of the grid-connected current, the error voltage can be modeled as in Equation (7), where Udead is the actual error voltage caused by the dead zone, K is the slope coefficient, and Te is the end time of the current tailing.
Dead Zone Compensation Method
In the fluctuation range of grid-connected current from Figure 11, the duration of the current tailing effect is almost zero, so the compensated voltage can be approximately calculated by Equation (8).
Because the nonideal switching state is unavoidable in practical applications, a narrow-pulse compensation strategy is adopted to achieve accurate compensation. The voltage to be compensated is distributed evenly into the IGBT modulation signal by the area-equivalence method, so the actual modulation voltage carries a narrow-pulse component equal to the error voltage. Figure 13 shows the schematic of the modulation voltage compensation.
Figure 13. Schematic of modulation voltage compensation.
Since the dead zone cannot simply be removed, the modulation voltage is slightly over-modulated before the PWM signal is generated, which offsets the low-order harmonics caused by the error voltage to a certain extent and reduces the proportion of nonideal effects in the actual inverter output voltage. Equation (9) gives the superimposed narrow-pulse signal in the proposed scheme. Because the PWM signal is generated after the modulation stage, the over-modulated part of the voltage extends the turn-on time Ton of the IGBT, and this extra turn-on time offsets the nonideal delay. The effect of the proposed adaptive low-order harmonic mitigation strategy on the PWM signal is shown in Figure 14.
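The over-modulation step amounts to adding the signed error voltage back onto the modulation reference each control cycle. Since Equations (7)–(10) are not reproduced above, the small-current branch below (threshold i_th and linear slope k) is only indicative of the piecewise idea, not the paper's exact expressions:

```python
def compensated_modulation(u_ref, i_phase, u_dc, t_d, t_s, i_th=5.0, k=0.5):
    """Add a narrow-pulse compensation term to the modulation reference.

    Above the current threshold i_th the square-wave error model applies
    (full Td*Udc/Ts compensation); below it, current tailing shrinks the
    effective error, modeled here with an indicative linear slope k.
    """
    if i_phase == 0:
        return u_ref
    sign = 1.0 if i_phase > 0 else -1.0
    u_err = u_dc * t_d / t_s
    if abs(i_phase) < i_th:                   # small-current branch (tailing)
        u_err *= min(1.0, k * abs(i_phase))   # indicative linear shrink
    return u_ref + sign * u_err

print(round(compensated_modulation(100.0, 20.0, 700.0, 3e-6, 100e-6), 6))  # 121.0
print(round(compensated_modulation(100.0, 1.0, 700.0, 3e-6, 100e-6), 6))   # 110.5
```

The key point mirrored here is that the compensation changes only the modulation reference before PWM generation, never the dead time or the gate-drive hardware itself.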
In Figure 14, Ucom represents the extra turn-on time caused by the proposed narrow-pulse signal. When Ia > 0 and Q1 remains on, the narrow pulse compensates for the error voltage generated by the upper bridge arm; after Q1 turns off, it compensates for the error voltage caused by the lower bridge arm. The case Ia < 0 is complementary.

However, the PV output current is low in cloudy weather, at low temperature, or under other severe conditions. In these cases, the quality of the power generated by the PV system deteriorates and the current-tailing effect can no longer be ignored. The compensation for small-current working conditions is consistent with the method above, except that the modulation-voltage error can no longer be expressed by Equation (8) and must instead be expressed by Equation (7); accordingly, the narrow-pulse modulation voltage is given by Equation (10).
Since the slope coefficient K in Equation (7) is unknown, experiments with different K values must be performed to obtain a proper initial value, and the accuracy of the slope coefficient is verified against the THD results of these experiments. The harmonic currents caused by the dead-zone voltage are mainly of order 6k ± 1 (k = 1, 2, 3, . . .); the high-order components can be ignored because of the LCL filter, so only the fifth and seventh harmonic currents under ideal supply conditions need to be analyzed. The flow diagram of the parameter correction is shown in Figure 15.
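The correction loop of Figure 15 can be sketched as a simple scan: run the inverter with a candidate K, measure the resulting 5th/7th harmonic THD, and keep the best value. Here measure_thd is a hypothetical callback standing in for the actual experiment:

```python
def search_slope_k(measure_thd, k_init=0.1, k_step=0.1, n_steps=20):
    """Scan candidate slope coefficients K and return the one that
    minimizes the measured 5th+7th harmonic THD (Figure 15 flow).

    measure_thd(k) is a hypothetical stand-in for running the inverter
    with slope K and measuring the resulting THD.
    """
    best_k, best_thd = k_init, measure_thd(k_init)
    for i in range(1, n_steps):
        k = k_init + i * k_step
        thd = measure_thd(k)
        if thd < best_thd:
            best_k, best_thd = k, thd
    return best_k

# Toy stand-in whose THD is minimal near K = 0.8
best = search_slope_k(lambda k: (k - 0.8) ** 2 + 0.01)
print(round(best, 1))  # 0.8
```

Once a good K is stored for a given IGBT type, the scan never needs to run again during normal operation, which is why no extra observers or filters are required at run time.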
The value of K depends mainly on the type of IGBT, so each PV inverter needs the search process for K shown in the flowchart above, from which the relationship between K and Ia can be established. Once the value of K is determined, no extra hardware or software, such as observers or filters, is needed. Thanks to the accurate initial value, the proposed narrow-pulse compensation method satisfies the low-order harmonic suppression requirements under different current output conditions. In addition, where the irradiance is stable or the PV units are concentrated, the compensation of Equation (9) can be used on its own to simplify the implementation.

Simulation Model and Parameters

A simulation model has been built to verify the resonance suppression and dead-zone compensation methods. The main topology of the simulation is shown in Figure 1, including a PV grid-connected inverter operating at the maximum power point (MPP), the LCL filter, the line impedance, and a three-phase ideal supply. Figure 2 shows the inverter control system. The PV panel and line impedance parameters are listed in Table 2, and Table 3 gives the control parameters.
Verification of Resonance Point of LCL Filter

In this section, the two groups of LCL filter parameters in Table 1 are used to verify the time-domain response of the dual closed-loop control of the PV inverter. Under ideal supply conditions, the PV grid-connected current without capacitor-current feedforward control contains a large number of resonant-frequency components; Figure 16 shows the THD analysis of the grid-connected current.
In this condition, the resonance point is 795 Hz, and the distortion current at the resonance frequency exceeds 250% of the current at the fundamental frequency. Meanwhile, there is also a small cluster of distorted current near the resonance point. Although the amplitude of current at these frequencies is lower than the current at resonance points, they also affect the power quality of the energy system. The time domain waveform in a steady state is shown in Figure 17.
As shown in this figure, the control system has lost the ability to achieve stable current tracking, and the resonant current amplitude is close to 3000 A. Obviously, the resonance point of the LCL filter greatly threatens the operation safety of the PV grid-connected inverter. The resonance frequency with another group parameter is 1160 Hz, and the FFT result is shown in Figure 18. Except for the component at the resonant frequency increasing substantially, other frequency components within the low-frequency band are also amplified to some extent. FFT results demonstrate the accuracy of the LCL filter's resonance point, which is calculated by the closed-loop transfer function method.
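The THD figures quoted throughout this section can be reproduced from a sampled current waveform with a standard FFT-based computation. The following Python sketch illustrates the usual definition (RMS of the harmonics relative to the fundamental); the sampling parameters and synthetic test signal are illustrative, not the paper's data.

```python
import numpy as np

def thd_percent(signal: np.ndarray, fs: float, f0: float, n_harmonics: int = 40) -> float:
    """THD of a periodic signal: RMS of harmonics 2..n relative to the fundamental."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    bin_of = lambda f: int(round(f * n / fs))
    fund = spectrum[bin_of(f0)]
    harm = [spectrum[bin_of(k * f0)] for k in range(2, n_harmonics + 1)
            if bin_of(k * f0) < len(spectrum)]
    return 100.0 * np.sqrt(np.sum(np.square(harm))) / fund

# Synthetic check: 50 Hz fundamental plus a 5% fifth harmonic -> THD ~= 5%.
fs = 10_000.0
t = np.arange(10_000) / fs
i_test = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 250 * t)
print(round(thd_percent(i_test, fs, 50.0), 2))  # -> 5.0
```

Sampling an integer number of fundamental cycles, as above, avoids spectral leakage; real measurement data would additionally need windowing.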
These experiments show that, no matter where the LCL filter's resonant point is, the system loses stability as long as the grid current contains resonant frequency components. In the simulation experiment, the outer loop uses the constant voltage method, and the reference voltage is 750 V. The next section verifies the influence of the capacitive current feedforward method on resonance suppression.
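For reference, the resonance frequency of an LCL filter follows directly from its two inductances and filter capacitance, f_res = (1/2π)·sqrt((L1 + L2)/(L1·L2·Cf)). A minimal sketch, with illustrative parameter values rather than the paper's Table 1 entries:

```python
import math

def lcl_resonance_hz(l_inv: float, l_grid: float, c_f: float) -> float:
    """Resonance frequency of an LCL filter (inverter-side L, grid-side L, filter C)."""
    return math.sqrt((l_inv + l_grid) / (l_inv * l_grid * c_f)) / (2 * math.pi)

# Illustrative parameters (H, H, F); not the values used in the paper.
print(round(lcl_resonance_hz(2e-3, 0.5e-3, 20e-6), 1))  # ~1779 Hz
```

Recomputing f_res this way for each parameter group is a quick cross-check against the resonance points identified from the closed-loop transfer function and the FFT results.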
Verification of Resonance Suppression Method of LCL Filter
In Figure 10, active damping constructed by the capacitive current feedforward method effectively reduces the resonant point gain of the LCL filter. Setting Kc = 15 and using the second group parameters in Table 1, the time domain waveform of the three-phase grid-connected current of the PV is shown in Figure 19a. Under the condition that temperature and sun irradiance are set in Table 2, the maximum grid-connected current amplitude of the PV is about 45 A.

The experimental results show that the resonant frequency component is greatly reduced after the capacitive current feedforward. The active shunt resistance makes the character of the LCL filter present as the ideal low-pass filter. However, the waveform still has a certain degree of distortion, especially near the zone where the grid-connected current passes zero or reaches the peak. Figure 19b shows the FFT result of the PV's current. Obviously, the harmonic components at high frequencies are suppressed by the LCL filter, but the fifth and seventh harmonic currents caused by the dead zone still exist, and the harmonic amplitude decreases gradually with the increase of frequency. The fifth harmonic percentage is 2.436%, and the seventh harmonic percentage is 2.19%. Any other harmonic percentage is less than 1% of the fundamental amplitude, including the 11th and 13th components also caused by the dead zones. The large amplitude of the current causes a small proportion between the error voltage and the modulated voltage. Thus, the harmonic content still meets IEEE grid-connected standards.

Due to the positive correlation between the modulated voltage and the inverter output current, the modulation error voltage can be considered as approximately invariant. The time domain waveform of the PV's three-phase current is shown in Figure 20a, and Figure 20b shows the FFT result. The percentages of the fifth and seventh harmonic currents are 4.243% and 3.052%, respectively. The THD of the PV's grid-connected current is over 5%. What is worse is that the lower current corresponds to even higher harmonics. This breaks the photovoltaic grid connection standard, which severely restricts the PVs' connection to the distributed power system.

From Figures 19 and 20, the validity of active damping equivalent to capacitive current feedforward is verified for resonance suppression. The PV grid-connected inverters used in engineering mostly have LCL filters, so this method should be part of the general control structure of PV grid-connected inverters. In addition to resonance limiting the grid connection of new energy sources, the output current harmonic content also affects the supply power quality. Therefore, it is still necessary to verify the adaptive compensation strategy, comparing the current harmonic content before and after compensation. This part of the work is analyzed in the next section.
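The feedforward structure can be illustrated with a minimal discrete-time sketch of one current-loop step. The PI gains, sampling time, and variable names below are illustrative placeholders; the paper's actual loop additionally includes an outer constant-voltage loop and PWM, so this is a sketch of the damping idea, not the authors' implementation.

```python
# One step of a current loop with capacitor-current feedforward (active damping).
def current_loop_step(i_ref, i_grid, i_cap, state, kp=0.5, ki=100.0, kc=15.0, ts=1e-4):
    """Return (modulation voltage reference, updated PI integrator state)."""
    err = i_ref - i_grid
    state += ki * err * ts        # PI integrator update (forward Euler)
    v_pi = kp * err + state       # PI controller output
    v_mod = v_pi - kc * i_cap     # subtracting Kc*ic emulates a damping resistor
    return v_mod, state

v_mod, pi_state = current_loop_step(i_ref=10.0, i_grid=9.0, i_cap=0.2, state=0.0)
```

Subtracting a term proportional to the capacitor current from the modulation reference is what makes the LCL filter behave as if a physical damping resistor were in the capacitor branch, without the associated losses.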
Verification of Dead Zone Compensation Method
The harmonic content of the PV output current under the two different conditions mentioned in the previous section is compared in this section. The experiment uses a three-phase ideal power supply, and the experiments verify the validity and correctness of the low-order harmonic mitigation algorithm.
After the above compensation of the low-order harmonics caused by nonideal switching, the experimental waveform in Figure 21a shows that the inverter's output current is about 45 A. Figure 21b shows the FFT result. Compared with the data before compensation, the fifth harmonic current is reduced from 2.436% to 0.481%, and the seventh harmonic current decreases from 2.19% to 0.844%. In addition to the significant reduction of harmonic content, the sinusoidal degree of the current waveform is greatly increased after the compensation of the dead zone. There is a significant reduction in the distortion when the current is over zero, and the voltage avoids being compensated incorrectly thanks to the precise extraction of the current sequence. Figure 22 shows the modulated voltage waveforms before and after the compensation. The indirect control of the inverter achieves the desired output current by establishing an equation between the modulated and the grid side voltages. Experimental results show that the compensated voltage accurately catches the modulation voltage polarity, so the distortion at the over-zero point is significantly reduced after compensation.
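The compensation rests on the classical average-value model of the dead-time error voltage, ΔV = (t_d/T_s)·V_dc applied with the polarity of the phase current. The sketch below illustrates that textbook model only; the parameter values are placeholders, and the paper's Equation (9) and its current-polarity extraction are more elaborate than this.

```python
import math

def deadzone_error_voltage(i_phase: float, t_dead: float, t_sw: float, v_dc: float) -> float:
    """Average per-period voltage error caused by the dead zone (classical model).

    The error opposes the phase current, so the compensation adds the same
    magnitude with the current's polarity.
    """
    return math.copysign((t_dead / t_sw) * v_dc, i_phase)

def compensated_reference(v_ref: float, i_phase: float,
                          t_dead: float = 3e-6, t_sw: float = 1e-4, v_dc: float = 750.0) -> float:
    """Add the dead-zone compensation term to a modulation voltage reference."""
    return v_ref + deadzone_error_voltage(i_phase, t_dead, t_sw, v_dc)
```

With these illustrative numbers the correction is (3 µs / 100 µs) × 750 V = 22.5 V per phase, which is why detecting the current polarity correctly near the zero crossing matters so much.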
The paper also demonstrates the proposed compensation method's effectiveness under the condition that the current is about 20 A. Figure 23 shows the FFT results, and the total current distortion rate is reduced from 5.98% to 3.46%. Among these, the fifth harmonic decreased significantly from 4.243% to 0.71%, and the proportion of the decline is more than 83%. The seventh harmonic current is also reduced from 3.052% to 1.867%.
Except for the relatively high-content components, the harmonics of other frequencies are each reduced to a certain extent; the author mainly focuses on the 6k ± 1 order harmonics, under the premise that the harmonics of other frequencies are not significantly higher. The comparative experiments show that the dead zone compensation method suits multiple conditions.
Conclusions
Under the background of a power electronic distribution system, this paper conducts detailed research into wideband harmonic mitigation with PV inverters as the research object. The article first analyzes the high-frequency resonance caused by the LCL filter's own control structure and the harmonic current generation mechanism of the nonideal switching state. The proposed active damping control strategy for resonance suppression and the dead zone voltage compensation method for low-frequency harmonic suppression are the two innovations of this paper. The capacitor current is fed forward into the current loop output reference signal, and the constructed virtual damping absorbs the resonant harmonic component. The narrow pulse compensation voltage expands the effective on-time and thus reduces the nonideal voltage error due to the dead zone. Multiple Simulink comparative experiments verify the effectiveness and robustness of the collaborative resonance and harmonic suppression algorithm. This new software innovation does not change any hardware or add additional controllers and can effectively suppress the wideband harmonic currents of the PVs. It provides a new choice for power electronic energy systems with a higher percentage of new energy sources. The robustness of the proposed algorithm still needs to be verified in future grids with more severe background harmonics, and suppression techniques for new harmonic components excited by background harmonics also need to be investigated.

Funding: This work was supported by the science and technology project of "Research and application of power quality assessment and improvement technology for regional distribution network with large-scale distributed energy access" (Grant No. 52062622000V).
Institutional Review Board Statement:
The study did not require ethical approval.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Contractibility of homogeneous Kenmotsu manifolds
We prove that every homogeneous Kenmotsu manifold is a contractible space.
Introduction
Kenmotsu manifolds constitute a relevant class of almost contact metric manifolds, introduced in 1972 in [5]; the original motivation was Tanno's classification of almost contact metric manifolds having largest automorphism group [12]. In Tanno's list some warped products of the complex Euclidean space and the real line appear, being particular cases of Kenmotsu manifolds. Recall that an almost contact metric structure (ϕ, ξ, η, g) on an odd dimensional manifold M consists in a (1, 1)-tensor field ϕ, a vector field ξ, a one form η and a Riemannian metric g satisfying:

ϕ² = −Id + η ⊗ ξ,  η(ξ) = 1,  g(ϕX, ϕY) = g(X, Y) − η(X)η(Y),

where X, Y are arbitrary vector fields. For more information and background on this notion we refer the reader to Blair's monograph [2].
In terms of these structure tensors, the analytic condition defining the class in question is the following:

(∇_X ϕ)Y = g(ϕX, Y)ξ − η(Y)ϕX,  (1.1)

where ∇ is the Levi-Civita connection.
Kenmotsu elucidated the structure of these manifolds, proving that locally they are a warped product of a Kähler manifold with the real line, with warping function of the form f(t) = ce^t, and obtained some fundamental curvature properties. Accordingly, there is a wide literature focusing on results of local nature, concerning for instance generalizations of the Einstein condition (e.g. η-Einstein manifolds, almost η-Ricci solitons) and the differential geometry of relevant kinds of submanifolds (e.g. warped product submanifolds); see for instance [4,8,10,13]. The interested reader can find a discussion of recent results about Kenmotsu manifolds in [1, Section 2], including a rich bibliography on the subject.
On the other hand, to the author's knowledge, up to now no results of global nature have been discussed and no relevant information emerges from the literature about the topological structure of Kenmotsu manifolds (apart from the fact that such a manifold cannot be compact, already treated in [5]).
In particular, a global classification of the homogeneous models is missing, with the exception of the 3-dimensional case, which has been examined recently by Wang [14]. It turns out that the unique simply connected, three-dimensional homogeneous Kenmotsu manifold is the 3-dimensional hyperbolic space form H 3 . Of course, homogeneity means that the automorphism group Aut(M) of M acts transitively; this is the closed Lie subgroup of the isometry group of (M, g), consisting of those isometries which preserve the structure tensors ϕ and η.
It is an open question if this rigidity result can be extended in higher dimension. In this note we make a first step in approaching this problem, proving the following fact concerning the topological structure of homogeneous Kenmotsu manifolds:

Theorem 1.1 Every homogeneous Kenmotsu manifold is a contractible topological space.

Clearly, this implies that Wang's classification can be refined as follows:
Corollary 1.2 Up to equivalence, the 3-dimensional hyperbolic space form H 3 is the unique homogeneous 3-dimensional Kenmotsu manifold.
We also remark that our result might be useful if combined with the recent characterization of contractible homogeneous Kähler manifolds obtained by Loi and Mossa in [7, Theorem 1.2], in order to investigate further the classification of homogeneous Kenmotsu manifolds.
Proof of the result
We start by recalling the following relevant fact, which we state as a lemma:
Lemma 2.1 For each point p of a non-contractible homogeneous Riemannian manifold M there exists a periodic geodesic starting from p.
Even though this fact is well-known, we sketch here the proof for convenience. By a general result of Serre, given a point p of a complete, non-contractible Riemannian manifold M, there always exists a geodesic loop γ : [0, 1] → M such that γ(0) = γ(1) = p (see [11] or [3]); extend γ to a geodesic γ : R → M. Then, if M is homogeneous, γ must be periodic, being non-injective (cf. e.g. [9, p. 321]).

Now we proceed with the proof of our result. Let (M, ϕ, ξ, η, g) be a homogeneous Kenmotsu manifold. Recall that, as a consequence of (1.1), one has

∇_X ξ = X − η(X)ξ  (2.1)

for every vector field X. We first show that each integral curve γ : R → M of ξ is not periodic. Let p = γ(0) and fix a tangent vector v ∈ T_p M orthogonal to ξ_p, v ≠ 0. Then, by homogeneity, v can be extended to a Killing vector field V such that [V, ξ] = 0. Indeed, denoting by g the Lie algebra of the automorphism group Aut(M) of M, since the action of Aut(M) is transitive, the mapping Z ∈ g ↦ Z*_p ∈ T_p M is surjective. Here Z* denotes the fundamental vector field generated by Z by means of the action, cf. e.g. [6, Chapter I]. So it suffices to take V = Z*, where Z ∈ g is chosen so that Z*_p = v. Since by construction the flow of V preserves ξ, we have [V, ξ] = 0. We remark that, since γ is a geodesic, V remains orthogonal to ξ along γ, i.e. η(V_γ(t)) = 0 for all t. Consider the function F(t) := g(V_γ(t), V_γ(t)). Then, taking (2.1) into account, we have F′ = 2g(∇_ξ V, V) = 2g(∇_V ξ, V) = 2F. If γ were periodic, F would also be periodic, and this would force F = 0, leading to a contradiction since F(0) = g(v, v) ≠ 0.

Now, assume that M is non-contractible and consider a periodic geodesic γ : R → M, parametrized by arc length (this is possible in accordance with the above lemma). Set p := γ(0). Let G : R → R be defined by G(t) = η(γ′(t)).
Clearly, G is periodic. Again by (2.1), we have

G′(t) = g(∇_{γ′} ξ, γ′(t)) = g(γ′(t) − η(γ′(t))ξ, γ′(t)) = 1 − G(t)².

Since |G| ≤ 1, it follows that G′ ≥ 0, therefore G must be constant. In particular, G² = 1, that is, γ′ = ±ξ along γ. But this is impossible, since the integral curves of ξ and of −ξ starting from p are non-periodic geodesics, according to the above discussion. This contradiction concludes our proof.
Sorting through the impact of familiarity when processing vocal identity: Results from a voice sorting task
The present article reports on one experiment designed to examine the importance of familiarity when processing vocal identity. A voice sorting task was used with participants who were either personally familiar or unfamiliar with three speakers. The results suggested that familiarity supported both an ability to tell different instances of the same voice together, and to tell similar instances of different voices apart. In addition, the results suggested differences between the three speakers in terms of the extent to which they were confusable, underlining the importance of vocal characteristics and stimulus selection within behavioural tasks. The results are discussed with reference to existing debates regarding the nature of stored representations as familiarity develops, and the difficulty when processing voices over faces more generally.
2017). Finally, performance in a familiar voice recognition task shows no association with performance in an unfamiliar voice discrimination task (see Cook & Wilding, 1997; van Lancker & Kreiman, 1987, Supplementary Materials). While stimuli and task demands differ in these tasks, and suggest caution in overinterpreting these data, the lack of association also suggests that familiar and unfamiliar voice processing may differ because they rely on quite different mechanisms.
These differences have fuelled a number of studies which have highlighted a fundamental distinction in the processing of familiar and unfamiliar voices. This distinction echoes a similar discussion in the domain of faces, in which participants' capacity for familiar and unfamiliar face recognition has also been determined to be independent of one another (Megreya & Burton, 2006). Of more relevance, a distinction between familiar and unfamiliar voice processing is supported by evidence of neural separation: Familiar voice recognition depends on activation of anterior parts of the superior temporal sulcus (STS) and superior temporal gyrus (STG), whereas unfamiliar voice recognition depends on activation of more posterior parts of these regions (Belin & Zatorre, 2003; Bethmann, Scheich, & Brechmann, 2012; von Kriegstein, Eger, Kleinschmidt, & Giraud, 2003; von Kriegstein & Giraud, 2004; Warren, Scott, Price, & Griffiths, 2006).
All differences highlighted so far may be explained by the fact that familiar voices have a preexisting mental representation. Without this, a voice may only be processed on the basis of a piecemeal analysis of rather superficial vocal characteristics (Kreiman & Sidtis, 2011) leading to an impoverished manner of processing (Lavan, Burton, Scott, & McGettigan, 2019). In contrast, the existence of a preexisting mental representation serves as a point of comparison when processing a familiar voice, and may enable a listener to solve two particular problems-to map together different instances of the same person, and to map apart similar instances of two different people (see Young & Burton, 2017, for an overview of this discussion in the face domain). This pattern of performance implies that the mental representation for a familiar voice may capture information not only about the differences between two speakers but also about the natural variation within a single speaker. It would also predict that familiarity with a speaker may serve as a protective factor when recognising voice clips that vary naturally rather than being too constrained or controlled.
In this regard, a number of studies now exist which contribute to this issue. First, Lavan, Scott, and McGettigan (2016) presented two studies in which speaker discrimination was tested across clips depicting vowel sounds, volitional laughter, and spontaneous laughter. The results suggested that a change in vocalisation (speech to laughter) and a change from volitional to more spontaneous vocalisations, both led to difficulty during a speaker discrimination task. In other words, it was hard to generalise identity cues across the different types of speaker clips, suggesting that vocal variability represented a challenge. Importantly, although personally familiar listeners performed better than unfamiliar listeners (Experiment 2), both listener groups were equally affected by vocal variety suggesting that familiarity did not confer an advantage as might have been predicted.
In contrast, the results of a sorting task suggested a different picture. Borrowing from the face domain, a sorting task involves the presentation of instances of two or more identities, with these instances allowed to vary naturally through the use of what Jenkins, White, van Montfort, and Burton (2011) called "ambient stimuli." The participant's task is to sort the ambient stimuli into clusters to reflect the number of identities they perceive. Correctly grouping the instances for one identity into a single cluster represents the ability to successfully map different instances of the same person together. Conversely, mixing identities within a single cluster represents a failure when mapping similar instances of two different people apart. In this way, the sorting task presents a simple yet powerful methodology which successfully separates out these two aspects of recognition. Moreover, it is a task which can be undertaken regardless of one's familiarity with the stimuli, allowing task demands to be held constant.
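The two error types that the sorting task separates can be scored mechanically from a participant's response. The following Python sketch is a simplified illustration; the data format and metric names are assumptions for exposition, not the authors' scoring code.

```python
def sorting_metrics(clusters):
    """Score a voice-sorting response.

    `clusters` is a list of clusters; each cluster is a list of true identity
    labels for the clips a participant placed together. Returns
    (n_clusters, n_mixed): more clusters than true identities reflects
    telling-together errors, while n_mixed counts clusters containing more
    than one identity (telling-apart errors).
    """
    n_clusters = len(clusters)
    n_mixed = sum(1 for c in clusters if len(set(c)) > 1)
    return n_clusters, n_mixed

# Three identities (A, B, C) sorted into four clusters, one of them mixed:
resp = [["A", "A", "A"], ["B", "B"], ["B", "C"], ["C", "C"]]
print(sorting_metrics(resp))  # -> (4, 1)
```

A perfectly familiar listener would ideally produce exactly one pure cluster per identity, i.e. (3, 0) for this example.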
When the sorting task was used with faces (Jenkins et al., 2011), the results suggested a difference in the processing of familiar and unfamiliar perceivers. Both groups were able to tell faces apart (producing few mixed-identity clusters). However, familiarity significantly enabled participants to tell faces together such that the familiar perceivers accurately sorted the instances into fewer identity clusters than the unfamiliar perceivers. These results have since been replicated (Andrews, Jenkins, Cursiter, & Burton, 2015;Redfern & Benton, 2017;Zhou & Mondloch, 2016) suggesting a robust benefit of familiarity when coping with natural variation to tell different instances of the same person together.
When the same task was applied in the voice domain, the results were remarkably consistent with those obtained with faces. Lavan et al. (2018) used three pairs of female voices from the TV series "Orange is the New Black." Familiarity was manipulated by recruiting participants who had either watched the TV series or who had not. As with faces, the task was to sort speaker instances into clusters to reflect the number of identities that the participant perceived, and performance was evaluated in terms of the ability to tell speakers together and to tell speakers apart. Overall, the results replicated the pattern obtained with faces. Familiarity better enabled listeners to tell voices together without affecting their ability to tell voices apart. Moreover, the results were replicated using a different set of stimuli from the TV series Breaking Bad, both when stimuli portrayed low expressiveness (i.e., neutral speech clips) and when they portrayed high expressiveness (i.e., shouting or strained speech clips). As such, the sorting task has elicited a robust and consistent pattern of performance across (faces and) voices suggesting that familiarity helps listeners to tell instances from the same identity together.
Two aspects of the current findings warrant some consideration. First, putting the results of Lavan et al. (2016) alongside those of the voice sorting task highlights a contradiction. The results of the voice discrimination task suggested that familiar listeners performed better overall but were as affected as unfamiliar listeners by vocal variability. In contrast, the results of the sorting task suggested that familiar listeners were less affected than unfamiliar listeners by vocal variability and thus were better able to tell instances of the same voice together. This contradiction may have arisen through the use of stimuli which differed both in their nature, and in their basis for familiarity. Specifically, the voice discrimination task used stimuli that consisted of vowels or laughter clips, and that were personally familiar, whereas the sorting task used stimuli that were speech-based and were publicly familiar (famous celebrities). The change from vowels and laughter clips to speech-based clips arguably enabled the use of stimuli that were richer in both duration and vocal variety. Consequently, familiarly may have conferred more of an advantage when using speech clips because there was more vocal variety for listeners to cope with. Similarly, the change from personally familiar to celebrity voices may have reflected a change in the fundamental nature of the stored mental representation that a listener could draw on. Personally familiar stimuli may be represented by a richer representation capturing the vocal variety that a listener experiences through personal contact. On the contrary, celebrity stimuli may be represented by a weaker representation given either less exposure to celebrity voices than faces (see Barsics & Brédart, 2011;Brédart, Barsics, & Hanley, 2009), or less exposure to the full range of listening conditions, or intraspeaker variations that characterise human speech. 
In the face domain, this has led researchers to suggest that we may process celebrity stimuli in quite a different way to personally familiar stimuli (Carbon, 2008;Wiese et al., 2018) and there is no empirical reason to suggest that this may not also be the case in the voice domain. With this in mind, the natural next step is to use personally familiar speech-based stimuli within the sorting task to see whether the previous pattern of results is replicated. This is the primary purpose of the current study.
The second aspect that warrants consideration rests on the existence of subtle item effects within Lavan, Burston, and Garrido's (2018) sorting task. Specifically, one of the three pairs of voices tested (Set 1) could be grouped together with equal ease across familiar and unfamiliar listeners, as judged by the number of clusters created. Yet, this pair revealed more "telling apart" errors for unfamiliar than familiar listeners, as shown through a greater number of mistakes when grouping voices together. The authors tentatively suggested a role for either speaker distinctiveness or speaker variability in accounting for these item effects, but an analysis of vocal characteristics including valence, arousal, pitch, and apparent vocal tract length did not help in understanding the observed item effects. Given Burton's (2013) observation that the very pattern of variability within an identity may be a cue to identity in itself, a second aim of the present study was to provide a more detailed examination of potential item effects.
The present study used a voice sorting task with listeners who were either personally familiar or unfamiliar with a set of voices. Rather than using two voices as in previous studies, the use of three voices here provided a more ecologically valid sorting task by increasing the variability of the voice clips within the sorting set. The impact of familiarity was examined both when telling voices together and when telling voices apart. On the basis of previous evidence, it was predicted that familiarity would help listeners to tell clips of the same identity together. However, it was predicted that there would be no significant benefit when telling identities apart given a low likely incidence of confusion errors. In addition, an analysis of vocal characteristics was presented across the three speakers, with the prediction that performance on the sorting task may be linked to the degree to which each speaker in the set vocally stood out from the others.
Design
A voice sorting task was used in which participants were asked to sort a set of voice clips into identity clusters. Critically, and unbeknownst to them, participants were either personally familiar or unfamiliar with the speakers providing the clips. Dependent variables included the number of identity clusters following sorting, the number of "intrusion errors" within each cluster, and the self-rated confidence in the solution. In addition, measures of within-speaker clustering, and cross-speaker mistakes were examined, together with a misidentification index for each voice clip.
Participants
A total of 45 participants (26 females, 19 males) took part in the present study, either on a volunteer basis or in return for course credit. Of these, 22 participants (13 females) were "familiar" with the speakers, in that they had either been taught by all speakers, or were colleagues within the same research group. Teaching contact took the form of twice-weekly lectures across a minimum of 5 weeks within 1 to 3 months of participation, supplemented by tutorials, informal conversational exchanges, and access to audio recordings provided by the speakers as support for teaching. As such, the familiar group of participants were deemed to be familiar with the speakers in terms of recency, breadth, and depth of interaction. In contrast, the remaining 23 participants (13 females) were "unfamiliar" with the speakers in that they did not know, and had not been taught by, any of the speakers.
Participants varied in age from 18 to 29 years (M = 22.02, SD = 4.36) minimising the risk of age-related hearing loss. In addition, all were native English speakers, or had lived in the United Kingdom for at least 7 years, removing the potential for speech comprehension difficulties.
Materials
The stimuli for the present study comprised 52 speech clips as read by three female Caucasian members of the Psychology Teaching Staff. The speakers were aged 36, 44, and 49 years at the time of recording and all were nonsmokers. All spoke English as a first language and had a British accent which varied slightly in regional vowel sounds.
The speech clips consisted of excerpts from Mr Tickle© (Hargreaves, 1971) which were taken from a complete reading of the Mr Tickle extract as used in the British Library "Your Voices" project on regional and national accents (http://www.bl.uk/learning/langlit/sounds/yourvoices/your-accent/). All three speakers provided a complete recording of the extract, providing natural variation in intonation across the extract "as if reading to a small child." Speech was captured on an Olympus VN-541PC Digital Voice Recorder, with 4 GB flash memory, set to record in "memo" mode with a low-cut filter providing noise cancellation. Variation in ambient noise was minimised through all recordings being obtained in the same quiet recording room.
From these complete extracts, Audacity 2.1.0 was used to obtain 52 speech clips for each speaker reflecting self-contained sentences or phrases. These ranged in length from 0.9 to 6.34 s. From these three sets of 52 speech clips, 19 clips were selected from Speaker A, 17 clips were selected from Speaker B, and 16 clips were selected from Speaker C such that each clip was spoken by only one speaker, and all clips together comprised the entire Mr Tickle extract. The use of an unequal number of clips from each speaker was purposeful, and sought to work against any task demands in which participants may assume a need to form identity clusters of equal sizes. The remaining speech clips were discarded.
In addition to these Mr Tickle speech clips, the voices of six unfamiliar male speakers provided 16 speech samples for use in a practice trial. These were all drawn from the SuperIdentity stimulus database, and each sample consisted of the speaker uttering one of several scripted phrases. The 16 samples comprised four phrases from Speaker 1, four phrases from Speaker 2, three phrases from Speaker 3, two phrases from Speaker 4, two phrases from Speaker 5, and one phrase from Speaker 6. Again, the use of an unequal number of clips from each speaker here sought to minimise any task demands to create identity clusters of equal sizes.
Finally, a pre-experimental questionnaire was prepared which took the form of a paper-based familiarity rating task. The names and faces of 18 University staff members were depicted alongside a familiarity rating scale ranging from 1 (not at all familiar) to 7 (highly familiar). The staff members included men and women drawn from psychology and nonpsychology staff. Critically, the three psychology lecturers who provided the speech clips were included in this questionnaire which thus served both as a familiarity rating task, and as a priming mechanism to reduce the risk of a "tip of the tongue" state.
Experimental stimuli were presented, and data were recorded using PowerPoint running in edit mode rather than slideshow mode so that the participants could interact with the stimuli on each slide. Written instructions were embedded within the PowerPoint slides and were displayed on the 13ʺ colour monitor attached to a MacBook Air laptop running OS X El Capitan (Version 10.11.6). Sound was presented via the computer speakers with the volume preset at a comfortable but adjustable level.
Procedure
Following the provision of informed consent, participants were tested individually within a quiet research cubicle. Before the voice sorting task began, participants were first asked to complete the familiarity rating questionnaire. This was presented as an unrelated task. In actual fact, the rating questionnaire allowed the experimenters to prime the participants to the identity of the three speakers and to obtain a familiarity rating for each speaker. On the basis of these ratings, the assignment of participants to the "familiar" and "unfamiliar" groups was verified.
Following the familiarity rating task, participants were introduced to the experimental task. This was described as a voice sorting task, with the method mimicking the free-sort face task used by Jenkins et al. (2011) and Andrews et al. (2015) and used more recently with voices by Lavan, Burston, and Garrido (2018). Participants were instructed that they would be presented with a set of voice clips, which appeared as loudspeaker icons on a PowerPoint slide. Clicking on each loudspeaker icon enabled each recording to be played. Their task was to listen to each voice clip, and then drag them to form an unspecified number of identity clusters such that all the clips within one cluster would represent one speaker, and all the clips within another cluster would represent another speaker. As such, the participants were instructed that the number of clusters left on the slide at the end of the process would reflect the number of speakers that they felt were present across the set of clips. Participants were encouraged to listen to each clip as many times as they wished, and were shown how to adjust the playback volume. Finally, they were asked to indicate their confidence in their final solution by dragging a number from 1 (not at all confident) to 7 (very confident indeed) from the onscreen display into a marked "confidence" box.
After the opportunity to ask any clarifying questions, participants completed one practice trial with the 16 unfamiliar male speakers. This enabled participants to get used to the format of the task, and feedback was given on their performance by revealing the true number of practice trial identities. The practice trial also enabled participants to appreciate that there could be an unequal number of instances of each speaker, and participants were able to reflect privately on their strategy and their accuracy prior to the main trial.
Following the practice trial, participants completed the main Mr Tickle trial which involved sorting the 52 clips that made up the Mr Tickle excerpt. These were initially arranged in a fixed-random order rather than in a sequential story order to minimise any perception that one clip may flow into the next either semantically, or in terms of speaker identity. Participants dragged the Mr Tickle clips to form identity clusters, and indicated their confidence in their solution as before (see Figure 1).
Finally, participants were asked whether they spontaneously recognised, and could name, any of the speakers in the Mr Tickle trial. If they could not spontaneously name the speakers, one final clip of each speaker was available, in which they all uttered the same scripted phrase. Participants were asked to type a name, or other identifying information, into a box beneath each loudspeaker icon to indicate the identity of the speakers. Together, the spontaneous naming task and the cued naming task served as a final test of familiarity with the speakers' voices. The entire procedure lasted approximately 45 min, after which participants were thanked and debriefed.
Results
Prior to analysis, participant familiarity with the three speakers was examined through both the pre-experimental rating task, and the post-experimental spontaneous or cued naming tasks. In terms of familiarity ratings, two participants indicated familiarity with only one of the three speakers. At the post-experimental stage, only that one speaker elicited either a name or unique identifying information either spontaneously or when cued with the additional clip as a prompt. These two listeners were quite different from the remaining participants in the familiar group who gave ratings of 4 or more for all speakers, and who gave a positive identification by name or by unique identifying information at the post-experimental stage. The two participants who failed to reach these strict criteria were dropped from all subsequent analyses, leaving 20 participants in the personally familiar group, and 23 participants in the unfamiliar group.
Given that the familiarity ratings were not normally distributed for both listener groups according to Shapiro-Wilk tests (both ps < .021), a Mann-Whitney U test was used to check the difference in speaker familiarity across the two groups. As anticipated, this confirmed that the familiar group (M = 5.95, SD = .85, Median = 6, Mode = 7) did indeed show a significantly higher rated familiarity with the speakers than the unfamiliar group (M = 1.54, SD = 1.23, Median = 1, Mode = 1; U = 5.59, p < .001).
With this established, performance on the voice sorting task was assessed by means of a number of dependent measures. First, the number of identity clusters and the number of "intrusion errors" were calculated as in Jenkins et al.'s (2011) face sorting task. These provided overall measures indicative of telling together and telling apart for the familiar and unfamiliar listeners alike. Second, the matrices of within-identity performance and the cross-identity performance were calculated as per Lavan et al.'s (2018) voice sorting tasks. These provided more nuanced measures of telling together and telling apart which could be separated by speaker identity. Third, a misidentification index was calculated as per the same voice sorting tasks. This provided a single score per voice clip which combined "telling together" and "telling apart" to represent the extent of confusability for each clip taken individually. Finally, self-rated confidence in the solution was examined, providing a metacognitive measure of performance alongside the behavioural measures above. Shapiro-Wilk tests for all measures within each listener group indicated that the data were not normally distributed (p < .05 for all measures). Consequently, nonparametric tests were used both when exploring familiarity effects and item effects.
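The testing strategy described above (Shapiro-Wilk screening followed by nonparametric group and within-group comparisons) can be sketched as follows. This is an illustrative reconstruction using simulated data, not the authors' analysis code; the function name and the simulated distributions are our own assumptions.

```python
import numpy as np
from scipy.stats import shapiro, mannwhitneyu, friedmanchisquare

def compare_listener_groups(familiar, unfamiliar, alpha=.05):
    """Screen both groups for normality, then run a two-sided Mann-Whitney U test."""
    normal = all(shapiro(g)[1] >= alpha for g in (familiar, unfamiliar))
    u_stat, p = mannwhitneyu(familiar, unfamiliar, alternative="two-sided")
    return normal, u_stat, p

# Simulated, skewed scores standing in for the real data
rng = np.random.default_rng(0)
fam = rng.exponential(1.0, size=20)   # e.g., error counts, familiar group (n = 20)
unf = rng.exponential(5.0, size=23)   # unfamiliar group (n = 23)
normal, u_stat, p = compare_listener_groups(fam, unf)

# Within one listener group: Friedman test comparing the three speakers
a, b, c = (rng.exponential(s, size=23) for s in (1.0, 1.2, 0.5))
fried_stat, fried_p = friedmanchisquare(a, b, c)
```

If the Shapiro-Wilk check fails (as it did here for all measures), the Mann-Whitney result is the one to report; the Friedman test plays the role of the repeated-measures comparison across Speakers A, B, and C.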
Number of identity clusters
The perceived number of identity clusters was determined by examination of the spatial arrangement of loudspeaker icons on the PowerPoint slide at the end of the "Mr Tickle" trial. This represented the number of speakers that each participant thought was present across the 52 Mr Tickle clips and, in essence, this reflected the ability of the listener to tell different clips of the same speaker together. In this regard, the familiar group indicated between 3 and 4 identities, with a mean of 3.20 identities (SD = .41), and a modal value of 3 identities indicated by 16/20 participants. In contrast, the unfamiliar group indicated between 2 and 18 identities, with a mean of 6.87 identities (SD = 4.28) and a modal value of 3 identities indicated by 4/23 participants (see Figure 2 and Table 1).
Two Bonferroni-corrected one-sample Wilcoxon signed-rank tests revealed that the familiar group produced a solution which did not differ from the truth (three clusters) (W = 2.00, p > .025), whereas the unfamiliar group produced a solution which deviated significantly from the truth (W = 3.75, p < .001). Moreover, direct comparison using a Mann-Whitney U test showed that the familiar and unfamiliar groups differed significantly from one another in the perceived number of identity clusters (U = 4.07, p = .001). As such, the results indicated that familiarity with the speakers improved the ability to tell different clips of the same speaker together. These results thus confirmed the predictions based on results of the face sorting task by Andrews et al. (2015) and Jenkins et al. (2011) and the voice sorting task by Lavan et al. (2018).
Number of intrusion errors
The number of intrusion errors was determined in the same way as Jenkins et al. (2011) and reflected the purity of the identity clusters in a participant's final solution. An intrusion error was defined as the presence of a clip belonging to one speaker within a cluster that predominantly contained clips of another speaker. In essence, this reflected the ability of the listener to tell similar clips from different speakers apart, with a higher number of intrusion errors indicating a poorer ability. Each intruder clip was counted once, whether it reflected the same "intruder" or different "intruders." For instance, a cluster of clips belonging to Speaker A, with an intruder clip from Speaker B and two intruder clips from Speaker C, would be classed as showing three intrusion errors. In clusters where there was no majority speaker, such as when one clip from Speaker A was paired with one clip from Speaker B, the cluster was arbitrarily assigned to one identity, and the number of "intruder" clips counted relative to that identity (see Figure 2 and Table 1).

Examination of the number of intrusion errors across the familiar and unfamiliar participant groups revealed a mean of 1.00 errors (SD = 1.81) in the familiar group and a mean of 10.13 errors (SD = 5.64) in the unfamiliar group. More specifically, 11 of the 20 familiar participants reached a perfect solution involving three pure clusters and no intrusion errors, and another six participants made only one intrusion error out of the entire set of 52 clips. In contrast, none of the unfamiliar participants reached a pure three-cluster solution, and instead the unfamiliar group showed a modal value of 7 errors made by 4/23 participants.
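The counting rule described above (assign each cluster to its majority speaker, then count every clip from any other speaker as one intrusion) can be sketched as follows. This is a hypothetical illustration, not the scoring code used in the study; the data layout and function name are our own.

```python
from collections import Counter

def intrusion_errors(clusters):
    """Count intruder clips across a participant's identity clusters.

    `clusters` is a list of clusters; each cluster is a list of the TRUE
    speaker labels of the clips placed in it. Each cluster is assigned to
    its majority speaker (ties broken arbitrarily, as in the paper), and
    every clip from any other speaker counts as one intrusion error.
    """
    errors = 0
    for cluster in clusters:
        _majority, majority_count = Counter(cluster).most_common(1)[0]
        errors += len(cluster) - majority_count
    return errors

# Example from the text: a Speaker A cluster containing one intruder clip
# from Speaker B and two from Speaker C shows three intrusion errors.
solution = [["A"] * 10 + ["B", "C", "C"], ["B"] * 8, ["C"] * 6]
intrusion_errors(solution)  # → 3
```

Note that a no-majority cluster such as `["A", "B"]` yields one error regardless of which identity it is arbitrarily assigned to, matching the scoring rule in the text.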
Comparison by means of a Mann-Whitney U test indicated that the two groups differed significantly in the number of intrusion errors (U = 5.28, p < .001), suggesting that familiarity with the speakers improved the ability to tell voices apart rather than confuse them into mixed-identity clusters. This pattern of errors contrasted with that in the face sorting task (Andrews et al., 2015; Jenkins et al., 2011) and the voice sorting task (Lavan et al., 2018), where the number of intrusion errors was very low for both listener groups. This perhaps reflected the relative difficulty of the face discrimination and voice discrimination tasks per se, but is a point that is considered further in the Discussion.
Within-identity clustering: telling together
To provide comparability with the results of Lavan et al. (2018), individual participant response matrices were generated, representing the grouping of each of the 52 clips with each of the other 51 clips. Replicating the approach taken by Lavan et al., a coding of 1 indicated that two clips were sorted into the same cluster and a coding of 0 indicated that they were not. The resultant matrix for each individual thus illustrated the ability to tell clips of each identity together (in within-identity regions of the matrix) and the ability to tell them apart (in cross-identity regions of the matrix). Figure 3 shows the group averaged matrices, shaded for ease of inspection. A light-coloured cell is indicative of two clips being grouped together and thus is expected in within-identity regions. Conversely, a dark-coloured cell is indicative of two clips being grouped apart and thus is expected in cross-identity regions. By definition, the matrices are symmetrical along the diagonal, and the diagonal represents the (constant) grouping of each clip with itself and is thus meaningless.
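The matrix coding described above can be sketched as follows. This is an illustrative reconstruction under our own assumptions (clips represented by their assigned cluster index and their true speaker label); it is not the authors' analysis code.

```python
import numpy as np

def coclustering_matrix(cluster_ids):
    """Binary matrix with a 1 wherever two clips were sorted into the same cluster."""
    ids = np.asarray(cluster_ids)
    return (ids[:, None] == ids[None, :]).astype(int)

def region_scores(matrix, speakers):
    """Mean co-clustering in within-identity and cross-identity regions.

    The diagonal (each clip with itself) is excluded, as in the paper.
    """
    speakers = np.asarray(speakers)
    same = speakers[:, None] == speakers[None, :]
    off_diag = ~np.eye(len(speakers), dtype=bool)
    within = matrix[same & off_diag].mean()   # telling together
    cross = matrix[~same].mean()              # failure to tell apart
    return within, cross

# A perfect sort of five clips (three from A, two from B)
speakers = ["A", "A", "A", "B", "B"]
perfect = coclustering_matrix([0, 0, 0, 1, 1])
region_scores(perfect, speakers)  # → (1.0, 0.0)
```

A within-identity score near 1 and a cross-identity score near 0 correspond to the light and dark regions, respectively, in the group averaged matrices of Figure 3.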
Analysis was conducted on the overall within-identity score (combining the within-identity clusters across all three speakers). This revealed significantly better performance among familiar listeners than unfamiliar listeners (U = 5.29, p < .001). Moreover, this benefit held when each of the three speakers was taken separately (Speaker A: U = 5.09, p < .001; Speaker B: U = 4.72, p < .001; Speaker C: U = 3.40, p = .001). Thus, familiarity enabled better performance when mapping two clips of the same speaker together.
With data broken down for the three speakers, it was possible to determine whether the ability to map clips together was uniform across the three identities. A Friedman Two-Way Analysis of Variance (ANOVA) by Ranks was conducted for the familiar listeners and for the unfamiliar listeners taken separately. This revealed no significant difference across the three speakers when the listeners were familiar with them, FM (2) = 1.19, p = .552, suggesting that all three speakers could be clustered together with equal ease. In contrast, a significant difference emerged across speakers when the listeners were unfamiliar with them, FM (2) = 20.21, p < .001. Nonparametric and Bonferroni-corrected pairwise comparisons indicated that Speaker C was clustered together better than both Speaker A (W = 3.88, p < .001) and Speaker B (W = 3.10, p = .002), and that Speaker A and Speaker B did not differ from one another (W = 1.95, p = .052). Thus, although Speaker C was better clustered together when familiar than when unfamiliar (above), it was nevertheless better clustered together by unfamiliar listeners compared with the other two speakers (see Figure 4 and Table 2).
Cross-identity confusion: telling apart
While the analysis above concentrated on clustering of clips with others of the same identity, the matrices also revealed the tendency to cluster clips with others of different identities. These cross-identity confusions represented a failure to tell different speakers apart and were revealed by high scores (light squares) within the cross-identity regions of the matrix. Analysis was conducted on the overall cross-identity scores (combining cross-identity clusters across all three speakers). This revealed significantly better performance for familiar listeners than for unfamiliar listeners in the form of lower scores for cross-identity clusters (U = 4.45, p < .001). As above, this benefit held when confusion of each speaker with each other speaker was analysed in turn (confusion of Speakers A and B: U = 4.31, p < .001; confusion of Speakers A and C: U = 3.49, p < .001; confusion of Speakers B and C: U = 3.71, p < .001). Thus, familiarity enabled a better performance when mapping two different speakers apart (see Figure 5 and Table 3). As with the number of intruders considered previously, the pattern here deviated from that with faces (Andrews et al., 2015; Jenkins et al., 2011) and from that with voices in previous studies (Lavan et al., 2018).

To determine whether any of the speakers was any more confusable than the others, analysis of the confusion between speakers was examined for each confusion pair in turn, within each of the listener groups. A Friedman Two-Way ANOVA by Ranks was again used. This revealed no significant difference in confusability for the three pairs of identities when listeners were familiar with the speakers, FM (2) = 4.67, p = .097. Indeed, the probability of speaker confusion for each of the pairs suggested that confusion was relatively infrequent. In contrast, analysis of cross-identity confusions among unfamiliar listeners revealed a significant difference in the confusability of the three pairs of identities, FM (2) = 23.88, p < .001.
Nonparametric and Bonferroni-corrected pairwise comparisons revealed that confusion of either Speaker A or B with Speaker C was relatively rare, and was significantly less frequent than confusion of Speakers A and B (AB vs. AC: W = 3.36, p < .001; AB vs. BC: W = 3.82, p < .001; AC vs. BC: W = .196, p = .845). Thus, Speaker C was mistakenly clustered less often when listeners were familiar with the speaker than when unfamiliar, but nevertheless, Speaker C was mistakenly clustered less often than Speakers A and B even when listeners were unfamiliar with all voices.
Misidentification index and speaker confusability
The final measure of the accuracy of telling together and telling apart was the misidentification index as used in previous voice sorting tasks. This was calculated by subtracting the probability of a mistaken clustering from the probability of an accurate clustering for each voice clip (P(within-identity score) - P(cross-identity score)). Scores varied between 0 and 1, with a score of 1 indicating perfect clustering of a clip with other clips from the same speaker, and never with other clips from different speakers.
The misidentification index was calculated on a participant by participant basis for each and every clip (see Figure 6). When averaged across the clips associated with each speaker, this yielded a single score representing the misidentification index for that speaker.
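The per-clip calculation described above can be sketched as follows, building on the binary co-clustering matrix introduced earlier. This is a hypothetical illustration under our own assumptions about the data layout, not the study's scoring code.

```python
import numpy as np

def misidentification_index(matrix, speakers):
    """Per-clip P(within-identity grouping) - P(cross-identity grouping).

    `matrix` is the binary co-clustering matrix for one participant
    (1 where two clips were placed in the same cluster); `speakers`
    holds the true speaker label of each clip. The diagonal is excluded.
    """
    speakers = np.asarray(speakers)
    n = len(speakers)
    same = speakers[:, None] == speakers[None, :]
    off_diag = ~np.eye(n, dtype=bool)
    scores = np.empty(n)
    for i in range(n):
        p_within = matrix[i, same[i] & off_diag[i]].mean()
        p_cross = matrix[i, ~same[i]].mean()
        scores[i] = p_within - p_cross
    return scores

# A perfect sort of four clips (two from A, two from B) scores 1.0 per clip
perfect = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 1, 1],
                    [0, 0, 1, 1]])
misidentification_index(perfect, ["A", "A", "B", "B"])  # → [1., 1., 1., 1.]
```

Averaging these per-clip scores within each participant, across the clips belonging to one speaker, yields the per-speaker index analysed in the text.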
Comparison across listener groups by means of three Mann-Whitney U tests revealed better performance for familiar than for unfamiliar listeners for each of the speakers (Speaker A: U = 5.22, p < .001; Speaker B: U = 5.13, p < .001; Speaker C: U = 4.07, p < .001), suggesting that familiar listeners were more able both when telling together clips from the same speaker and when telling apart clips from different speakers.
As above, a Friedman Two-Way ANOVA by Ranks was used to see whether the average misidentification index across sets of clips differed for the three speakers. For familiar listeners, no significant difference was evident, FW (2) = 1.56, p = .459, suggesting that all three speakers could be mapped together and mapped apart with equal ease. In contrast, and as with the previous results, unfamiliar listeners showed a significant difference across the three speakers, FW (2) = 22.52, p < .001. Nonparametric, Bonferroni-corrected pairwise comparisons suggested that the misidentification index showed significantly better performance for Speaker C compared with Speaker A (A vs. C: W = 4.11, p < .001) and compared with Speaker B (B vs. C: W = 3.38, p < .001). However, Speakers A and B did not differ (A vs. B: W = 1.74, p = .078).

Figure 6. Misidentification index (P(within-identity match) - P(cross-identity match)) for each item. A high score indicated better clustering with clips of the same identity than with clips from different identities. Top Panel: Results averaged across familiar listeners. Bottom Panel: Results averaged across unfamiliar listeners.
Analysis of speaker characteristics
Analysis of the speaker characteristics associated with each speaker provided some evidence in support of the differences in speaker confusability noted above. PRAAT (Version 6.0.43) was used to extract a number of vocal characteristics (see Table 4). These focussed on measures associated with fundamental frequency (F0), and formant characteristics, given their prominence within the literature (see Baumann & Belin, 2010;Latinus & Belin, 2011). Characteristics were extracted from manually defined voiced segments of each speech clip, using settings appropriate for normal adult speakers (pitch range = 75-300 Hz, intensity range = 50-100 dB).
A series of between-items nonparametric Kruskal-Wallis ANOVAs were conducted to determine whether the three speakers differed on any of the extracted vocal characteristics. Correcting for the number of tests performed by adopting an alpha level of .005, these analyses revealed significant speaker differences in four of the 10 measures. Bonferroni-corrected pairwise comparisons were conducted to establish the pattern of differences across the three speakers. These typically revealed a difference between Speaker C and one or both of the other speakers (see Table 5 and Figure 7). This supported the findings above that Speaker C was least often confused based on the misidentification index (see Figure 6) because Speaker C stood apart from the other two speakers. In contrast, Speakers A and B were seen to differ on only one of the vocal characteristics (second formant characteristic, F2) supporting the observation of their high rate of confusability.
Surprisingly, there were no overall differences in any of the four measures connected to fundamental frequency (perhaps because all speakers were female), or in harmonics-to-noise ratio (HNR) and first formant characteristic (F1). As such, this analysis offered some insight into the vocal characteristics that contributed to the observed pattern of confusability while acknowledging that additional characteristics not captured here may also contribute to performance.
A consideration of particular items
Graphical examination of the misidentification index per clip averaged for each listener group (Figure 6) suggested that some clips were harder to tell together and tell apart than others, as indicated by lower misidentification index scores. Visual analysis highlighted two clips for Speaker A (A4, A7), one clip for Speaker B (B2), and three clips for Speaker C (C6, C13, C14) which stood out from their respective sets. This was confirmed by their identification as outliers (according to the standard of 1.5 × interquartile range above the Q3 value or below the Q1 value). Somewhat surprisingly, an exploratory analysis of the acoustic properties associated with these clips suggested that they did not stand out as outliers from their respective identity sets on any of the extracted vocal measures. It was notable that A4 was particularly short (1.02 s of speech relative to a mean speech length of 2.42 s). The short duration may have made it difficult for the listener to extract key vocal characteristics, with a consequential rise in confusability. In addition, B2 had one of the highest maximum pitch values in that speaker set, with an initial and sustained high pitch that better resembled Speaker A. Indeed, Speaker A (the first author) identified herself in this clip on first listening. Nevertheless, the current set of metrics makes it difficult to attribute this confusion to a particular measurable characteristic. In this sense, it is clear that voice clips vary in multiple dimensions and this richness is undoubtedly not fully captured by the metrics selected here. Much more work would be required to untangle the speaker characteristics contributing to confusability within identity sets. However, the analysis above has presented some helpful contenders when examining vocal characteristics at the level of the speaker, if not at the level of the individual clip.
Correlations between measures
One interesting question concerned the extent to which the capacity to tell different clips of the same speaker together was associated with the capacity to tell similar clips of different speakers apart. To this end, a Spearman's bivariate correlation was computed between the number of identity clusters, and the number of intrusion errors for the familiar group and the unfamiliar group separately. This revealed a strong and significant correlation between the two measures when familiar with the speakers, r (20) = .63, p = .003, but not when unfamiliar with the speakers, r (23) = -.11, p = .63. In addition, when the more nuanced averaged matrix scores were considered, the ability to tell voices together in within-identity regions was strongly correlated with the ability to tell voices apart in cross-identity regions when familiar with the speakers, r (20) = .943, p < .001, but not when unfamiliar with the speakers, r (23) = .212, p = .333. This suggested that familiar listeners were able to both tell voices together and tell voices apart, whereas the unfamiliar listeners showed no association between these two capabilities.
Self-rated confidence
Finally, the current design permitted examination of participants' self-rated confidence in the final solution, and this provided a metacognitive measure of performance alongside the behavioural measures above (see Table 1). Comparison by means of a Mann-Whitney U test showed a clear and significant difference (U = 4.18, p < .001) such that participants who were familiar with the speakers expressed far higher confidence in their final solution (M = 5.70, SD = 1.13) than participants who were unfamiliar with the speakers (M = 3.61, SD = 1.37).

Figure 7. Scatterplot of the clips of each speaker set, organised according to the vocal characteristics that differentiated best between speakers (Formant Dispersion, F3, and F4). Within each plot, the dots associated with Speaker C tended to be differentiated from those associated with Speakers A and B.
Discussion
The current study explored the performance of familiar and unfamiliar listeners when processing vocal identity. The use of a voice sorting task with familiar and unfamiliar listeners allowed performance to be evaluated using a single set of stimuli and a common task, and this represented an improvement over previous approaches. With this cleaner methodology, the current results highlighted several notable findings.
First, familiar listeners were significantly more able than unfamiliar listeners to tell different instances of the same speaker together, as indicated by both the number of resultant identity clusters following sorting, and the within-identity clustering scores. This outcome met with expectations following the use of the sorting task with faces (Andrews et al., 2015; Jenkins et al., 2011) and the use of the sorting task with voices (Lavan et al., 2018). Furthermore, it cements the conclusion that familiarity with an identity helps the perceiver to map different instances of that identity together despite inherent variability between one instance and the next.
Second, familiar listeners performed significantly better than unfamiliar listeners when the misidentification index was considered. Given the advantage in within-identity clustering discussed above, a benefit of familiarity when considering this overarching measure was perhaps to be expected. Added to this, familiar listeners were more confident in their sorting ability than unfamiliar listeners, suggesting a benefit at a metacognitive level as well as at a behavioural level.
As an extension to previous work, the current study also enabled an examination of the variation in ability to sort both across speakers and across individual clips in each speaker set. In this regard, although familiar listeners could tell together and could tell apart instances of all three speakers, and were better than the unfamiliar listeners for all three, the unfamiliar listeners found some speakers easier to sort than others. Analysis of the vocal characteristics suggested some differentiation of the speakers on characteristics related to formant characteristics. In fact, one voice stood apart from the other two and may be regarded as distinctive. As a result, this distinctive speaker was easier to tell together (higher within-identity clustering) and was easier to tell apart from the other speakers (lower cross-identity clustering). Interestingly, the relative ease of sorting associated with this distinctive speaker was not sufficient to remove a benefit of familiarity with the speaker's voice here. Somewhat surprisingly, the analysis of vocal characteristics did not help to elucidate why some clips within each speaker set caused more problems than others. Nevertheless, the analysis suggested how vocal characteristics may be useful in spotting distinctiveness and thus in spotting ease of performance in a sorting task. The benefit of knowing this is that it serves as a reminder that item effects can represent an important consideration especially in studies which use few speakers as items.
In one respect, the results of the present sorting task were, however, surprising. Specifically, familiar listeners were significantly more able to tell similar instances of two different speakers apart, as indicated by fewer intrusion errors following sorting, and lower cross-identity clustering scores. This better performance with familiarity aligns well with the results discussed above. Nevertheless, this particular benefit when telling voices apart was not predicted given that all previous uses of the sorting task had suggested a low incidence of mixed clusters for familiar and unfamiliar participants alike, both when processing faces (Andrews et al., 2015; Jenkins et al., 2011) and voices (Lavan et al., 2018). This difference in results may be explained by considering the differences in stimuli across studies. For instance, a difference between the results with faces and those here with voices may be explained by the fact that face processing is a somewhat easier task than voice processing (see Gainotti, 2011; Hanley et al., 1998). Consequently, when faces were considered, unfamiliar perceivers were able to complete one of the two aspects tested by the sorting paradigm (they could tell faces apart as effectively as familiar perceivers). In contrast, and given the relative difficulty of the voice processing task, unfamiliar listeners here struggled with both aspects tested by the sorting paradigm (both telling voices apart and telling voices together).
This said, the current voice sorting task provided a different pattern of performance compared with Lavan et al.'s (2018) voice sorting tasks, and this warrants closer inspection. In this regard, several differences existed between the current and previous voice sorting studies. First, the current study used British listeners and British speakers, which avoided any difficulties associated with processing an unfamiliar accent (see Stevenage, Clarke, & McNeill, 2012). By comparison, the studies by Lavan et al. (2018) used speakers with American English accents and may have introduced an other-accent effect when testing a British University participant pool. The possibility of other-accent effects thus cannot be ruled out. This being the case though, it is difficult to see why the unfamiliar listeners used by Lavan et al. performed well and were equivalent to the familiar listeners when telling voices apart despite the possibility of an other-accent effect (while those tested here were worse than the familiar listeners). This suggests that although it may be best to avoid the introduction of an other-accent effect, it may not account for the difference in results across studies.
A second difference between the present study and those of Lavan et al. (2018) relates to the use of three speakers with unequal set sizes in the current study as opposed to two speakers with equal set sizes in previous studies. This is perhaps a trivial point, but it may carry a perceptual consequence for the participants in that the use of three speakers will have resulted in a voice set displaying greater vocal variability than that associated with just two speakers. Familiar listeners could resolve this variability well. However, this variability may have contributed both to the perception of more identities (more clusters) and to greater confusion between identities (more intrusion errors or cross-identity grouping) among unfamiliar listeners than is evident in previous voice sorting tasks. As such, the use of three identities within the current sorting task arguably may have provided a more realistic test of voice sorting ability which enabled confusion errors to emerge in the unfamiliar listeners.
A third difference between the present study and those of Lavan et al. (2018) is the use of scripted speech (here) versus spontaneous-yet-acted speech in the previous studies. In this regard, it is possible that the use of scripted speech here resulted in clips that were more uniform across speakers, with the result that telling voices apart was more difficult in the present study, especially for unfamiliar listeners. This may have removed any ceiling effects present in Lavan et al.'s studies, enabling a difference between familiar and unfamiliar listeners to emerge in cross-identity confusions as well as in the number of perceived identities. Without any vocal metrics associated with voice clips, the possibility of a difference in uniformity across scripted and spontaneous clips is difficult to evaluate. In contrast, the vocal metrics reported by Lavan et al. (2018) and the vocal metrics reported here did not readily account for the pattern of performance when sorting individual speaker clips. Nevertheless, variability of the clips used across studies certainly warrants further attention, with the current data suggesting that differences at the speaker level may be indicative of differences in sorting ability.
Finally, the current study used personally familiar voices as stimuli rather than the publicly familiar (celebrity) voices used by Lavan et al. (2018). In the face domain, the basis for familiarity has been the focus of some discussion, with several authors questioning the equivalence of personally and publicly familiar stimuli (see Carbon, 2008; Ramon, Caharel, & Rossion, 2011; Tong & Nakayama, 1999; Wiese et al., 2018). In particular, a concern centred on the possibility that celebrity face processing may reflect item-specific processing relative to a particular stored iconic image rather than processing at the level of the identity itself. Indeed, celebrity recognition was shown to be relatively poor when presented with slightly modified or unfamiliar versions of celebrity faces (Carbon, 2008). In contrast, the processing of a personally familiar individual may rely on a stored mental representation that is richer, more representative, or what Tong and Nakayama (1999) referred to as more robust. In the context of the current study, there is no reason to suppose that the distinction between personally and publicly familiar stimuli cannot be generalised from the face domain to the voice domain. In this regard, the existence of a stronger mental representation for personally familiar voices here may have contributed to the processing differences across studies.
So what differs between the processing of familiar and unfamiliar listeners?
A discussion of the difference in stored representation across different levels of familiarity starts to enable a consideration of what may differ between the voice processing of an unfamiliar listener and the voice processing of a familiar listener. One appealing representational framework draws on the concept of a similarity space (see Leopold, O'Toole, Vetter, & Blanz, 2001;Valentine, 1991). When presented with an unfamiliar stimulus, a perceiver may locate that stimulus within the similarity space based on a superficial analysis of available characteristics, and this may support a temporary ability in a matching task or a discrimination task. However, true recognition arguably depends on the existence of a pre-existing and stored mental representation within the similarity space which is triggered upon presentation of a familiar and recognisable individual.
In the voice domain, the concept of a similarity space has been explored, and Baumann and Belin (2010) have identified two cardinal dimensions along which voices may be differentiated: fundamental frequency and formant characteristics. At its simplest level, each voice identity may thus be located as a point within this two-dimensional voice space, with this point representing an average or a prototype extracted from all experienced instances (see Andics et al., 2010, for discussion of prototype extraction in the voice domain). The success of recognition depends upon the proximity of an instance to its stored prototype rather than to its next nearest neighbour. A point-based representation within similarity space is thus good at accounting for the ability to tell different individuals apart.
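The point-based account can be made concrete as a toy nearest-prototype classifier in a two-dimensional (fundamental frequency by formant measure) space. All speaker labels and values below are invented for illustration, not drawn from the study:

```python
# Toy sketch of a point-based voice space: each known identity is stored as
# the mean (prototype) of its experienced instances, and a new instance is
# recognised as the identity whose prototype lies closest.
# All values are invented (arbitrary f0/formant-like units).

def prototype(instances):
    """Dimension-wise mean of a list of (f0, formant) tuples."""
    n = len(instances)
    return tuple(sum(dim) / n for dim in zip(*instances))

def distance(p, q):
    """Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Experienced instances per identity: (fundamental frequency, formant measure)
experienced = {
    "speaker_A": [(120, 1.05), (118, 1.10), (125, 1.00)],
    "speaker_B": [(210, 1.40), (205, 1.35), (215, 1.45)],
}
prototypes = {name: prototype(inst) for name, inst in experienced.items()}

new_instance = (122, 1.08)
recognised = min(prototypes, key=lambda name: distance(prototypes[name], new_instance))
print(recognised)  # prints 'speaker_A': the closest prototype wins
```

The design choice worth noting is that this model only compares distances between points, which is why it explains telling identities apart but says nothing about how much variability each identity is permitted.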
This said, it is plausible to consider that, with increasing familiarity, comes a refinement of the stored mental representation for a known individual (Lavan, Burton, Scott, & McGettigan, 2019). Indeed, Tong and Nakayama (1999) linked the formation of robust representations to the development of extensive experience or familiarity, with the most robust representations existing for the most familiar stimuli one experiences (such as self, partner, or family members). Our contention, given the results of the present study, is that increasing familiarity may enable the development of a representation which captures both the variability within individuals and the separation between individuals. This would enable the perceiver to map different instances of the same individual together as well as to map different individuals apart.
At a theoretical level, one way to conceptualise this refinement of the stored representation with increasing familiarity is to consider a shift from a representation as a point in similarity space to a representation as a region within similarity space (see Lewis & Johnston's, 1999, discussion of Voronoi cells). Although a point-based representation may enable separation of different identities and may be laid down first, a region-based representation captures the variability of different instances of that identity, and reflects what Vernon (1952, cited in Bruce, 1994) described as the "possible and permissible variations" within an identity. This concept of a representational region is far more than a mechanism to enable a perceiver to develop a tolerance band to cope with noisy or suboptimal presentations, as it may have historically been viewed (see Valentine, 1991). Instead, it is a representation of the meaningful variability that an individual may display across different moments in time. Logically, an appreciation of the variability of an individual may take time, experience, and familiarity to develop, but the consequence of this representational refinement is that a perceiver becomes able to tell different instances of the same person together, as well as being able to tell similar instances of two different people apart.
A consideration of an identity region provides a useful way of accounting for the current results. However, this sort of representational framework can also be extended to incorporate Burton's (2013) recent thinking on variability as a cue to identity in and of itself. Burton considered that, rather than variability merely reflecting noise, the pattern of variability that an individual displays may itself be a characteristic of that individual's identity. In this regard, it is quite possible that one identity may display more or less variability than another, and this may be a meaningful element for a mental representation to capture. This can be accommodated into a region-based view of representations by assuming regions of different sizes for different identities. Accordingly, the success of recognition now depends upon the overlap between these identity regions, and this itself depends upon both the proximity of the region to its nearest neighbour (to tell them apart) and the variability within the identity, or size of the region (to tell the instances of one identity together). In the voice domain, one study has recently begun to quantify variability across different instances within an identity using everyday speech sessions (Kreiman, Park, Keating, & Alwan, 2015). However, it would be interesting to explore whether there is any link between intra-speaker variability, vocal distinctiveness, and performance in a sorting or discrimination task along the lines discussed.
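One simple way to operationalise the region idea is to store, for each identity, a prototype plus a radius scaled to that identity's own observed variability: an instance is told together with an identity when it falls inside the region, and two identities become confusable when their regions overlap. The sketch below is a deliberately minimal illustration with invented values, not a model taken from the literature:

```python
# Toy sketch of a region-based representation: each identity is a prototype
# plus a radius capturing that speaker's own variability. All values invented.

def prototype(instances):
    """Dimension-wise mean of a list of (f0, formant) tuples."""
    n = len(instances)
    return tuple(sum(dim) / n for dim in zip(*instances))

def distance(p, q):
    """Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def region(instances, slack=1.2):
    """(centre, radius): radius scales with the identity's observed spread."""
    centre = prototype(instances)
    radius = slack * max(distance(centre, i) for i in instances)
    return centre, radius

def tells_together(instance, reg):
    """An instance belongs with an identity if it falls inside its region."""
    centre, radius = reg
    return distance(centre, instance) <= radius

def regions_overlap(reg_a, reg_b):
    """Two identities are confusable if their regions intersect."""
    (ca, ra), (cb, rb) = reg_a, reg_b
    return distance(ca, cb) <= ra + rb

a = region([(120, 1.05), (118, 1.10), (125, 1.00)])  # low-variability speaker
b = region([(138, 1.12), (126, 1.02), (150, 1.22)])  # high-variability speaker

print(tells_together((123, 1.04), a))   # True: new clip of A falls in A's region
print(regions_overlap(a, b))            # True: B's broader region reaches A's
```

Note how the larger radius of the more variable speaker produces overlap with a neighbour even though the two prototypes are well separated, which is the region-based analogue of the cross-identity confusions discussed above.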
Conclusions and final thoughts
In summary, the present study has used a voice sorting task to show a difference in vocal identity processing across personally familiar listeners and unfamiliar listeners. Specifically, personally familiar listeners were better able to tell voices together, as shown through the creation of fewer identity clusters and higher within-identity clustering scores. In contrast to previous studies, personally familiar listeners were also better able to tell voices apart, as shown through fewer intrusion errors and lower cross-identity clustering scores. Interestingly, familiarity also led to higher metacognitive evaluations of performance. Although the performance of unfamiliar listeners was significantly worse than that of familiar listeners, it was notable that the performance of unfamiliar listeners was influenced by the voices themselves, with some voices being easier to sort than others. In accounting for these results, a theoretical framework has been discussed which links familiarity with the development of a robust representation capable of capturing both within-identity variation as well as between-identity separation. This may best be reflected by a region rather than a point within a representational space, and a shift to this type of thinking may enable new questions to be asked and answered.
In line with the thinking proposed here, a final observation is to note a similarity between the themes we summarise above and those discussed within the categorical perception literature (see Bornstein, 1987; Liberman, Harris, Hoffman, & Griffith, 1957). Indeed, the categorical perception concepts of within-category compression and between-category separation map nicely onto the concepts of telling together and telling apart, respectively. An interesting literature has tracked the emergence of within-category compression and between-category separation as a consequence of category learning (see Goldstone, 1994; Livingston, Andrews, & Harnad, 1998; Schyns & Rodet, 1997), where categories can refer to different identities (see Beale & Keil, 1995; Stevenage, 1998). In this regard, the literatures relating to familiarity effects in (face and) voice processing may usefully be aligned to the very established literature on categorical perception, with the potential to take both our methodology and our theoretical understanding forward.
Characteristics, Access, Utilization, Satisfaction, and Outcomes of Healthy Start Participants in Eight Sites
To describe the characteristics, access, utilization, satisfaction, and outcomes of Healthy Start participants in eight selected sites, a survey of Healthy Start participants with infants ages 6–12-months-old at time of interview was conducted between October 2006 and January 2007. The response rate was 66% (n = 646), ranging from 37% in one site to >70% in seven sites. Healthy Start participants' outcomes were compared to two national benchmarks. Healthy Start participants reported that they were satisfied with the program (>90% on five measures). Level of unmet need was 6% or less for most services, except for dental appointments (11%), housing (13%), and child care (11%). Infants had significantly better access to medical care than did their mothers, with higher rates of insurance coverage, medical homes, and checkups, and fewer unmet needs for health care. Healthy Start participants' rates of ever breastfeeding (72%) and putting infants to sleep on their backs (70%) were at or near the Healthy People 2010 objectives, and considerably higher than rates among low-income mothers in the Early Childhood Longitudinal Study (ECLS). The high rate of health education (>90%) may have contributed to these outcomes. Elimination of smoking among Healthy Start participants (46%) fell short of the Healthy People 2010 objective (99%). The low-birth weight (LBW) rate among Black Healthy Start participants (14%) was three times higher than the rate for Whites and Hispanics (5% each). Overall, the LBW rate in the eight sites (7.5%) was similar to the rate for low-income mothers in the ECLS, but both rates were above the Healthy People 2010 objective (5%). Challenges remain in reducing disparities in maternal and child health outcomes. Further attention to risk factors associated with LBW (especially smoking) may help close the gaps. The life course theory suggests that improved outcomes may require longer-term investments.
Healthy Start’s emerging focus on interconception care has the potential to address longer-term needs of participants.
death syndrome, maternal complications, and congenital malformations), as well as higher rates of perinatal risk factors [4].
The Institute of Medicine (IOM) proposes that the sources of racial/ethnic disparities in health are "complex, rooted in historic and contemporary inequities" [5]. The historical and social roots of racial/ethnic disparities can be seen at multiple levels. For example, individual-level factors include minorities' lower socioeconomic status and variation in patient knowledge, behavior, and attitudes toward health and health care. At the social and community level, racial/ethnic minorities are more likely to reside in poor neighborhoods and are disproportionately exposed to residential and environmental disease-producing factors [6,7]. Disparities in access to medical care, and cultural and language barriers within health systems, are also sources of health disparities [5][6][7]. As a result, the IOM promotes a comprehensive, multilevel approach to eliminating disparities that targets patients, providers, and health care systems.
The Maternal and Child Health Bureau's (MCHB) Healthy Start program is the largest multilevel initiative to address racial and ethnic disparities in infant mortality. The goal of this federally sponsored, community-based program is to improve maternal and child health outcomes by providing culturally and linguistically competent services, including outreach, health education, and case management, and by enhancing local perinatal health systems through increased collaboration and planning [8]. The nine required program components include five service components (outreach, health education, case management, perinatal depression screening and referral, and interconception care) and four systems components (consortium, collaboration, local health system action plan, and sustainability plan). The integration of efforts to improve both services and systems is a unique feature of the Healthy Start program. Each local project is designed to facilitate access to needed services and fill gaps in services not otherwise available. Thus, the services offered by each project are tailored to local community needs and infrastructure. The theory of change underlying the program suggests that local community involvement will lead to improved services and systems that are tailored to the cultural and linguistic needs of the community. In turn, participants' use of health care and other services is expected to expand, which will then bring about improved perinatal health outcomes and, ultimately, eliminate racial and ethnic disparities in infant mortality.
Interconception care is becoming widely recognized as an important component to improve maternal and child health outcomes [9]. Specifically, interconception care takes place between pregnancies and addresses not only risks indicated by a previous adverse pregnancy outcome, but is also designed to provide preventive health care and encourage birth spacing of at least 2 years between pregnancies. Healthy Start is the first national program to focus systematically on interconception care.
Measuring the progress of Healthy Start programs toward reducing disparities in infant mortality is challenging for two reasons. First, many factors (such as environment, nutrition, and stress) influence perinatal health outcomes over the course of a woman's lifetime [4,10]. As a result, reductions in infant mortality are not likely to be observable within a 1-2-year period. Second, it is not possible to attribute changes in outcomes directly to the Healthy Start program in the absence of a control or comparison group. Thus, alternative strategies were required to measure and interpret the outcomes of Healthy Start participants. To overcome these two challenges, we used (1) "evidence-based" measures of intermediate outcomes that are associated with reduced infant mortality to assess Healthy Start program performance over the short term and (2) national benchmarks, where possible, to help interpret Healthy Start participant outcomes. We conducted a comprehensive literature review of the risk and protective factors that are associated with racial/ethnic disparities in infant mortality to identify measures of effective prenatal, interconception, and infant care practices [4]. Among the factors found to be associated with improved infant outcomes were multivitamin use, smoking cessation, breastfeeding, infants put to sleep on their backs, and birth intervals of at least 2 years. These indicators, among others, were included in the analysis of Healthy Start participant outcomes.
To compensate for the absence of a control or comparison group, we developed two national benchmarks to place Healthy Start participant outcomes in perspective. First, we constructed a benchmark of low-income mothers based on a sample from the Early Childhood Longitudinal Study (ECLS), which provides a nationwide estimate for outcomes of interest. Second, we assessed Healthy Start participant outcomes in relation to Healthy People 2010 objectives [11]. Healthy People 2010 provides national targets for improving the health of all Americans, with specific objectives for maternal and child health. Even though this benchmark strategy helps interpret Healthy Start participant outcomes, it must be recognized that we cannot attribute any differences (positive or negative) to Healthy Start program impacts because we are unable to control for what might have happened in the absence of Healthy Start. This paper is one component of the national evaluation of the Healthy Start program. The evaluation used both qualitative and quantitative methods to assess program implementation, outcomes, and lessons learned. The study used a participatory evaluation strategy that involved ongoing collaboration with and input from the Healthy Start grantees as well as an advisory panel of experts in maternal and child health, evaluation methods, and health care disparities. This paper presents the results of a survey of Healthy Start participants in eight selected sites, which was designed to incorporate the consumer perspective into the evaluation. This paper addresses four questions: (1) What are the characteristics of Healthy Start participants at the eight sites (including their sociodemographic characteristics, health status, and risk factors)? (2) What services do Healthy Start participants receive and what is their level of access to and unmet need for services? (3) To what extent are participants satisfied with the Healthy Start program?
(4) What are participants' perinatal health outcomes and how do they compare against a national benchmark? A companion paper presents the qualitative findings from site visits to these eight projects [12].
Although the analytic approach does not support conclusions related to the impact of Healthy Start, it does identify areas for improving health behaviors, service delivery, and participant outcomes. The remainder of this paper is organized as follows. First, we describe the data source and analytic methods. Next, we present the results related to demographic characteristics, health status and risk factors, access and utilization, satisfaction, and outcomes. Finally, we discuss the implications of these results for the Healthy Start program and for the continued improvement of perinatal outcomes more generally.
Site Selection
The participant survey was conducted in eight sites that were selected based on multiple criteria related to program implementation status and demographic variation [12]. To be considered eligible for selection, grantees had to have reported on the 2004 National Survey of Healthy Start Programs that they implemented all nine Healthy Start components required by HRSA, tracked referrals to providers within and outside of Healthy Start, and maintained electronic records. Of the 96 grantees, about one-fourth (27) were eligible to be selected (Fig. 1). The final sample was designed to reflect the four U.S. census regions, urban/rural areas, racial/ethnic diversity, and small/medium/large program size as determined by funding level and number of live births. The sample was also designed to include a site that was close to the Mexico border and a site that served a predominantly indigenous (American Indian) population. The eight sites were located in Fresno, California; Tallahassee, Florida; Des Moines, Iowa; East Baton Rouge, Louisiana; Worcester, Massachusetts; Las Cruces, New Mexico; Pittsburgh, Pennsylvania; and Lac du Flambeau, Wisconsin. This subset of grantees was not intended to be nationally representative of all Healthy Start grantees. Rather, the sites were selected because they had implemented all nine Healthy Start components, and they captured the sociodemographic diversity of Healthy Start programs.
Survey Administration
The participant survey was conducted via computer-assisted telephone interviewing (CATI) between October 2006 and January 2007. The survey was translated from English into Spanish, and trained interviewers conducted the survey in both languages. In addition, professional health interpreters were on call to translate the survey into eight other languages spoken by Healthy Start participants: Brazilian Portuguese, Hmong, Vietnamese, Creole, Mandarin, Mixteco, Ghanaian Twi, and Arabic. Altogether, 37 interviews were conducted in a language other than English or Spanish.
Interviewers made an average of seven calls before completing the interview, and respondents took an average of 30.2 min to complete the survey. Respondents were sent a $25 gift card upon completion of the survey to compensate them for their time spent completing the survey.
Sample Design and Response
Women were eligible to participate in the survey if they had an infant ages 6-12-months-old at the time of the interview. We used a 6-12-month age criterion to allow enough time to measure postpartum outcomes, but not so much time that women would have difficulty recalling their prenatal and delivery experiences. Each site provided a data file containing contact information for the universe of participants who gave birth between October 2005 and June 2006. In two of the eight sites, the grantees required consent from individual participants before releasing contact information for the survey. The initial sample included the universe of 1,056 eligible cases across the eight sites (Table 1). We excluded cases from two sites for which consent had not been obtained, resulting in a working sample of 821 cases across the eight sites. Of the 821 cases, 646 were completed, 48 were ineligible, and the remaining 127 did not complete the survey. The final response rate (including cases for which consent was not obtained) was 65.7%. Five of the eight sites had response rates above 80% and two sites had response rates between 73% and 75%. In the two sites requiring participant consent before releasing contact information, the response rates were 73.0% and 36.8%; the survey completion rates among those giving consent were 96.4% and 93.8%. Little is known about how non-respondents differ from respondents. Recent research has shown, however, that non-response does not necessarily induce bias in survey estimates [13]. Weights were computed using a weighting class adjustment for non-response to the collection of the consent form (in two sites) and non-response to the interview (in all eight sites). These weights are called consent-form-response-adjusted weights and interview-response-adjusted weights, respectively.
In the two sites requiring consent, the consent-form-response-adjusted weight was the ratio of the number of all cases to the number of cases who returned the consent form within the weighting class. For the interview-response-adjusted weight, the interview-response adjustment factor was the ratio of the sum of consent-form-response-adjusted weights for all cases to the sum of consent-form-response-adjusted weights for respondents within the weighting class. The final interview-response-adjusted analysis weight was the product of the consent-form-response-adjusted weight and the interview-response adjustment factor. In the remaining six sites without a consent process, the interview-response-adjusted weights were computed using a weighting cell adjustment for non-response to the interview only. The interview-response adjustment factor was the ratio of the number of all cases to the number of cases that responded to the interview.
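The two-stage weighting arithmetic described above can be re-expressed in a few lines. The counts below are invented for a single hypothetical weighting class in a consent-requiring site:

```python
# Sketch of the two-stage non-response weighting described above, using
# invented counts for one weighting class in a consent-requiring site:
#   consent adjustment   = all cases / cases returning the consent form
#   interview adjustment = sum of consent-adjusted weights (all weighted cases)
#                          / sum of consent-adjusted weights (respondents)
#   final weight         = consent adjustment * interview adjustment

all_cases = 100          # cases in the weighting class (hypothetical)
consented = 80           # cases that returned the consent form
responded = 60           # consented cases that completed the interview

consent_weight = all_cases / consented                  # 100/80 = 1.25
# Every consented case carries the same consent-adjusted weight here,
# so the weight sums reduce to counts times that weight.
interview_factor = (consented * consent_weight) / (responded * consent_weight)
final_weight = consent_weight * interview_factor        # 1.25 * (80/60)

print(f"final analysis weight = {final_weight:.4f}")    # equals 100/60
```

As the final line shows, when a class's weights are uniform the two adjustments collapse to the overall ratio of eligible cases to interview respondents; in the six sites without a consent stage, only the second factor applies.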
Questionnaire Content
The questionnaire contained 10 substantive sections: (1) participant background characteristics; (2) mother's current health status and stress; (3) receipt of health education services; (4) health insurance status and WIC participation; (5) access to postpartum care; (6) pregnancy history and current pregnancy status; (7) participation in the Healthy Start program (including satisfaction with Healthy Start services); (8) cigarette use and alcohol consumption before, during, and after pregnancy; (9) use of prenatal care and pregnancy outcomes; and (10) infant health status and access to care. To the extent possible, questions were drawn from several existing, well-established national surveys, including the ECLS, the National Survey on Drug Use and Health (NSDUH), and the National Survey of Early Childhood Health (NSECH). Definitions for the variables used in this study are provided in the following sections.
Sociodemographic Characteristics
Age was calculated by subtracting self-reported date of birth from the date the interview was conducted. Census categories were used for questions about race and ethnicity. Individuals of any race who reported they were of Hispanic origin (including Puerto Rican, Cuban, Mexican, Central or South American, or other Hispanic or Latina backgrounds) were classified as Hispanic. Other respondents were classified as White, Black, Asian/Pacific Islander, American Indian/Alaska Native, or multiracial. Participants reported the highest grade or level of school completed in six categories, which were collapsed into three categories for analysis (less than high school, high school degree or equivalent, or more than high school). Other sociodemographic variables included marital status (married, separated, divorced, widowed, or never married), employment status (full time, part time, or not working), and the main language spoken at home (English or other).

Table 1 notes: (a) Site names are masked to protect confidentiality. (b) Sites A and G required individual consent before participants could be contacted for the survey; as a result, the working sample at these sites is not the universe of eligible participants.
Health Status and Risk Factors
The prevalence of various health conditions (including depression/anxiety/emotional problems, hypertension, asthma, diabetes, high blood cholesterol, and heart disease) was elicited by asking participants whether a health care provider had ever told them they had the condition. Selfreported health status was measured using a 5-point categorical scale (excellent, very good, good, fair, poor). Other risk factors included cigarette use and alcohol consumption during the 3 months before pregnancy, the third trimester of pregnancy, and at the time of the interview.
Health Education, Service Utilization, and Access to Care
Participants were asked to report whether they received information from a doctor or other health care provider on 13 health education topics spanning the prenatal and interconception periods. The survey also included several questions related to access to care for women and infants, including their health insurance status, presence of a medical home (that is, one person they thought of as their personal doctor or nurse), whether they had a recent checkup (postpartum or well-baby), and whether they had any unmet health care needs. Following definitions used by the Census Bureau, women and infants were considered insured if they responded that they had one or more of the following types of coverage: Medicaid, SCHIP, private insurance, military health care, or other type of coverage. Consistent with Census Bureau definitions, those reporting being covered solely by the Indian Health Service were considered uninsured for this study [14]. Participants were considered as having an unmet need for a service if they needed the service but did not receive it. To measure unmet need, participants were first asked whether they received selected services; if they answered ''no,'' they were asked if they needed the service but did not receive it. Because Healthy Start's role is to assure that participants receive needed services, we did not distinguish whether participants received the services from Healthy Start or another source.
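Two of the coding rules above are worth making explicit: the Census-style insurance classification (Indian Health Service coverage alone counts as uninsured) and the two-step unmet-need item. This is a sketch only; the coverage-type strings and category labels are illustrative, not the survey's actual variable values:

```python
# Coverage types that count toward "insured" under the Census-style rule;
# IHS-only coverage is deliberately excluded, per the definition above.
COUNTED_COVERAGE = {"Medicaid", "SCHIP", "private", "military", "other"}

def is_insured(coverage_types):
    """Insured if covered by at least one counted type; IHS alone is not."""
    return bool(set(coverage_types) & COUNTED_COVERAGE)

def need_status(received, needed_but_not_received=None):
    """Two-step item: 'did you receive X?'; if no, 'did you need X?'"""
    if received:
        return "received"
    return "unmet need" if needed_but_not_received else "no need"
```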
Three measures of interconception care were included: (1) whether the participant (or her partner) is doing anything to keep from becoming pregnant, such as using birth control or a family planning method; (2) whether she was ever given advice about how long to wait before becoming pregnant again; and (3) whether she was currently (at the time of the interview) taking a multivitamin at least once a week.
Participant Satisfaction
Participants were asked to report whether they were ''very satisfied,'' ''somewhat satisfied,'' ''somewhat dissatisfied,'' or ''very dissatisfied'' with five Healthy Start program dimensions: (1) their overall relationship with Healthy Start program staff, (2) how frequently they were able to meet with program staff, (3) the way program staff treated them, (4) the amount of time program staff spent with them, and (5) services that the program helped obtain for them and their families.
Perinatal Health Outcomes
Three prenatal outcomes were included in the analysis: (1) percentage of participants receiving prenatal care during the first trimester, (2) percentage eliminating smoking during pregnancy, and (3) percentage eliminating alcohol use during pregnancy. First-trimester prenatal care was measured by asking the participant whether she had received any prenatal care from a doctor, nurse, midwife, or some other health care worker and, if so, how many weeks or months pregnant she was when she went for her first prenatal visit. Elimination of smoking during pregnancy was counted if the participant reported smoking any cigarettes during the 3 months before pregnancy but reported not smoking at all during the last 3 months of pregnancy. We used a similar approach for measuring elimination of alcohol during pregnancy.
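The outcome definitions above can be expressed as simple predicates. A sketch, assuming a 13-week cutoff for the first trimester (the paper states "first trimester" without giving the week threshold) and illustrative argument names:

```python
def first_trimester_care(weeks_at_first_visit):
    """Care begun by the assumed 13-week end of the first trimester;
    None means no prenatal care was reported."""
    return weeks_at_first_visit is not None and weeks_at_first_visit <= 13

def eliminated(used_3mo_before, used_last_3mo):
    """'Elimination during pregnancy': used in the 3 months before
    pregnancy but not at all in the last 3 months of pregnancy.
    Defined only among pre-pregnancy users."""
    return used_3mo_before and not used_last_3mo
```

Note that `eliminated` is False for women who never used at all; the denominators for the smoking and alcohol elimination rates are pre-pregnancy users, not all participants.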
Two measures of birth outcomes were included: (1) whether the infant was LBW and (2) whether the infant had to stay longer in the hospital due to medical problems. The child's birth weight was self-reported in either pounds or kilograms; an infant was classified as LBW if he or she weighed less than 5.5 pounds or 2.5 kg at birth.
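The LBW rule above depends on the unit in which the mother reported birth weight. A minimal sketch of that classification:

```python
def is_lbw(weight, unit):
    """LBW if under 5.5 pounds or under 2.5 kg, per the self-reported unit."""
    if unit == "kg":
        return weight < 2.5
    if unit == "lb":
        return weight < 5.5
    raise ValueError("unit must be 'kg' or 'lb'")
```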
Three infant health outcomes were included: (1) whether the participant had ever breastfed or pumped milk for her child, (2) whether the child was usually put to sleep on his or her back as a newborn, and (3) whether the child had had a well-baby checkup. Participants were asked these three questions only if their child was living with them at the time of the interview. (Nine cases were excluded from this analysis because the child was not living with the participant at the time of the interview.)
Analytic Approach
Analyses were conducted using SAS version 9.1 and STATA 9 statistical software. All estimates were weighted to account for non-response. To test for differences between subgroups of Healthy Start participants, we performed significance testing using Chi-square tests for categorical variables and t tests for continuous variables.
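Since all estimates were weighted for non-response, each reported rate is a weighted proportion rather than a simple percentage. A sketch of that estimator for 0/1 outcomes (the study itself used SAS and Stata survey procedures):

```python
def weighted_rate(outcomes, weights):
    """Weighted proportion: sum(w * y) / sum(w) for 0/1 outcomes y."""
    return sum(w * y for y, w in zip(outcomes, weights)) / sum(weights)
```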
To place outcomes for the Healthy Start participants in the eight sites within a national context, we constructed a benchmark based on a sample of low-income mothers from the ECLS. The ECLS was selected for the benchmark for two reasons: (1) the sample size was sufficient to produce robust estimates for the subgroup of low-income mothers with infants ages 6-12 months at the time of interview; and (2) the survey asked detailed questions about mothers' health behaviors and practices in addition to infant health.
The ECLS includes a birth cohort of 14,000 children born in 2001. The first round of information was collected when children were approximately 9 months old. Most information was collected through CATI interviews with mothers, although certain key outcomes, namely LBW and first-trimester prenatal care, were obtained from birth certificates. It should be noted that measurement of these outcomes in the ECLS differs from that based on the Healthy Start participant survey: the two ECLS outcomes (trimester prenatal care began and birth weight) were obtained from birth certificates rather than self-reported.

To make the ECLS benchmark more closely resemble the characteristics of Healthy Start participants in the eight sites, we restricted the ECLS to include respondents who were biological mothers, who had a child ages 6-12-months-old at the time of the interview, and who were living in families with incomes below 185% of the federal poverty level (FPL). This poverty threshold was chosen because 80% of Healthy Start participants nationally lived in families below 185% of the FPL (unpublished data from the MCHB Discretionary Grant Information System). Across the eight selected sites, about 90% were in families with incomes below 185% of the FPL. We were unable to identify women in the ECLS sample who participated in Healthy Start; however, we estimate that Healthy Start served about 0.5% of births in 2002.
We adjusted the ECLS perinatal outcome rates to be similar to the age and race/ethnicity distribution of the Healthy Start participants in the eight sites using the direct standardization method [15]. First, we created 25 cells reflecting age (five categories) and race/ethnicity (five categories) and determined the proportion of Healthy Start participants in each cell. Next, we multiplied this proportion by the ECLS rate within each cell, and, finally, we summed the products to determine the ECLS adjusted rate. Also, for selected measures, we assessed the experiences of the Healthy Start participants in the eight sites and the ECLS sample of low-income mothers in relation to Healthy People 2010 objectives [11]. We cannot determine the statistical significance of the differences between the Healthy Start and ECLS surveys given differences in the survey design and sampling frame. What we describe as similarities or differences are based on qualitative assessments of the data informed by confidence intervals that were derived for measures within each survey.
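The direct standardization step above is a weighted sum of cell-specific rates: multiply each ECLS cell rate by the Healthy Start proportion in that cell, then sum. A toy sketch with two cells instead of the paper's 25, and made-up rates:

```python
def direct_standardize(ecls_rate_by_cell, hs_share_by_cell):
    """Adjusted ECLS rate = sum over cells of
    (Healthy Start share in cell) * (ECLS rate in cell)."""
    return sum(
        hs_share_by_cell[cell] * ecls_rate_by_cell[cell]
        for cell in hs_share_by_cell
    )
```

The `hs_share_by_cell` values are proportions that sum to 1 across all age-by-race/ethnicity cells; the cell labels here are purely illustrative.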
Sociodemographic Characteristics
As shown in Table 2, 61% of Healthy Start participants in the eight sites were between the ages of 20 and 29, and more than two-thirds (70%) were Black or Hispanic. More than one-third (37%) mainly spoke a language other than English at home. The majority were never married (63%), and more than one-third (39%) had less than a high school education. Finally, 60% were not working at the time of the interview.
Compared to low-income mothers in the ECLS, Healthy Start participants in the eight sites were similar in terms of age, education, and employment status. Reflecting the Healthy Start program emphasis on reducing disparities, a larger percentage of Healthy Start participants in the eight sites reported being Black (34%), Asian/Pacific Islander (6%), and American Indian/Alaska Native (12%) compared to low-income mothers in the ECLS (22%, 2%, and 1%, respectively). Healthy Start participants in the eight sites were less likely to be married (26% vs. 48%) and more likely than low-income mothers in the ECLS to report they speak a language other than English at home (37% vs. 18%).
Health Status and Risk Factors
Healthy Start participants reported a mix of health conditions that may complicate their prenatal and interconception care, as well as affect infant health or parenting skills. About one-fourth (24%) had been told by a doctor or other health care provider that they experienced depression, anxiety, or an emotional problem (data not shown). Some Healthy Start participants had mental health issues that were not diagnosed but that limited their daily activities. For example, 44% reported that during the 4 weeks before the interview, they accomplished less than they would have liked because of feeling depressed or anxious, and 37% reported being limited at work or other activities because of feeling depressed or anxious.
One in 6 (17%) Healthy Start participants in the eight sites reported they were in fair or poor health at the time of the interview. This rate appears to be slightly higher than among low-income women nationally (ages 18-40 in households with less than 200% of poverty) based on the 2004 Medical Expenditure Panel Survey (10%).
Cigarette smoking and alcohol use are not only risk factors during pregnancy but may also affect infant health and well-being. Although Healthy Start participants in the eight sites reported declines in both cigarette smoking and alcohol use during pregnancy, consumption increased after pregnancy. For example, 34% reported smoking during the 3 months before pregnancy, 18% during the last 3 months of pregnancy, and 28% at the time of the interview. For alcohol consumption, 30% reported having at least 1 drink a week during the 3 months before pregnancy, whereas only 3% reported drinking during the last 3 months of pregnancy. However, 21% reported they had at least 1 drink a week at the time of the interview.
Health Education, Service Utilization, and Access to Care

Healthy Start programs offer case management services to help participants obtain needed services within their communities. They also provide health education to promote healthy behaviors and reduce risky behaviors. Health education is provided through face-to-face encounters (individual or group sessions) and/or the distribution of materials [16]. Healthy Start is responsible for filling gaps by providing services not otherwise available in the community, as well as facilitating access to services provided by other agencies (such as making appointments with health care providers or providing transportation to services). Thus, although not all of the services reported by participants were provided by Healthy Start, the sites were responsible for ensuring that women received needed services. More than 80% of Healthy Start participants reported that they received health information concerning 13 selected topics since they became pregnant (Table 3). The three topics participants reported receiving most often were eating healthy foods (reported by 96%), how to put their child to sleep (96%), and how to breastfeed (93%). The three topics reported least often were drug use (reported by 88%), how to manage stress (86%), and how much weight to gain during pregnancy (81%).
Healthy Start participants received help in obtaining a wide range of health care and other services during and after pregnancy (Table 4). (It should be noted that some participants may not have needed help obtaining these services.) The most common services, received by at least half of the participants, were help making prenatal appointments (70%), finding a provider who spoke the same language (61%), making postpartum appointments (60%) and appointments for the child (59%), obtaining transportation (55%), and applying for health insurance (53%).
An important indicator of access to care is the level of unmet need, that is, the extent to which participants reported they needed but did not receive specific services. Unmet need was low for most of the health care services, with the exception of making dental appointments. Whereas 56% of Healthy Start participants in the eight sites reported they needed help with dental appointments, 45% reported they received help, and 11% reported they needed but did not receive help. High levels of unmet need were also reported for finding child care (11%) and obtaining housing (13%). These services are frequently in short supply within the eight communities because of the lack of dentists, licensed and affordable child care providers, and low-income housing options.
Most Healthy Start participants in the eight sites (91%) reported having a postpartum checkup after their child was born, and 83% reported using a birth control or family planning method (data not shown). Other interconception care practices were less frequent. For example, 63% of participants reported receiving advice about how long to wait before their next pregnancy, and 32% of participants reported taking a multivitamin at least once a week.
Infants had better access to care than their mothers in the eight Healthy Start sites (Fig. 2). At the time of the interview, 97% of the infants were insured compared to 87% of their mothers. In addition, infants were more likely than their mothers to have a medical home (90% vs. 81%), more likely to have no unmet health care needs (97% vs. 93%), and more likely to have received a well-baby/postpartum checkup (97% vs. 91%).
Satisfaction
The vast majority of Healthy Start participants were satisfied with the services they received from Healthy Start and with their interactions with Healthy Start staff (Table 5). More than 90% reported they were either ''very satisfied'' or ''somewhat satisfied'' on all five measures. Healthy Start participants were most likely to report they were ''very satisfied'' with the way they were treated by staff (91%), and they were least likely to report they were ''very satisfied'' with the frequency of contact with the Healthy Start program (72%). These results suggest that participants would have liked more contact with Healthy Start staff because, in part, they were well treated by staff and valued the services they received. Given the participants' complex needs and the programs' limited resources, anecdotal evidence suggests that Healthy Start case managers were often stretched thin and were not able to spend as much time with each participant as they might have liked.
Another indicator of Healthy Start participants' high level of satisfaction with the program is that 97% would recommend the program to a friend or relative. When asked why they would recommend the program, participant responses reflected many dimensions. These perspectives reflect the ''participants' voice'' in the evaluation. For example, some participants commented on their relationship with the staff: ''they are very supportive and keep everything confidential,'' ''because they treat you nice,'' and ''they are very caring and help you with anything you need.'' Other participants appreciated the general support they received from the program: ''it helps a lot of people in need and helps them have successful pregnancies,'' and ''it's just helpful resources you would not know is out there.'' Others indicated that the program helped them with specific needs: ''they help with transportation and doctors' visits,'' ''they helped me quit smoking, find a good [doctor], and helped me through my pregnancy,'' and ''they are very knowledgeable about food issues, like when to feed.'' Involvement of other family members was also considered an asset: ''they help couples out, they get them involved and do things,'' and ''because they…help you do things with your children.''
Perinatal Health Outcomes
Most Healthy Start participants in the eight sites (86%) reported that they received prenatal care in the first trimester (Table 6), similar to the rate for low-income mothers in the ECLS. Moreover, both rates were within 4 points of the Healthy People 2010 objective of 90%. Healthy Start participants in the eight sites were twice as likely to eliminate alcohol during pregnancy (89%) as to eliminate smoking (46%). Placing these results in a national context, we observe a similar pattern among low-income mothers in the ECLS (93% for alcohol and 53% for smoking). Of particular note is the gap toward achieving the Healthy People 2010 objective of 99% for the elimination of smoking during pregnancy, both for Healthy Start participants and for low-income mothers more generally.
The LBW rate was 7.5% for Healthy Start participants in the eight sites as well as for low-income mothers in the ECLS, 50% higher than the Healthy People 2010 objective of 5%. A related infant health outcome, the percentage of infants who had a longer hospital stay because of medical problems at birth, was also similar between the two groups (12% for Healthy Start participants and 13% for low-income mothers).
Additional analysis of LBW rates was performed by race/ethnicity (White, Black, and Hispanic) (Fig. 3). Among Healthy Start participants, Whites and Hispanics had LBW rates that met the Healthy People 2010 objective of 5%. The LBW rate for Blacks was nearly three times higher (14%). Racial/ethnic disparities were also observed among low-income mothers in the ECLS (11% of Black infants and 6% of White and Hispanic infants were LBW). We cannot determine from these data, however, what the rate among Healthy Start participants would have been in the absence of the Healthy Start program, given the program's outreach to high-risk women with multiple medical and social risk factors.
Healthy Start participants had strong outcomes on the three selected postpartum measures. Table 6 shows that 72% of Healthy Start participants reported ever breastfeeding their infants, and 70% put their infants to sleep on their backs, compared to 60% and 48%, respectively, of low-income mothers in the ECLS. Healthy Start participants in these eight sites achieved or nearly achieved the Healthy People 2010 objectives of 75% for breastfeeding and 70% for putting infants to sleep on their backs. Further analysis revealed large differences in these practices by race/ethnicity. Among Healthy Start participants, 90% of Hispanics reported ever breastfeeding their babies, compared to 61% of Blacks and 57% of Whites (Fig. 4). In contrast, 75% of Whites, 69% of Blacks, and 61% of Hispanics reported putting their infants to sleep on their backs.

The results for Hispanic Healthy Start participants varied according to whether English was their main language spoken at home (data not shown). Hispanic Healthy Start participants speaking English as their main language were 14 percentage points less likely than those with another main language to have ever breastfed their infants (79% and 93%, respectively). The opposite pattern was found for putting babies to sleep on their backs, with a 25-percentage-point difference between Hispanics speaking English as their main language and those with another main language (80% and 55%, respectively).
Table 6 notes: n.a. = not applicable. (a) The ECLS benchmark includes respondents who were the child's biological mother, had incomes below 185% of the federal poverty level, and had infants ages 6-12-months-old at the time of the interview. (b) Given the differences in sampling designs and sampling frames of the Healthy Start and ECLS surveys, these confidence intervals are meant to assist the reader in developing a qualitative assessment of differences rather than providing a true test of statistically significant differences between the populations. (c) ECLS rates were adjusted using the direct method of standardization to reflect the age and race/ethnicity distribution of the Healthy Start participants in the eight sites. Both datasets exclude those reporting they were multiracial due to very small sample sizes.
Discussion
This study showed that Healthy Start participants in the eight selected sites received health information on a wide range of topics, got help accessing many needed services, and were very satisfied with the program. The level of unmet need was relatively low, except for dental appointments, housing, and child care. Healthy Start participants in the eight sites had perinatal outcomes that were similar to or better than two external benchmarks on several measures. In particular, rates of ever breastfeeding their infants and putting infants to sleep on their backs were at or near the Healthy People 2010 objectives, an important achievement given the high-risk profile of these participants. Although the causal influence of the Healthy Start program on these outcomes cannot be determined, the high rates of health education among Healthy Start participants (more than 90% for both breastfeeding and putting babies to sleep on their backs) may have contributed to these positive outcomes.

Several caveats affect the interpretation and generalizability of our results. A limitation of this study was that the evaluation design did not allow us to identify causal relationships between the services provided by the Healthy Start program and the perinatal outcomes among participants. The two national benchmarks were meant to provide a national context for understanding the perinatal outcomes of the Healthy Start participants in eight sites and were not meant to describe the effectiveness of the Healthy Start program. This approach does not control for the multitude of risk factors (medical, economic, cultural, and social) that may be associated with perinatal health outcomes. Moreover, this approach does not allow us to infer what the outcomes would have been in the absence of Healthy Start. Furthermore, these results cannot be generalized to all 96 Healthy Start sites because the eight survey sites were not randomly selected.
To represent the diversity of the Healthy Start program, the evaluation included a site located near the Mexico border and a site serving indigenous populations. In addition, selected sites were required to have implemented all nine Healthy Start program components, as well as data systems to track referrals and maintain electronic records. Thus, the selected sites were intended to depict the Healthy Start program when it is fully implemented. Finally, even though this survey achieved a high response rate across seven of the eight sites, the effects of non-response on the results are unknown. Moreover, small caseloads in each of the sites precluded separate analysis of Healthy Start participants in selected subgroups, notably Asian/Pacific Islanders and American Indians/Alaska Natives.
A decade ago, an evaluation of the 15 original Healthy Start sites compared the outcomes of Healthy Start participants to those of other women in the same geographic area. The study found that Healthy Start participants in the 15 sites were significantly more likely than other women to receive enhanced prenatal care services and they were more likely to be using birth control at the time of the interview [17]. Unlike the previous evaluation, the current study did not include a comparison group within the same geographic area, and instead, relied on national benchmarks for comparison purposes. Nevertheless, a comparison of service use and health behaviors reported by participants in the original 15 sites versus the current 8 sites suggests that Healthy Start participants in the current study had higher rates of interconception services, such as postpartum care and well baby visits, and higher rates of healthy behaviors, such as breastfeeding and elimination of alcohol use during pregnancy. Moreover, self-reported birth control use was higher among those in the current 8-site study than the original 15-site study (83% versus 52%). Levels of participant satisfaction were consistently high during both phases [18]. These results should be interpreted with caution, however, because they do not control for differences in participant or program characteristics, nor do they account for secular trends over the past decade.
As the Healthy Start program enters its fourth phase, this study has implications for program improvements in the future. First, interconception care is an emerging focus of the Healthy Start program and the evidence from this study is mixed. Although most women reported they had a postpartum visit and had chosen a birth control or family planning option, fewer women recalled receiving advice on how long to wait before becoming pregnant again, and fewer still were taking a multivitamin at least weekly. Recent recommendations for improving preconception care [9] and forthcoming recommendations for improving interconception care may help shape future program initiatives in this area.
A second implication relates to the need for increased emphasis on smoking cessation during pregnancy, although this need is not unique to Healthy Start. Among Healthy Start participants in the eight sites (as well as low-income mothers in the ECLS), a large difference was found between the percentage of women eliminating smoking during pregnancy and the Healthy People 2010 objective. Given the association between smoking during pregnancy and adverse perinatal outcomes [19], further efforts to eliminate smoking during pregnancy may be warranted.
This study also has implications for supporting Healthy Start programs in meeting the multifaceted needs of participants. Even though unmet need for health-related services was low (with the exception of dental appointments), unmet need for housing, child care, public assistance, food assistance, and transportation services was reported by 6-13% of the Healthy Start participants in the eight sites. (These rates reflect unmet needs during late 2007 and early 2008.) The level of unmet need for the diverse array of services underscores the wide range of community-based supports needed by high-risk women, as well as the importance of collaboration between Healthy Start and its community partners, through such mechanisms as a consortium and local health system action plan. Although the Healthy Start program is designed to address multiple social determinants of health, such as safe housing, these wide-ranging needs cannot always be met by programs with limited budgets and scope. With the recent trend in national housing policy toward the use of housing vouchers (and away from the production of new housing) [20], Healthy Start program staff noted that severe housing shortages and waiting lists pose a barrier to obtaining housing for participants. This study suggests that the provision of technical assistance and best practices in facilitating access to non-health-related services would support Healthy Start programs' wide-ranging efforts to reduce disparities in maternal and child health outcomes.
Finally, this study has implications for expanding postpartum health care coverage for women on par with children's health coverage. This study found that infants had better access to health care than their mothers in the eight selected communities, with higher rates of insurance coverage, medical homes, and checkups and lower rates of unmet health care needs. These findings are noteworthy not only because access to care is important to women's postpartum health status, but also because it may affect their ability to care for their children and may even contribute to the health of future children. One clear implication of this study is that insurance coverage gaps exist for women during the postpartum period. Expanded Medicaid coverage for pregnant women typically ends 60 days postpartum, leading to significantly higher uninsured rates for mothers compared to their infants. Continuing Medicaid coverage through the interconception period may help reduce differences in health care access and, ultimately, improve perinatal health outcomes.
In summary, this study has demonstrated that outcomes of Healthy Start participants in eight sites compare favorably to national benchmarks. Noteworthy achievements include the high rates of breastfeeding and adherence to the ''back-to-sleep'' recommendations among participants. Nevertheless, these results suggest that challenges remain in reducing disparities in perinatal health outcomes. Further attention to risk factors that may be associated with LBW, such as smoking, weight gain during pregnancy, and stress, may help close the gaps. However, the life course theory of health development suggests that improved maternal and child health outcomes may require longer-term investments [10]. Healthy Start's emerging focus on interconception care has the potential to address the longer-term needs of participants.
Investigating microwave deicing efficiency in concrete pavement
Microwave deicing is an intelligent and environmentally friendly method that overcomes the many shortfalls of traditional deicing methods, including mechanical, chemical and thermal techniques. In this paper, a robust method was investigated and the microwave deicing efficiency was defined as the temperature-rise rate of a concrete surface heated to 0 °C. The heating of a concrete surface covered with an ice layer using microwaves from a rectangular waveguide was explored numerically and experimentally. A microwave deicing simulation model was constructed on the basis of finite element theory. Laboratory experiments were then carried out using a self-designed microwave deicing apparatus. The effects of the microwave frequency and pavement material on the microwave deicing efficiency were examined. The results indicate that the microwave efficiency is closely linked with the microwave frequency and pavement material. Compared with the use of a frequency of 2.45 GHz, using 5.8 GHz radiation decreased the penetration depth to 45%, while the microwave deicing efficiency increased by more than fivefold. When black iron oxide was added into the concrete mortar at 10 wt% of the total cement content, the microwave efficiency increased by more than 1.8-fold. Furthermore, the validity of the simulation model based on finite element theory was verified by the consistent results obtained between the simulations and experiments. Moreover, these results could provide theoretical guidance for the future application of microwave deicing.
Introduction
Winter ice can greatly reduce the friction coefficient of a pavement. On an icy pavement, automobile brake difficulties can cause accidents, while an increase in aircraft taxiing distance can easily cause overshooting. Pavement freezing has become a serious threat to transportation safety. 1 Therefore, it is very important to study effective methods to remove ice from pavements, and in particular, from important traffic facilities such as highways, airports and roads.
To deice pavements, the traditional methods are mechanical, chemical, or thermal. The mechanical method, which is easy to operate, requires substantial manpower and material resources, and its deicing effect is also not satisfactory, because the vibration and improper use of the machinery can lead to serious damage of the pavement surface. 2 Moreover, mechanical devices cannot be used under some severe environmental conditions. As for the chemical method, it may be effective in its ability to deice, but the salts required to remove the ice, mainly NaCl-based melting agents, may affect the chemical composition of the concrete structure and pollute the environment. 3 The thermal method involves either an internal heating method or an external heating method. The internal heating method involves a heating element buried within the pavement, and possesses several disadvantages such as large investment, high energy consumption and unsatisfactory removal of thick ice. It can only be used for short segments of roadway, such as bridges. 4 The external heating method involves the use of high-temperature air produced by an old aircraft engine to melt snow and ice. The exhaust from the engine, at high temperatures of up to 400-500 °C, is usually used as the external heat source. Although this method is efficient, the engine fuel consumption is very high and it causes severe heat damage to airport pavement structures. 5 Rubber particles are also used for deicing; these are inlaid into pavements and can produce self-stress under automobile tire loads, making pavement deicing easier. 6 However, rubber particles are not suitable for constructing concrete pavements that require high strength, especially for those in airport runways.
Microwaves, which can rapidly heat dielectric and magnetic materials, are commonly used as a heat source. Ice scarcely absorbs microwaves, while concrete is a type of dielectric material. Accordingly, microwaves can heat concrete surfaces directly through the ice layer and weaken the bonding between the ice and concrete surface. Therefore, microwave heating provides an alternative approach for melting ice on the surface of concrete in an efficient and environmentally friendly way, and has good prospects for application to pavement deicing. The United States put forward research into microwave deicing in the 1980s, but the method has not been widely adopted, due to low efficiency. 7 The natural magnetite in taconite is an outstanding microwave absorber, and in order to enhance the microwave-absorbing capacity of a pavement, Hopstock used taconite as the aggregate to construct a "microwave road". The microwave radiation was greatly absorbed by the taconite and converted into heat, improving microwave deicing efficiency. 8 Guan et al. 9 used a domestic microwave to heat a frozen road specimen. The ice slowly broke away from the specimen surface under the microwave radiation. However, their study lacked in-depth analysis of the factors contributing to the deicing efficiency. Jiao et al. 10 applied microwave deicing to an asphalt pavement and analyzed the key factors driving the deicing efficiency; however, water, which shows an excellent ability to absorb microwave radiation, was not taken into account in their simulation model. The water from melting ice has a great influence on the distribution of the temperature field. In addition, a number of patents concerning the design of microwave deicing vehicles have been filed. For example, Xu et al. 11 designed a model for a microwave deicing vehicle and applied for a patent on it, while Witt Highway Maintaining Equipment Company Ltd. in Foshan 12,13 devoted much effort to developing a microwave deicing vehicle for pavements and obtained two patents. However, microwave deicing has not been adopted in practice, due to its low efficiency. Therefore, the key to the practical application of microwave deicing in pavement deicing lies in the enhancement of the microwave deicing efficiency.
In this paper, the microwave deicing mechanism and the key factors that contribute to the deicing efficiency were analyzed. The microwave deicing efficiency is mainly affected by the microwave frequency and the pavement material, which were analyzed in depth using simulation methods and experiments. A microwave deicing simulation model was constructed based on finite element theory. In order to verify the validity of the simulation model, a self-designed device was used to conduct the microwave deicing in laboratory experiments.
Mechanism analysis
Microwave deicing refers to the use of microwave radiation, which possesses the ability to rapidly heat dielectric and magnetic materials, to heat concrete. As a truck-mounted microwave generator is driven over an ice-covered road, the microwave radiation should pass through the ice layer, heat the road surface directly, weaken the bonding between the ice and road surface and make it easier to scrape the ice away. The mechanism of microwave deicing can be described as follows. Dielectric and magnetic materials are composed of molecules which can be regarded as electric dipoles or magnetic dipoles. When these materials are not in an electric field, the positive and negative dipoles in the materials are disordered, and can cancel each other out. Therefore, the material as a whole usually does not show electrical properties. When the materials are placed in an electric field, each dipole will deflect along the direction of the field. Based on quantum field theory, this sudden ordering of electric dipoles (generally called polarization) generates a secondary electric field that combines with and strengthens the first field. Moreover, the polarization direction changes as the electric field direction changes, resulting in the rubbing together of molecules. A large amount of heat is produced by the friction generated between different molecules. The faster the electric field direction changes, the more heat is produced. A diagram of dielectric polarization is shown in Fig. 1.
Based on microwave heating theory, 14 the microwave power consumed by a material on a per unit volume basis is given simply by:

P = 2πf ε₀ ε′r E² tan δ (1)

where P is the power consumed on a per unit volume basis; f is the microwave frequency; E is the electric field intensity; ε₀ is the permittivity of free space; ε′r is the relative dielectric constant; and tan δ is the loss angle constant.
According to eqn (1), the microwave power consumed by a material is related to many factors. It has been generally accepted that different frequencies of microwave radiation have a different effective depth and efficiency. Preliminary studies show that the higher the frequency, the lower the effective depth and the greater the efficiency. 15,16 Therefore, microwave frequency is a key factor influencing a material's microwave-absorbing performance. Moreover, electric field intensity is also related to microwave power. With an increase in the electric field intensity, the polarization increases. Meanwhile, the heat produced by the polarization is greater. In addition, the relative dielectric constant and the loss angle constant are the inherent attributes of materials responsible for the microwave-absorbing ability. The larger these parameters are, the stronger the ability is. The characteristic parameters of related substances are listed in Table 1. 10 The loss angle of ice is close to zero, accounting for the fact that the ice layer, which hardly absorbs microwave radiation, is almost transparent to microwaves. 1 Therefore, the microwaves can pass through the ice layer and heat the concrete surface directly.
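Eqn (1) can be evaluated numerically. The sketch below uses assumed dielectric values for concrete (the paper's actual figures are in Table 1, which is not reproduced here) and shows that, with the field and material held fixed, the dissipated power scales linearly with frequency; the much larger fivefold efficiency gain reported later also reflects the change in field distribution and penetration depth with frequency.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def volumetric_power(f_hz, e_field_v_per_m, eps_r, tan_delta):
    """Eqn (1): microwave power dissipated per unit volume, W/m^3."""
    return 2 * math.pi * f_hz * EPS0 * eps_r * tan_delta * e_field_v_per_m ** 2

# Illustrative (assumed) dielectric values for concrete:
eps_r, tan_delta = 6.0, 0.1
E = 8.21e3  # surface field at 2.45 GHz reported in Section 4.1.1, V/m

p245 = volumetric_power(2.45e9, E, eps_r, tan_delta)
p580 = volumetric_power(5.8e9, E, eps_r, tan_delta)

# With all other factors fixed, dissipation scales linearly with f:
print(p580 / p245)  # ≈ 5.8 / 2.45 ≈ 2.37
```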
Research methods
The temperature-rise characteristic of a concrete material under microwave irradiation is very complex, and many factors may influence this parameter. According to the mechanism analysis, microwave frequency and material type are the two prominent factors. In this paper, based on finite element theory, a microwave deicing simulation model was built. In addition, it is practicable to take the temperature-rise rate of a concrete surface heated up to 0 °C as the index for microwave deicing efficiency. The temperature field distribution in the interior and on the surface of the concrete was analyzed. The influence of the microwave frequency and the material type on the microwave deicing efficiency was analyzed in depth.
Simulation model
The microwave generator of the microwave deicing vehicle is made up of many magnetrons and waveguides. For simplicity, the coupling effects between different microwaves produced by different magnetrons were ignored in this study. The microwave deicing efficiency was studied using a single magnetron and waveguide as an example. Relevant literature reports indicated that the penetration depth for 2.45 GHz microwaves in concrete is about 112 mm and the penetration depth for 5.8 GHz microwaves in concrete is shallower. 14 Therefore, the thickness of the specimen was set to 150 mm and the thickness of the ice layer was set to 15 mm in the simulation model. Based on finite element theory, the model was built. The mesh graph shown in Fig. 2 indicates that the simulation model is composed of several types of domain, including concrete, ice layer, waveguide and air. The concrete dimensions are 150 mm × 150 mm × 150 mm, the dimensions of the waveguide for 2.45 GHz microwave radiation are 109.2 mm × 54.6 mm and the dimensions of the waveguide for 5.8 GHz microwave radiation are 40.4 mm × 20.2 mm. A perfectly matched layer (PML), which can absorb microwaves from different angles without reflection, was adopted to model the radiation properties in a natural environment. Phase-change heat transfer was used to simulate the ice-to-water process. Water displays excellent microwave absorption, even though the ice layer hardly absorbs microwaves; thus, the water melting from the ice layer must be seriously considered in the simulation. The origin of the coordinate system is set at the center of the concrete surface, the positive Z-axis points to the waveguide, and Path 1 is defined as the line from point (0, 0, −150) to point (0, 0, 15); namely, from the center of the concrete surface to that of the ice layer surface.
Laboratory experiments
Laboratory experiments were conducted on the self-designed microwave deicing apparatus shown in Fig. 3, which is composed of a magnetron, waveguide, adjusting lever for waveguide height, cooling system and circuit system. The height of the waveguide port can be adjusted using the adjusting lever. Two cooling pipes with circulating water were added onto the magnetron to control the internal temperature. The apparatus for generating a frequency of 2.45 GHz is similar to that for generating 5.8 GHz, except for the waveguide dimensions, which are the same as those adopted in the simulation model.
Fig. 2: Microwave deicing simulation model. Fig. 3: Microwave deicing experiment apparatus.
In laboratory experiments, the microwave radiation is produced in the magnetron. Then, the microwaves propagate in a direction parallel to the waveguide. When the microwaves reach the waveguide port, they diffuse towards the ice layer and concrete specimen. Then they penetrate the ice layer and heat the concrete surface directly. The surface temperature increases under the microwave irradiation. The concrete specimens of dimensions 150 mm × 150 mm × 150 mm were covered with a 15 mm-thick ice layer. The thermocouple, recording the temperature change, was positioned at the interface between the concrete surface and the ice layer.
4.1 Microwave frequency
Microwave frequency is a key factor influencing microwave heating, which can affect the electromagnetic parameters of a material and the penetration depth of microwave radiation. A frequency of 2.45 GHz is generally used in industrial microwave heating. However, in contrast to industrial microwave heating, microwave deicing focuses only on the surface temperature change on a pavement. Compared with 2.45 GHz radiation, the heating efficiency of 5.8 GHz radiation is higher and it exhibits a smaller penetration depth. Therefore, it is necessary to study the microwave deicing efficiency of 5.8 GHz radiation. This paper reports the deicing efficiency obtained using both simulations and experiments.
4.1.1 Simulation research
In the simulation model, the initial temperature of the air, concrete and ice layer was set to −10 °C, the height of the waveguide end was set to 20 mm and the excitation power of the waveguide port was set to 1500 W. Free triangular and free tetrahedral meshing were adopted for mesh generation. The results demonstrate the relevance of employing a simulation model to microwave deicing. Fig. 4 shows the highest temperatures achieved at a concrete surface under these two frequencies.
The results indicate that when the highest temperature on the concrete surface reaches 0 °C from the same initial temperature (−10 °C), the microwave duration time for 2.45 GHz is 24.5 s and the temperature-rise rate is 0.41 °C s⁻¹, whereas for 5.8 GHz, the microwave duration time is 4.5 s and the temperature-rise rate is 2.22 °C s⁻¹, which is 5.4 times that for 2.45 GHz. A possible reason for this observation may be that the polarization direction of the material under the effect of 5.8 GHz radiation changes faster and the friction between the polar molecules is stronger. Consequently, more heat is produced in the material and the surface temperature is higher. In addition, it was observed that the temperature-rise rate increases after the surface temperature reaches 0 °C. This is due to the excellent microwave-absorbing ability of water. The polar molecules in water are more active than those in ice. Therefore, more heat is produced in water than in ice under the same microwave radiation. When the ice layer melts into water, the water absorbs more microwave energy and more heat is produced than in ice. Therefore, the temperature-rise rate increases after the surface temperature reaches 0 °C. Another interesting observation is that the temperature-rise rate increases and decreases repeatedly after the ice layer melts into water. The temperature difference between the ice and water is large and leads to heat transfer to the ice layer when the temperature of the water in the ice layer rises to a certain value. The surface temperature-rise rate then becomes slow, resulting in the melting of the ice into water and the absorption of more microwave energy. Then the temperature-rise rate continues to increase and even the surface temperature decreases, which can be clearly seen under the conditions of 2.45 GHz microwave irradiation.
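The reported temperature-rise rates follow directly from the 10 °C rise (−10 °C to 0 °C) divided by the microwave duration time; a quick check:

```python
delta_t = 10.0  # temperature rise from -10 °C to 0 °C, in kelvin

rate_245 = delta_t / 24.5  # 2.45 GHz: 24.5 s to reach 0 °C
rate_580 = delta_t / 4.5   # 5.8 GHz: 4.5 s to reach 0 °C

print(round(rate_245, 2))             # 0.41 °C/s
print(round(rate_580, 2))             # 2.22 °C/s
print(round(rate_580 / rate_245, 1))  # 5.4-fold speed-up
```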
The electric field mode is an important parameter that affects the heat generation rate in microwave heating. Taking Path 1 as an example, this study investigated the distribution of the electric field in concrete. Based on electromagnetic theory, microwave radiation can permeate into a material to the depth where the electric field mode decreases to e⁻¹ times that at the material surface. The electric field distributions in concrete under these two frequencies are compared in Fig. 5. It is observed that, for 2.45 GHz, the electric field mode is 8.21 kV m⁻¹ on the concrete surface, and the depth is 118 mm where the field mode decreases to 3.02 kV m⁻¹ (e⁻¹ times). However, for 5.8 GHz, the electric field mode is 12.38 kV m⁻¹ on the concrete surface. As the field mode decreases to 4.56 kV m⁻¹ (e⁻¹ times), the depth decreases to 53 mm. Therefore, the penetration of 5.8 GHz radiation is just 44.9% that of 2.45 GHz radiation in microwave deicing, which means that the heat produced by 5.8 GHz microwaves is more concentrated near the concrete surface, indicating that the 5.8 GHz frequency is more conducive to the application of deicing. The temperature field distribution inside the concrete is demonstrated in Fig. 6, for when the surface temperature reaches 0 °C. The direction of the temperature field distribution is parallel to Path 1. This indicates that the temperature change curves along the direction of Path 1 under these two frequencies are similar. The temperature on the surface of the ice layer is the lowest. Then, the temperature increases with increasing depth until it reaches its peak at about 10 mm below the concrete surface. After this peak, the temperature decreases with increasing depth. As can be seen from Fig. 6, compared with 2.45 GHz, the heat generated by 5.8 GHz radiation is much closer to the concrete surface. The reason for this observation may be that the penetration depth of the 5.8 GHz radiation is smaller than that of the 2.45 GHz radiation.
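The e⁻¹ thresholds and the penetration-depth ratio quoted above can be reproduced from the surface field values:

```python
import math

e_surface_245 = 8.21   # surface field at 2.45 GHz, kV/m
e_surface_580 = 12.38  # surface field at 5.8 GHz, kV/m

# Penetration depth = depth at which the field falls to 1/e of its surface value.
print(round(e_surface_245 / math.e, 2))  # 3.02 kV/m
print(round(e_surface_580 / math.e, 2))  # ≈ 4.55 kV/m (quoted as 4.56 in the text)

# Depths read off Fig. 5 at those thresholds: 118 mm vs 53 mm.
print(round(53 / 118 * 100, 1))  # 44.9% — 5.8 GHz penetrates less than half as deep
```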
In addition, it can also be seen that the magnitude of the temperature rise under 5.8 GHz radiation is lower than that under 2.45 GHz radiation. The main reason for this phenomenon may be that the duration time (4.5 s) of the 5.8 GHz microwave exposure is much shorter than that (24.5 s) for the 2.45 GHz microwaves. Thus, the heat produced by the 5.8 GHz radiation is less. Above all, it can be concluded that microwaves with a frequency of 5.8 GHz are better for application to pavement deicing.
4.1.2 Experimental research
The concrete specimens were prepared in the mixture ratio of 330 kg of cement, 136 kg of water, 4 kg of superplasticizer, 563 kg of sand and 1438 kg of stone per 1 m³. The specimens were then cured in an environmentally-controlled room at 20 °C and 95% relative humidity for 28 days. Then, the ice layer was prepared in a refrigerator at a temperature of −20 °C. The specimens covered with the ice layer are shown in Fig. 7. A self-designed microwave deicing apparatus was adopted to conduct the microwave deicing experiments. In the experiments, the magnetron power was set to 1500 W and the waveguide port height was set to 20 mm. The thermocouples, pasted at the interface between the concrete surface and ice layer, were used to record the temperature change on the concrete surface. The surface temperature changes under frequencies of 2.45 GHz and 5.8 GHz were studied. Fig. 8 displays the ice layer after microwave irradiation. It is observed that there is a large hole in the ice layer. The hole appears to be cone-shaped, which indicates that the ice melted first from the part close to the concrete surface. The adhesion between the ice layer and the concrete surface would then have been reduced and consequently the ice layer could easily have been removed by mechanical means. This phenomenon also indicates that the microwave-absorbing property of the ice layer is weak and that the microwaves could penetrate through the ice layer to heat the concrete directly.
The thermocouples recorded the temperature change at the interface between the concrete surface and the ice layer; the results are shown in Table 2. It can be seen that the initial temperature (namely the environmental temperature) is independent of the microwave efficiency, but it affects the deicing time. The lower the initial temperature, the longer the deicing time. The average temperature-rise rate is 0.34 °C s⁻¹ under 2.45 GHz radiation, whereas for 5.8 GHz radiation, the average temperature-rise rate can reach 1.72 °C s⁻¹. Therefore, the microwave deicing efficiency of the 5.8 GHz microwaves is 4.99 times that of the 2.45 GHz microwaves.
The experimental results are very close to the simulation results, indicating the reliability of the simulation model. However, it can be seen that the deicing efficiency obtained in the experiments is slightly lower than that from the simulation research. That is because the electromagnetic parameters are regarded as constant in the simulation model. In reality, these parameters are dependent on the temperature of the material, especially when it is exposed to microwave-frequency radiation. 19
4.2 Pavement materials
The relative dielectric constant and loss tangent are important indicators for the microwave-absorbing properties of materials. Black iron oxide, which is smelted from magnetic ore, is a type of oxide mineral with an equiaxed crystalline structure. This oxide exhibits strong magnetism and has a chemical formula of Fe₃O₄. Studies have shown that this type of mineral exhibits perfect performance with respect to microwave absorption and temperature-rise behavior. [18][19][20] In the present work, black iron oxide was doped into cement mortar to improve microwave-absorbing performance. The dosage of black iron oxide was 10 wt% of the total cement content. The new electromagnetic parameters, important input parameters for the simulation model, were calculated according to the volume ratio. The new dielectric constant was 28 and the new loss tangent was 0.075. Simulations and experiments were conducted on plain concrete (PC) and black iron oxide concrete (BC).
4.2.1 Simulation research
In the simulation model, the initial temperature of the concrete was set to −10 °C, the frequency was set to 2.45 GHz, the power was set to 1500 W and the waveguide port height was set to 20 mm. The deicing process of PC and BC was simulated based on finite element theory. The highest surface temperatures with time for PC and BC are demonstrated in Fig. 9. It can be seen that the time taken for the BC surface temperature to reach 0 °C is 15 s. The temperature-rise rate is 0.67 °C s⁻¹, which is 1.87 times that of PC. The black iron oxide, mixed in cement mortar, improves the magnetism of the mixture and transforms the magnetic component into heat. Hence the temperature-rise rate of BC is higher than that of PC. Another interesting observation is that the temperature-rise rate increases and decreases repeatedly after the ice layer melts into water, which is similar to the trend observed in the study of the microwave frequency.
The temperature field distribution inside the concrete is shown in Fig. 10, for when the highest surface temperature reaches 0 °C. The direction of the temperature field distribution is parallel to Path 1. It can be seen that the general rule of temperature distribution inside these two kinds of concrete is similar, but the highest internal temperature in BC is 7.9 °C, whereas the highest internal temperature in PC is 12.4 °C. The internal temperature in PC is higher than that in BC at the same depth. The reason for this phenomenon is as follows. Black iron oxide increases the electromagnetic parameters of the concrete, so the microwave penetration depth decreases. 17 Consequently, compared with PC, the heat produced by the microwave radiation is much closer to the concrete surface inside BC and the surface temperature-rise rate of BC is faster. Therefore, less time is needed to heat the surface temperature to 0 °C and less heat is produced inside BC. Accordingly, the internal temperature in PC is higher than that in BC at the same depth.
Fig. 9: The maximum surface temperature of PC and BC. Fig. 10: Temperature field distribution in the vertical direction.
4.2.2 Experimental research
In order to study the microwave deicing efficiency of concrete with black iron oxide added, the microwave deicing apparatus was used to conduct microwave deicing experiments at the frequency of 2.45 GHz on BC and PC. The BC specimens were prepared using the same mixture ratio as PC, with 10 wt% black iron oxide added to the cement content. The preparation of the specimens and the ice layer for BC was the same as for PC. Table 3 presents the microwave deicing efficiency of BC. It can be seen that the average temperature-rise rate for BC is 0.58 °C s⁻¹, which is 1.73 times that for PC. In addition, the experimental results are in general agreement with the simulation results, thus verifying the validity of the simulation model again.
Conclusions
In this paper, the effects of microwave frequency and pavement material on microwave deicing efficiency were explored numerically and experimentally. The microwave deicing efficiency was defined as the temperature-rise rate of the concrete surface heated up to 0 °C. Based on finite element theory, a microwave deicing simulation model was built and simulations were conducted. Laboratory experiments were carried out using a self-designed microwave deicing apparatus.
The experimental results are in general agreement with the simulation results, verifying the validity of the simulation model. The results show that the microwave deicing efficiency of 5.8 GHz radiation is about 5 times that of 2.45 GHz radiation. When black iron oxide was added into the concrete at 10 wt% of the total cement content, the microwave deicing efficiency improved by about 1.8-fold. In addition, a cone-shaped hole formed in the ice layer after microwave irradiation, indicating that the ice layer absorbs little microwave radiation and that the method of microwave deicing is feasible.
Microwave deicing is an intelligent, environmentally friendly method, which overcomes the shortfalls of traditional deicing methods such as mechanical, chemical and thermal methods. In the present study, a microwave deicing simulation model was constructed on the basis of constant parameters in Table 1. However, these parameters change depending upon concrete composition. The effects of the change in these parameters on the microwave deicing efficiency need to be further investigated.
Additionally, in future research more attention should be paid to studying 5.8 GHz microwave radiation and pavement materials with superior microwave-absorbing performance.
Accuracy of Combined Visual Inspection with Acetic Acid and Cervical Cytology Testing as a Primary Screening Tool for Cervical Cancer: A Systematic Review and Meta-Analysis
1 Faculty of Postgraduate Studies, University of Health Sciences; 3 Lao-Oxford-Mahosot Hospital-Wellcome Trust Research Unit (LOMWRU), Microbiology Laboratory, Mahosot Hospital, Vientiane; 5 Gynecologic Oncology Unit, Setthathirath Hospital; 8 Institut de la Francophonie pour la Médecine tropicale, Vientiane, Lao PDR. 2 Department of Social and Preventive Medicine, Faculty of Medicine, Laval University, Quebec, Canada. 6 Mahidol-Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand. 4 Centre for Tropical Medicine and Global Health, Churchill Hospital; 7 Nuffield Department of Medicine, University of Oxford, UK. *For correspondence: phetsavanh456@gmail.com
Introduction
Cervical cancer is the fourth most commonly diagnosed cancer and the fourth leading cause of cancer-related death in women worldwide, and is amenable to prevention, but the rates vary widely among different subregions. The highest rate is found in South Asia (Bruni et al., 2015). Cervical cancer could be prevented through HPV vaccination and screening as primary and secondary prevention strategies, respectively (Jacob, 2009; Echelman and Feldman, 2012). Several approaches are available for the screening of precancerous cervical lesions. In developing countries, because of resource issues, the main options are cervical cytology and visual inspection with acetic acid (VIA) (Sherris et al., 2009).
Yet, the accuracy of both cervical cytology and VIA tests for detecting cervical precancerous lesions varies from one setting to another. According to a systematic review of 12 studies, cervical cytology sensitivity ranged from 30% to 87% and its specificity from 86% to 100% (Nanda et al., 2000). Meanwhile, sensitivity and specificity estimates for VIA were 72% to 80% and 79% to 92%, respectively (Sauvaget et al., 2011). In India, for instance, screening with VIA could prevent 22,000 deaths due to cervical cancer each year (Kay, 2013). Moreover, besides its ease of use and its low cost (Sherris et al., 2009), VIA has interesting characteristics, particularly regarding its sensitivity and its negative predictive value compared to conventional cytology. The sensitivity of VIA is commonly higher than the sensitivity of cervical cytology, but its specificity for the detection of precancerous cervical lesions is lower, leading to more false positive results (Consul et al., 2012).
There is evidence that, in comparison with screening by cytology alone, double testing with HPV DNA and cervical cytology results in a 35% (95% CI = 15% to 60%) increase in sensitivity to detect high-grade cervical intraepithelial neoplasia (CIN) or cancer (Naucler et al., 2009). Co-testing with these screening techniques is now currently practiced in the USA (Saslow et al., 2012). However, HPV DNA testing is limited in low-resource settings. Another potential combined method for the detection of cervical precancerous lesions would be cervical cytology and VIA, as the latter is readily available in low-income countries. A few studies have been published on the topic; however, their results diverged. A systematic review and a meta-analysis are still required to evaluate the accuracy and the potential usefulness of this combined test.
Search strategy
We conducted a systematic review and meta-analysis in compliance with the guidelines of the Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy (Deeks et al., 2010) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Liberati et al., 2009). Articles were searched up to June 2014 in the PubMed, Embase, Web of Science, CINAHL and Cochrane databases using the following keywords: cytology, VIA and sensitivity, and their synonyms based on CisMef, without language or publication type restrictions. After removing duplicated records, all citations were included in the citation screening process using EndNote software, version X6 (Thomson Reuters, 2012). Two reviewers independently screened titles, abstracts and full articles to establish eligibility and extract the data from included studies. A third reviewer was consulted in case of disagreement.
Eligibility criteria
To be eligible, articles had to report data on the sensitivity and specificity of combined VIA and cytology testing. Both VIA and cervical cytology had to be performed in the same women, whether asymptomatic or symptomatic. Colposcopy and/or biopsy on at least a positive VIA or cervical cytology result had to be used as the gold standard. Review articles were excluded.
Outcome of interest
The primary outcomes were the sensitivity, specificity, positive and negative likelihood ratios (LR+ and LR-) and diagnostic odds ratio (DOR) of combined VIA and cytology testing. A secondary outcome was the difference in sensitivity and specificity ratios between the combined test and the single tests.
Two situations were examined: either-positive result cases and both-positive result cases. In the either-positive result case, a positive result implies positivity in at least one of the tests, and a negative result implies negativity in both tests. In the both-positive case, a positive result implies having both tests positive, and a negative result implies negativity in at least one of them.
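The two decision rules can be made concrete with a small illustrative sketch (Python is used here purely for illustration; it is not part of the original analysis):

```python
def combined_result(via_positive, cyto_positive, rule):
    """Combine VIA and cervical cytology results under the two decision
    rules described in the text (illustrative sketch)."""
    if rule == "either":      # positive if at least one test is positive
        return via_positive or cyto_positive
    elif rule == "both":      # positive only if both tests are positive
        return via_positive and cyto_positive
    raise ValueError("rule must be 'either' or 'both'")

# A discordant pair (VIA+, cytology-) is positive under 'either'
# but negative under 'both':
print(combined_result(True, False, "either"))  # True
print(combined_result(True, False, "both"))    # False
```

This makes explicit why the either-positive rule gains sensitivity (fewer ways to miss a diseased woman) while the both-positive rule gains specificity (fewer ways for a healthy woman to test positive).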
The definition of a positive result on cervical cytology was low-grade squamous intraepithelial lesion (LSIL) or higher, according to the Bethesda System. A positive result on Visual Inspection with Acetic acid (VIA) was defined as the cervix turning white when acetic acid was applied. These definitions were used in all included studies.
Quality assessment
Two authors independently examined the risk of bias and applicability using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool (Whiting et al., 2011). A third author was consulted to resolve discrepancies. Items examined for risk of bias included: 1) patient selection, 2) index test, 3) reference standard and 4) flow and timing. The items examining applicability concerns were 1) patient selection, 2) index test and 3) reference standard. Each item was rated as high, low or unclear risk or concern.
A study was considered to be of appropriate quality in the following cases: it avoided a case-control design, it used a randomized recruitment strategy and included more than 80% of patients in the analysis, the reference standard was performed within two weeks of the combined test, the interpretation of cervical cytology was blinded to the VIA result, and all patients underwent the same reference standard test.
A study was considered of low quality when it included symptomatic patients, patients with a high HPV prevalence such as HIV-positive patients, or patients with known precancerous lesions or invasive cancer. Partial verification bias was considered possible if only some of the included patients underwent the reference standard test.
Data collection
Two authors independently extracted the data from eligible studies. When results were discordant, a third author was consulted. We extracted information on the characteristics of each study: authors, year of publication, year the study was conducted, setting, study population and design, screener, threshold for a positive cervical cytology result, and gold standard. The threshold for a positive cervical cytology result was either ASCUS or LSIL. When both ASCUS and LSIL thresholds were reported, we defined low-grade squamous intraepithelial lesion (LSIL) as the positive result because this was the threshold considered in most of the included studies.
The true positive (TP), false positive (FP), true negative (TN) and false negative (FN) rates of both the combined test and the single tests were extracted from individual studies (Macaskill et al., 2010).
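From these 2x2 counts, the accuracy measures pooled in this review follow from standard formulas. The sketch below is illustrative only; the counts are hypothetical, chosen so that the resulting sensitivity and specificity mirror the pooled either-positive estimates reported in the Results:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy measures from a 2x2 table."""
    sens = tp / (tp + fn)          # sensitivity
    spec = tn / (tn + fp)          # specificity
    lr_pos = sens / (1 - spec)     # positive likelihood ratio (LR+)
    lr_neg = (1 - sens) / spec     # negative likelihood ratio (LR-)
    dor = lr_pos / lr_neg          # diagnostic odds ratio; equals (tp*tn)/(fp*fn)
    return sens, spec, lr_pos, lr_neg, dor

# Hypothetical counts yielding sensitivity 0.87 and specificity 0.79:
sens, spec, lrp, lrn, dor = diagnostic_metrics(tp=87, fp=21, tn=79, fn=13)
```

The DOR can equivalently be computed directly from the cell counts as (TP x TN) / (FP x FN), which is how its role as a single summary of discriminating power becomes apparent.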
Data analysis
We used a bivariate hierarchical random-effects model, as recommended in the Cochrane guidelines (Macaskill et al., 2010), using Stata version 12 (StataCorp LP, College Station, TX, USA) with the metandi command (Harbord and Whiting, 2009). A meta-analytical random-effects model was used to pool and compare the relative ratios of sensitivity and specificity for detecting precancerous lesions or cancer, using the combined test as numerator and the single tests as denominators. A threshold of p<0.05 was used to establish statistical significance. Forest plots were produced to present pooled and individual estimates of sensitivity and specificity and their 95% confidence intervals using Cochrane Review Manager version 5.2 (The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark, 2012).
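The paper's pooling was done with the bivariate hierarchical model implemented in Stata's metandi command; as a much simplified, hedged illustration of random-effects pooling, the classic univariate DerSimonian-Laird estimator can be sketched on log-DOR values (all effect sizes and within-study variances below are hypothetical):

```python
import math

def dersimonian_laird(effects, variances):
    """Univariate DerSimonian-Laird random-effects pooling. This is a
    simplified sketch, NOT the bivariate model used in the paper."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_star = [1 / (v + tau2) for v in variances]        # random-effect weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

log_dors = [math.log(x) for x in (12.0, 25.0, 40.0)]    # hypothetical studies
pooled, tau2 = dersimonian_laird(log_dors, [0.10, 0.12, 0.15])
```

The bivariate model improves on this by pooling logit-sensitivity and logit-specificity jointly, accounting for their within-study correlation, which a univariate sketch like this cannot capture.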
Hierarchical summary receiver operating characteristic (HSROC) curves were generated. Heterogeneity was assessed by evaluating the influence of pre-established variables (site of study, "lower-middle-income countries or other"; sample size, "more or less than 900"; screener, "physician or other") on the DOR using a meta-regression model. The I² statistic was calculated to quantify heterogeneity (Macaskill et al., 2010). Lower-middle-income countries were defined, according to the World Bank, as countries with a gross national income (GNI) per capita from $1,046 to $4,125 (World Bank, 2014). Statistical significance was set at p<0.05 (Macaskill et al., 2010).
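The I² statistic mentioned above has a simple closed form in terms of Cochran's Q and its degrees of freedom; a minimal illustration (the Q value below is hypothetical, chosen only to show the >75% threshold reported in the Results):

```python
def i_squared(q, df):
    """I^2 heterogeneity statistic from Cochran's Q: the percentage of
    total variability attributable to between-study heterogeneity."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

print(i_squared(q=32.0, df=8))  # 75.0 -> large heterogeneity
```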
Sensitivity analyses
Sensitivity analyses on verification bias and disease positivity criteria were conducted to evaluate the robustness of the results. We restricted the analyses to the five studies without partial verification bias and to the five studies with only CIN2+ as the definition of disease positivity.
Study characteristics
A total of 353 citations were identified based on article titles (Figure 1). After removing duplicates, 233 abstracts were examined. Forty-three articles were retained for full-text screening, and nine were finally included. Among the excluded articles, 29 did not provide data on the performance of combined VIA and cervical cytology testing and five were duplicates of the same study.
All included articles were based on cross-sectional studies (Table 1). Three were conducted in India and the others in Iran, Pakistan, Sudan, Brazil, Zimbabwe and Kenya. Five studies were conducted in asymptomatic healthy women, one in HIV-positive women, one in symptomatic women and two in women with an unknown clinical condition. The study with the largest sample size, 10,138 women, was a multi-setting study performed in Brazil and Argentina. Most VIA screeners were trained nurses (55.6%). Most studies used LSIL as the cut-off point for a positive cervical cytology test (seven studies). Meanwhile, high-grade CIN was considered the threshold for the disease in six studies.
Quality assessment of studies
Overall, two of the nine studies met the criteria for high quality according to the QUADAS-2 tool. There was no risk of bias in terms of patient selection, as all studies were cross-sectional and all subjects were included in the analysis. However, there were applicability concerns, as nearly half of the included studies did not clearly specify whether participants were asymptomatic. The risk of bias in terms of the index test was low; all studies had a clear definition of a positive result for the VIA and cervical cytology tests. Only one study did not specify the occupation of the screeners. Some studies did not specify whether the histology interpretation was blinded to the result of the cervical cytology test, leading to a potential risk of bias in terms of the reference standard. Among the nine studies, four had a high risk of partial verification bias, because only some positive results were referred for a reference standard examination (data not shown).
Summary estimates of test performance
Figure 2 presents the summary estimates of the sensitivities and specificities of the combined VIA and cervical cytology test and of the single tests in detecting cervical precancerous lesions in each study included in the analysis. The range of sensitivity and specificity was large for all tests.
The pooled estimates of sensitivity and specificity of the combined test in the either-positive result case for detecting cervical precancerous lesions were 0.87 (95% confidence interval: 0.83-0.90) and 0.79 (95% CI: 0.63-0.89), respectively. The corresponding values for the combined test in the both-positive result case were 0.38 (95% CI: 0.29-0.48) and 0.98 (95% CI: 0.96-0.99), respectively. The pooled estimates of the positive and negative likelihood ratios and diagnostic odds ratio (DOR) of the combined test were lower in the either-positive case than in the both-positive result case in all included studies. Details are presented in Table 2.
There was a significant difference in performance between the combined test and the single tests. Compared to the combined test in the both-positive result case, the combined test in the either-positive result case had a significantly higher pooled relative sensitivity, even in the sensitivity analyses restricted to studies without partial verification bias and to CIN2+ studies. Compared to the VIA and cervical cytology tests alone, the combined test in the either-positive result case also had a higher sensitivity. However, its pooled relative specificity was significantly lower than that of the combined test in the both-positive result case or of the VIA and cervical cytology tests alone. Meanwhile, the combined test in the both-positive result case had a significantly higher pooled relative specificity than the VIA and cervical cytology tests alone in both the unrestricted and restricted analyses (results not shown).
Figure 3 shows the hierarchical summary receiver operating characteristic (HSROC) curves of the combined test in the either-positive result case and in the both-positive result case under different scenarios, i.e. all included studies, articles without partial verification bias, and the CIN2+ disease positivity threshold. The curves display the joint sensitivity and specificity in each study, showing the individual estimates, the summary estimates, their 95% confidence region and the prediction region. Compared to the combined test in the both-positive result case, the summary point of the combined test in the either-positive result case lay on the upper-right side, indicating a higher sensitivity and a lower specificity. Additionally, the 95% prediction region for the combined test in the either-positive result case was larger than that in the both-positive result case.
Heterogeneity of diagnostic performance
Heterogeneity between studies was tested with the I² statistic in addition to the influence of covariates on the DOR. The combined test in both the either-positive and both-positive result cases showed large heterogeneity between studies, with an I² statistic higher than 75% (Figure 2).
Table 3 shows that there was no significant association between any covariate and the DOR of the combined test in the either-positive or both-positive result cases when all studies were included in the meta-regression model. When the analysis was restricted to studies with CIN2+ as the threshold of disease, the place of the study had a significant influence on the DOR of the combined test in both the either-positive and both-positive result cases. Additionally, other covariates, including the screener and the size of the study, had a significant influence on the DOR of the combined test in the both-positive result case.
Sensitivity analyses
In analyses restricted to articles without partial verification bias and to high-grade CIN or worse (CIN2+) as the threshold for the diagnosis of disease, the same pattern emerged. The rank of DORs did not change; the DOR of the combined test in the both-positive result case remained the highest. However, the DORs in the restricted analyses were lower than those calculated on all studies. In addition, the specificity of the combined test in the either-positive result case was lower when analyses were restricted to studies without partial verification bias and with high-grade CIN as the threshold of disease (Table 2).
Discussion
To the best of our knowledge, this is the first meta-analysis aiming to determine the accuracy of combined VIA and cervical cytology testing in detecting cervical precancerous and cancerous lesions. The major findings of this meta-analysis are: 1) under the either-positive result case, the combined VIA and cervical cytology test has a higher sensitivity but a lower specificity than under the both-positive result case for detecting cervical precancerous lesions; 2) the sensitivity of the combined test in the either-positive result case was significantly higher than the sensitivities of the VIA or cervical cytology tests alone; 3) the specificity of the combined test in the either-positive result case decreased in analyses restricted to articles without partial verification bias and with the CIN2+ disease positivity threshold; and 4) restriction analyses showed that the screener, the place of study and the size of the population are covariates that significantly influence the diagnostic accuracy of the combined test in the both-positive result case.
The low specificity of the combined test in the either-positive result case, compared to the VIA or cervical cytology tests alone, is probably due to the fact that a true negative result requires negativity of both VIA and cervical cytology. Similarly, the low sensitivity of the combined test in the both-positive result case arises because a positive result requires positivity of both VIA and cervical cytology. In contrast, the combination of HPV DNA and cervical cytology increases test sensitivity while maintaining an adequate specificity (Vesco et al., 2011). Effectively, maintaining the performance of the combined test requires a high consistency of diagnostic accuracy of both tests to detect and rule out the disease. This might not be the case for VIA and cervical cytology, as the interpretation of both tests is subjective. VIA commonly has a high sensitivity but a low specificity compared to cervical cytology (Arbyn et al., 2008; Consul et al., 2012). A positive VIA result can be related not only to cervical precancerous lesions, but also to inflammation and to infections other than HPV infection (Vedantham et al., 2010). Meanwhile, the quality of cervical cytology depends on the quality of the sample collection and the competence of the cytologist in interpreting the result (Denny et al., 2006). As a result, there is a large variation in test performance, for both VIA and cytology, not only between countries but also within countries. For instance, it has been shown that the sensitivity of cytology varied from 28.9% to 76.9% at the LSIL threshold in India (Sankaranarayanan et al., 2004).
The DOR results lead to the same conclusion as the LR+, indicating that the combined test in the both-positive case is the most accurate diagnostic test. An increase in DOR indicates an increase in the discriminating power of a test (Bossuyt et al., 2013). The highest DOR, found for the combined test in the both-positive case, might be explained by its specificity, which was nearly 1 despite its having the lowest sensitivity.
The combined test in both the either-positive and both-positive result cases has advantages and limits in detecting and ruling out the disease. Our meta-analysis found a high probability of false positive results (1-specificity) in the either-positive result case, and a high probability of false negative results (1-sensitivity) in the both-positive result case. False positive results can lead to anxiety and to further unnecessary invasive investigation or treatment, which are harmful in terms of physical, psychological and economic burden. In contrast, false negative results lead to considerable delay in diagnosis and treatment, particularly when the screening interval spreads over several years. This delay might lead to more complicated and advanced stages of the disease, requiring more advanced diagnostic investigations, and consequently delayed treatment and a higher risk of death, as found in countries with high incidence and mortality rates of invasive cervical cancer (Bossuyt et al., 2013).
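The trade-off can be made tangible with simple arithmetic per 1,000 women screened, using the pooled sensitivities and specificities reported above. Note that the 2% disease prevalence used below is an assumption for illustration only, not a figure from the paper:

```python
def screening_outcomes(sens, spec, prevalence, n=1000):
    """Expected false negatives and false positives per n screened women."""
    diseased = prevalence * n
    healthy = n - diseased
    fn = (1 - sens) * diseased      # missed cases -> delayed diagnosis
    fp = (1 - spec) * healthy       # false alarms -> unnecessary work-up
    return fn, fp

# Either-positive rule (pooled: sens 0.87, spec 0.79), assumed 2% prevalence:
fn_e, fp_e = screening_outcomes(0.87, 0.79, 0.02)   # ~2.6 missed, ~205.8 false alarms
# Both-positive rule (pooled: sens 0.38, spec 0.98):
fn_b, fp_b = screening_outcomes(0.38, 0.98, 0.02)   # ~12.4 missed, ~19.6 false alarms
```

Under these assumptions, the either-positive rule misses far fewer cases but generates roughly ten times more false alarms, which is exactly the burden discussed in the paragraph above.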
The performance of the combined test varied across studies. This variability might result from the variability of the performance of both the VIA and cervical cytology tests. The I² statistic consistently indicated large variations between studies. Indeed, meta-regression analysis confirmed this significant variability by exploring the influence of covariates on the DOR in restricted analyses, which included only studies with CIN2+ as the threshold of disease. Our finding is consistent with the study by Chen et al. (2012), which showed that the setting and the size of the population were significantly associated with the DOR of VIA in restricted analyses. These covariates did not significantly influence the DOR in unrestricted analyses, indicating that the influence of covariates depends on study characteristics, particularly the threshold of the disease. To better clarify and rule out the variability of diagnostic test accuracy, more restriction is probably needed, for instance restricting the analyses to articles with similar characteristics of test performance (setting, capacity of the interpreter, etc.). However, we could not conduct this restriction analysis in our meta-analysis due to the limited number of relevant studies. Further individual studies on the performance of the combined VIA and cervical cytology test are apparently required.
The specificity of the combined test in the either-positive result case decreased when analyses were restricted to studies without partial verification bias, indicating an overestimation of specificity for this test. Indeed, partial verification bias can lead to overestimation of sensitivities and specificities as a result of a lower proportion of false negatives. Verification bias could be corrected using a Bayesian approach, multiple imputation or the conventional correction method proposed by Begg and Greenes (de Groot et al., 2011).
As noted, colposcopy is not a perfect test for diagnosing cervical precancerous lesions. Meta-analyses showed that colposcopy had sensitivities ranging from 64% to 99% and specificities from 30% to 93% in the detection of high-grade CIN (Mitchell et al., 1998). In none of the included studies did all women receive a biopsy. The subjectivity of the colposcopy-directed biopsy exam could have affected the pooled estimates of sensitivity and specificity found in our meta-analysis (Sideri et al., 1995). Due to the limited number of included studies, a restriction analysis could not be performed for this issue.
This meta-analysis does have some limitations, which could affect the interpretation of the results. First, due to the limited number of included studies, we could not assess the change in sensitivity and specificity among women with ASCUS as a positive cervical cytology result, with low-grade CIN as the disease, by geographical region, or in symptomatic women. However, the performance of the combined test did not change when the analyses were restricted to articles without partial verification bias and to CIN2+, with the exception of the specificity of the combined test in the either-positive result case, which was higher in the unrestricted analysis. This might reflect an overestimation of the specificity of this test.
Second, VIA is recommended only for women aged 30-45 years, but we could not conduct the analysis in this subgroup due to the lack of information on test performance according to age. This could underestimate the sensitivity due to a greater number of false negative results (FIGO, 2009).
Third, due to the limited number of studies focusing on the diagnostic accuracy of combined VIA and cervical cytology testing for the detection of cervical precancerous and cancerous lesions, we could not explore the performance of sequential testing, i.e. cervical cytology in VIA-positive cases. This strategy might diminish the false positive rate of VIA, particularly in settings where VIA screening is implemented. Further individual and meta-analytic studies are therefore needed to answer this question.
In conclusion, the combination of VIA and cervical cytology in the either-positive result case gained sensitivity compared to the use of a single test, but lost specificity, contrary to the combination in the both-positive result case. Our results suggest that the combined test should be considered in developing countries as a primary screening test if facilities exist to confirm a positive result through colposcopy and biopsy, in order to diminish the number of false positive cases and their consequence, unnecessary treatment.
Figure 1. Flowchart of the Procedure Performed in the Systematic Review
Figure 2. Forest Plot of the Combined VIA and Cervical Cytology Test and the Single Tests
Figure 3. Hierarchical Summary Receiver Operating Characteristic (HSROC) Curves of the Combined Test
Table 1. Characteristics of Included Articles in the Analysis
Symptomatic status consisted of persistent vaginal discharge, intermenstrual bleeding, post-coital bleeding or an unhealthy cervix on examination. LSIL+ consisted of low-grade squamous intraepithelial lesion or worse; ASCUS+ consisted of atypical squamous cells of undetermined significance or worse. ¶ Cervical cytology was conventional cytology with Ayre's spatula and cytobrush. HG-CIN consisted of high-grade cervical intraepithelial neoplasia only; CIN2+ consisted of high-grade cervical intraepithelial neoplasia and invasive cervical cancer; CIN1+ consisted of low-grade and high-grade cervical intraepithelial neoplasia and invasive cervical cancer. High-grade CIN was considered the threshold for the disease in six studies; the gold standard test for confirming cervical precancerous lesions was colposcopy/directed biopsy (Table 1).
Table 3. Sources of Heterogeneity Assessment Through the Analysis of Covariates Influencing DORs in All Included Studies, CIN2+ and Asymptomatic Women
¶ CIN2+: cervical intraepithelial neoplasia grade 2 or worse. Meta-regression was used to assess heterogeneity. The influence of covariates on the DOR could not be assessed in articles with verification bias through this analysis due to the limited number of included studies.
Table 2. Pooled Estimates of Combined VIA and Cervical Cytology Testing: Meta-Analysis Results in All Included Studies, Verification-Unbiased Articles and CIN2+
DOI:http://dx.doi.org/10.7314/APJCP.2015.16.14.5889Combined VIA and Cervical Cytology in Primary Screening for Cervical Cancer: a Systematic Review and Meta-Analysis
|
2017-11-06T18:14:40.637Z
|
2015-09-02T00:00:00.000
|
{
"year": 2015,
"sha1": "e277bb5329c2da3210319ee36265e1d25ffd3d66",
"oa_license": "CCBY",
"oa_url": "http://koreascience.or.kr/article/JAKO201528551642303.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e277bb5329c2da3210319ee36265e1d25ffd3d66",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
214599979
|
pes2o/s2orc
|
v3-fos-license
|
Respiratory motion artefacts in Gd-EOB-DTPA (Primovist/Eovist) and Gd-DOTA (Dotarem)-enhanced dynamic phase liver MRI after intensified and standard pre-scan patient preparation: A bi-institutional analysis
Objective The objective of this study is to evaluate whether intensified pre-scan patient preparation (IPPP), which comprises custom-made educational material on dynamic phase imaging and supervised pre-imaging breath-hold training in addition to a standard informative conversation with verbal explanation of breath-hold commands (standard pre-scan patient preparation, SPPP), might reduce the incidence of gadoxetate disodium (Gd-EOB-DTPA)-related transient severe respiratory motion (TSM) and the severity of respiratory motion (RM) during dynamic phase liver MRI. Material and methods In this bi-institutional study, 100 and 110 patients who received Gd-EOB-DTPA for dynamic phase liver MRI were allocated to either IPPP or SPPP at sites A and B. The control group comprised 202 patients who received gadoterate meglumine (Gd-DOTA), of which 101 patients each were allocated to IPPP or SPPP at site B. RM artefacts were scored retrospectively in dynamic phase images (1: none to 5: extensive) by five and two blinded readers at sites A and B, respectively, and in the hepatobiliary phase of the Gd-EOB-DTPA-enhanced scans by two blinded readers at either site. Results The incidence of TSM was 15% at site A and 22.7% at site B (p = 0.157). IPPP did not reduce the incidence of TSM in comparison to SPPP: 16.7% vs. 21.6% (p = 0.366). This finding was consistent at site A: 12% vs. 18% (p = 0.401) and site B: 20.6% vs. 25% (p = 0.590). The TSM incidence in patients with IPPP and SPPP did not differ significantly between the two sites (p = 0.227; p = 0.390). IPPP did not significantly mitigate RM in comparison to SPPP in any of the Gd-EOB-DTPA-enhanced dynamic phases or the hepatobiliary phase in patients without TSM (all p≥0.072). In the Gd-DOTA control group, on the other hand, IPPP significantly mitigated RM in all dynamic phases in comparison to SPPP (all p≤0.031).
Conclusions We conclude that Gd-EOB-DTPA-related TSM cannot be mitigated by education and training and that Gd-EOB-DTPA-related breath-hold difficulty does not only affect the subgroup of patients with TSM or exclusively the arterial phase as previously proposed.
Introduction
Respiratory motion (RM) during liver dynamic phase contrast-enhanced magnetic resonance imaging (DCE-MRI) substantially degrades image quality and increases the economic burden for health care systems if examinations need to be repeated. Transient severe respiratory motion (TSM) is a well-known phenomenon after administration of gadoxetate disodium (Gd-EOB-DTPA; Primovist®/Eovist®, Bayer HealthCare Pharmaceuticals) that might impede image interpretation, especially of the hepatic arterial phase. The reported incidence of TSM shows a considerable variation of 5-22% between institutions [1-8]. Its pathophysiology is not yet fully understood.
A technical approach to mitigate the effects of Gd-EOB-DTPA-related TSM comprises accelerated MR imaging with short breath-hold times [9-11], multiple arterial phase imaging [12] or free-breathing protocols [13,14]. However, these imaging techniques require sophisticated hard- and software, which might not be available at every institution, and despite these technological advances, the best image quality is achieved in patients without RM during dynamic phase image acquisition. Alternative strategies to reduce the incidence of TSM and the severity of RM in the first place are urgently needed. One alternative strategy that has been described previously to minimize TSM was the modification of the injection protocol of Gd-EOB-DTPA. Kim et al. [15] as well as Polanec et al. [16] found a 50% dilution of Gd-EOB-DTPA at an injection rate of 2 mL/s [15] or 1 mL/s [16], while Davenport et al. [17] found a fixed dose of 10 mL instead of 20 mL, to reduce Gd-EOB-DTPA-related TSM significantly. Another alternative strategy recently described was a modified breathing command that has been advocated to reduce Gd-EOB-DTPA-related TSM. In this study, we evaluated whether intensified pre-scan patient preparation (IPPP) comprising custom-made educational material and standardized breath-hold training might reduce the incidence of Gd-EOB-DTPA-related TSM and the severity of RM during liver DCE-MRI. The effect of IPPP was cross-checked in patients who received gadoterate meglumine (Gd-DOTA; Dotarem®, Guerbet) for dynamic phase imaging.
Materials and methods
The ethical commission of the Otto-von-Guericke University and the University Clinic of Magdeburg, Germany (approval number: 31/14), and the ethical commission of the University of Cologne, Germany (approval number: 18-225), both waived the need for consent, as all examinations were medically indicated, the intervention did not influence patient care or patient health, and all patient data were analyzed anonymously. Hereafter, the University Clinic of Magdeburg, Germany, is referred to as site A, while the University Clinic of Cologne, Germany, is referred to as site B.
Standard pre-scan preparation (SPPP)
SPPP was performed consistently at both sites and comprised an informative conversation accompanied by standardized informed consent documentation (Thieme Compliance®). All patients were informed about the necessity of breath-holding during dynamic phase imaging, potential sensations associated with contrast agent administration and how to behave at the onset of dyspnea.
Intensified pre-scan preparation (IPPP)
IPPP comprised all preparatory steps taken in SPPP. During informative conversation an additional focus was placed on dynamic phase image acquisition, such as the number of acquired phases and diagnostic importance of each phase. Custom-made educational material illustrated the effects of RM during image acquisition (Fig 1). Supervised breath-hold training comprised two 20 s breath-hold cycles measured by means of a stopwatch, which were initiated with the same breath-hold command employed during dynamic phase imaging and patients were instructed to continue shallow and regular breathing at the onset of moderate but still bearable dyspnea.
Patient allocation to SPPP and IPPP
At site A, one board-certified radiologist performed IPPP in 50 consecutive patients scheduled for Gd-EOB-DTPA-enhanced liver MRI between May and August 2013, without dedicated randomization, based on the radiologist's duty in the MRI unit. Fifty consecutive patients with SPPP within the study interval constituted the control group.
At site B, IPPP and documentation of the accomplished breath-hold duration were performed consecutively in 58 and 101 patients scheduled for Gd-EOB-DTPA- and Gd-DOTA-enhanced dynamic phase imaging by several specialized MR technicians between October 2016 and February 2018, without dedicated randomization, based on the technicians' duty in the MRI unit. The technicians who performed IPPP were not involved in the final image acquisition. Fifty-two and 101 consecutive patients scheduled for Gd-EOB-DTPA- and Gd-DOTA-enhanced dynamic phase imaging received SPPP within the study period. The assignment of patients to either group was influenced neither by the investigators nor by the referring physicians. Patient allocation at both sites is depicted in Fig 2.
Image acquisition
The detailed technical parameters of T1-weighted (T1w) pre-contrast, dynamic phase imaging and hepatobiliary phase at site A and B are presented in Table 1.
Site A exclusively employed Gd-EOB-DTPA (0.25 mmol/mL) for liver imaging at a fixed dose of 10 milliliters (mL), administered intravenously at an injection rate of 1 mL/s using an automated power injector (Accutron®, Medtronic), followed by a 30 mL saline chaser at the same injection rate. Bolus tracking was used to detect contrast agent arrival in the distal thoracic aorta.
Site B employed Gd-EOB-DTPA (0.25 mmol/mL) or Gd-DOTA (0.5 mmol/mL) for liver imaging based on site-specific standard operating procedures (SOPs) and/or the request of the referring physicians. Gd-EOB-DTPA was administered intravenously at a fixed dose of 10 mL with an injection rate of 2 mL/s by means of an automated power injector (Spectris Solaris EP®, Medrad, Bayer Healthcare), followed by a 30 mL saline chaser injected at the same rate. Gd-DOTA was administered weight-adapted at a dose of 0.2 mL/kg with the same injection parameters. Bolus tracking was performed to detect contrast agent arrival in the distal thoracic aorta. Both sites employed an automated breathing command during dynamic phase imaging.
Image analysis
The pre-contrast, arterial, portal venous, transitional and hepatobiliary phase (HBP; only applicable in Gd-EOB-DTPA-enhanced scans) images were anonymized, randomized and loaded separately onto the PACS systems. Five blinded board-certified radiologists (HBP: two blinded board-certified radiologists) at site A and two blinded board-certified radiologists at site B independently analyzed the images for severity of RM. RM was graded according to Davenport et al. [1,2]: Grade 1 = none, Grade 2 = minimal, Grade 3 = moderate with some impairment of image quality, Grade 4 = severe with substantial impairment of image quality, Grade 5 = uninterpretable images (see Fig 3). TSM was diagnosed if the RM grade differed by ≥ 2 points between the pre-contrast and arterial phase images with return to pre-contrast values in the portal venous or transitional phase (Fig 3). Patients with an RM grade of ≥ 3 in the pre-contrast phase were not assigned to the TSM group. The hepatobiliary phase after Gd-EOB-DTPA administration, though not part of the dynamic contrast phases per se, was partly included in the analysis as it might allow sufficient detection and characterization of focal liver lesions, especially when the arterial phase is uninterpretable due to severe TSM. Accordingly, in addition to the dynamic phases, it is also important that the hepatobiliary phase is artifact-free or has only minor artifacts.

Table 1 note: Imaging parameters were consistent in the pre-contrast and dynamic image phases after contrast agent administration. T = Tesla; FFE = fast field echo; * = only applicable after Gd-EOB-DTPA administration. https://doi.org/10.1371/journal.pone.0230024.t001
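The TSM rule above lends itself to a compact classifier over the four RM grades. This sketch is an illustration of our reading of the rule, not the authors' code; in particular, "return to pre-contrast values" is interpreted here as the grade falling back to the baseline value or below:

```python
def is_tsm(pre, arterial, portal_venous, transitional):
    """Transient severe respiratory motion per the grading rule in the text:
    RM grade rises by >= 2 points from the pre-contrast to the arterial
    phase, with return to the pre-contrast value in the portal venous or
    transitional phase. Patients with a pre-contrast grade >= 3 are
    excluded from the TSM group."""
    if pre >= 3:
        return False
    worsened = (arterial - pre) >= 2
    recovered = portal_venous <= pre or transitional <= pre
    return worsened and recovered

# Example: grade 1 at baseline, grade 4 in the arterial phase,
# back to grade 1 in the portal venous phase -> TSM.
print(is_tsm(pre=1, arterial=4, portal_venous=1, transitional=2))  # True
```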
Evaluation of risk factors for Gd-EOB-DTPA-related TSM
Patient characteristics including comorbidities and potential risk factors for TSM were retrieved from the electronic medical record system. Pleural effusion and ascites were measured in the MR images and were scored as moderate (<2 and <5 cm) or severe (>2 and >5 cm). Signs of lung fibrosis or emphysema were evaluated as present or absent in computed tomography studies, whenever available.
Statistical analysis
Statistical analyses were performed using SPSS Statistics for Windows, version 23.0 (IBM Corp., Armonk, NY). Continuous variables are presented as the median and interquartile range (25th-75th percentile) and categorical variables as numbers and percentages. RM scores are additionally presented as the mean ± SD. Inter-reader agreement was assessed by calculating the absolute agreement, single-measure intra-class correlation coefficient (ICC), applying a two-way random effects model. Pairwise comparisons were performed using the Mann-Whitney U test for continuous variables and Pearson's χ2 test or Fisher's exact test for categorical variables. Fisher's exact test was performed if at least one cell had an expected count < 5. All reported p-values were calculated based on two-sided test hypotheses and p-values of ≤ 0.05 were considered statistically significant. As the analyses were regarded as explorative, we did not adjust for multiple testing.
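The pairwise-comparison logic above (Mann-Whitney U for continuous variables; Fisher's exact test when an expected cell count is < 5, otherwise the χ2 test) can be sketched with SciPy; the helper names are illustrative, and this is a sketch of the decision rule rather than the SPSS pipeline actually used:

```python
import numpy as np
from scipy import stats

def compare_groups(values_a, values_b):
    """Mann-Whitney U test for a continuous variable (two-sided)."""
    return stats.mannwhitneyu(values_a, values_b, alternative="two-sided")

def compare_categorical(table):
    """Chi-square test for a contingency table, falling back to Fisher's
    exact test if any expected cell count is < 5, as described above."""
    table = np.asarray(table)
    expected = stats.contingency.expected_freq(table)
    if (expected < 5).any():
        return stats.fisher_exact(table)
    chi2, p, _, _ = stats.chi2_contingency(table, correction=False)
    return chi2, p

# Example with small expected counts -> Fisher's exact test is used
stat, p = compare_categorical([[2, 8], [1, 9]])
print(round(p, 3))
```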
Inter-reader agreement for grading of respiratory motion artefacts
The inter-reader agreement for RM grading was excellent (>0.
IPPP and SPPP in Gd-EOB-DTPA-enhanced dynamic phase imaging
Patients allocated to SPPP and IPPP did not differ significantly in any of the baseline characteristics (all p ≥ 0.129; Table 2).
Risk factors for Gd-EOB-DTPA-related TSM
Prior episodes of TSM (p = 0.005) and a breath-hold capacity of <17 s during pre-imaging breath-hold training were associated with the occurrence of TSM (p = 0.025; Table 3).
IPPP and SPPP in Gd-DOTA-enhanced dynamic phase imaging
More patients with moderate ascites were allocated by chance to SPPP (p = 0.048); otherwise, baseline characteristics did not differ significantly between patients allocated to SPPP or IPPP (all p ≥ 0.052; Table 2). The Gd-DOTA group comprised more male patients (p = 0.001), with a higher mean body mass index (BMI; p = 0.013) and more cirrhosis (p < 0.001) but fewer malignant tumors (p < 0.001) than the Gd-EOB-DTPA group. RM grades were similar in any dynamic phase in patients with and without prior liver DCE-MRI (all p ≥ 0.557). Contrary to the Gd-EOB-DTPA group, IPPP significantly mitigated RM in all dynamic phases in comparison to SPPP (all p ≤ 0.031; Fig 5). Patients who received IPPP in the Gd-DOTA group showed significantly less RM in the arterial, portal venous and transitional phases (all p ≤ 0.020) than non-TSM patients allocated to IPPP in the Gd-EOB-DTPA group, whereas RM was similar in both contrast agent groups in patients who received SPPP (all p ≥ 0.081; Table 4).
Discussion
In this bi-institutional study, we strived to investigate if an intensified pre-scan patient preparation (IPPP) could reduce the frequency of Gd-EOB-DTPA-related TSM and the severity of RM during liver DCE-MRI. We crosschecked the effects of IPPP in patients who received Gd-DOTA-enhanced DCE-MRI. Communication about the significance of dynamic phase imaging for diagnosis and the effects of RM might differ between institutions, and this lack of standardization might contribute to the variable incidence of TSM. For that purpose, the bi-institutional approach strengthens the results of this study. Our rationale was to increase patients' awareness of why it is crucial to adhere to breath-hold commands through detailed procedural information, analogous to previous studies conducted to reduce unintentional head or limb movement during MRI [20,21]. Supervised breath-hold training in a standardized way aimed to increase patients' ability to cope with breath-holding, train adequate behavior at the onset of dyspnea and potentially increase breath-hold duration [22]. In our study, the frequency of TSM was lower in the IPPP than in the SPPP group, but without statistical significance. The TSM frequency discovered in our study matched the TSM frequency described previously in the literature [1][2][3][4][5][6][7][8], which corroborates the hypothesis that Gd-EOB-DTPA acts as a chemo-toxic trigger evoking TSM that cannot be willingly mitigated by education and training. Our results differ from the results of Gutzeit et al. [18] and Song et al. [19]. The authors reduced the incidence of TSM from 13% to 0% (4/30 vs. 0/30 patients) [18] and from 14% to 3.8% (14/100 vs. 3/80 patients) [19] by employing a modified breath-hold command with several breathing cycles prior to imaging.
We speculate that additional mechanisms of action aside from training and habituation, as proposed by the authors, might have been activated through slow deep breathing, such as optimization of oxygenation [23] or short-term reduction of sympathetic activation and chemo-reflex response [24,25]. Such mechanisms would not have been targeted with our strategy. In our patient cohort, prior episodes of TSM were significantly associated with the occurrence of TSM, consistent with other studies [3,26], whereas other risk factors reported in the literature, such as age [6], gender [6,7,27] or BMI [5,28,29], were not. We identified impaired breath-hold capacity <17 s during breath-hold training as an additional risk factor for TSM. Interestingly, IPPP did not significantly mitigate RM in any of the Gd-EOB-DTPA-enhanced dynamic phases in patients without TSM, whereas it significantly reduced RM in all dynamic phases in patients who received Gd-DOTA. This finding implies that Gd-EOB-DTPA-related breath-hold difficulty affects neither only the subgroup of patients with obvious TSM nor exclusively the arterial phase, as proposed in previous studies [1,2,30], but all dynamic phases, albeit to a much lesser extent. To the best of our knowledge, our study is the first that used such a study design and yielded these results.
Despite the difficulty of reducing TSM, we want to emphasize that hepatospecific contrast agents with their unique pharmacokinetic properties cannot be replaced and are still urgently needed for liver lesion detection and characterization as well as the determination of liver function. Currently, the most promising strategies to either improve image quality despite TSM or reduce TSM in the first place, as we anticipated, include the dilution of gadoxetic acid [15,16] and new acquisition methods to shorten the acquisition time [12,31], acquire multiple arterial phase images in one single breath-hold [11,12,32] or acquire artifact-free images during free breathing [13,33,34]. The results from new acquisition methods are encouraging, but their need for sophisticated hardware and software (parallel imaging techniques: SENSE, GRAPPA, CAIPIRINHA, VIBE, compressed sensing) still constrains their availability. Our study had some limitations. First, there was no dedicated randomization for IPPP at either site, which might have introduced a selection bias. However, it was performed in consecutive patients based on the staffs' duty in the MRI unit, which constitutes an element of coincidence. Aside from moderate ascites in the Gd-DOTA group, patient characteristics were similar in all patient groups. Second, there might be a bias by choice of contrast agent at site B, which, however, was based on site-specific SOPs and not influenced otherwise. Third, the injection rate differed between both sites. However, we found no significant association between injection rate and incidence of TSM, corroborating the results by Ringe et al. [35] but contradicting the results by Kromrey et al. [31]. Here, it is important to mention that there is a huge variation and considerable overlap of the reported rates of TSM after different injection rates (1 mL/s: 4.8% to 12.9% [5,26]; 2 mL/s: 7.5% to 21.1% [6,8]).
Also, some institutions prefer weight-adapted, others fixed doses of gadoxetic acid, making comparisons even more difficult. Fourth, acquisition time for the dynamic phases differed between both sites, with a near-significant association between scan time and TSM (p = 0.064; Table 3). Fifth, the effect of IPPP was measured only indirectly based on RM image artefacts, which is prone to be biased by subjective interpretation. Although the inter-reader agreement in our study was very good to excellent and matched the results of a recent multi-center trial [36], the assessment of IPPP by dedicated patient questionnaires, respiratory waveform analysis [7,10,14,37,38] or including classification of hyper- and hypovascular liver lesions might have added valuable information and should be addressed in future studies.
Conclusions
In conclusion, IPPP failed to reduce Gd-EOB-DTPA-related TSM and RM in patients without TSM in comparison to SPPP, corroborating the hypothesis that Gd-EOB-DTPA acts as a chemo-toxic trigger evoking breath-hold difficulties which cannot be mitigated by these measures. Interestingly, IPPP nevertheless seems to be an effective way to mitigate RM in liver DCE-MRI with extracellular contrast agents such as Gd-DOTA. This suggests that Gd-EOB-DTPA-related breath-hold difficulty affects neither only the subgroup of patients with TSM nor exclusively the arterial phase, as previously proposed, but rather all patients and all dynamic phases, albeit to a much lesser extent.
Supporting information S1
Energy of Computing on Multicore CPUs: Predictive Models and Energy Conservation Law
Energy is now a first-class design constraint along with performance in all computing settings. Energy predictive modelling based on performance monitoring counts (PMCs) is the leading method used for prediction of energy consumption during an application execution. We use a model-theoretic approach to formulate the assumed properties of existing models in a mathematical form. We extend the formalism by adding properties, heretofore unconsidered, that account for a limited form of energy conservation law. The extended formalism defines our theory of energy of computing. By applying the basic practical implications of the theory, we improve the prediction accuracy of state-of-the-art energy models from 31% to 18%. We also demonstrate that use of state-of-the-art measurement tools for energy optimisation may lead to significant losses of energy (ranging from 56% to 65% for applications used in experiments) since they do not take into account the energy conservation properties.
INTRODUCTION
Energy is now a first-class design constraint along with performance in all computing settings [1], [2] and a serious environmental concern [3]. Accurate measurement of energy consumption during an application execution is key to application-level energy minimization techniques [4], [5], [6], [7]. There are three popular approaches to providing it [8]: a) system-level physical measurements using external power meters, b) measurements using on-chip power sensors, and c) energy predictive models. The first approach lacks the ability to provide fine-grained component-level decomposition of the energy consumption of an application. This is essential to finding an energy-efficient configuration of the application. The second approach is not accurate enough for use in application-level energy optimization methods [8].
Energy predictive modelling emerged as the preeminent alternative. The existing models predominantly use performance monitoring counts (PMCs) as predictor variables. PMCs are special-purpose registers provided in modern microprocessors to store the counts of software and hardware activities. A pervasive approach is to determine the energy consumption of a hardware component based on linear regression of the PMC counts in the component during an application run. The total energy consumption is then calculated as the sum of these individual consumptions.
In this work, we summarize and generalize the assumptions behind the existing work on PMC-based energy predictive modelling. We use a model-theoretic approach to formulate the assumed properties of the existing models in a mathematical form. We extend the formalism by adding properties, heretofore unconsidered, that are basic implications of the universal energy conservation law. The new properties are intuitive and have been experimentally validated. The extended formalism defines our theory of energy of computing. Using the theory, we prove that an energy predictive model is linear if and only if each of its PMC parameters is additive in the sense that the PMC for a serial execution of two applications is the sum of the PMCs for the individual execution of each application.
Basic practical implications of the theory include an additivity test identifying model parameters suitable for more reliable energy predictive modelling and constraints for models (for example, zero intercept and positive coefficients for linear regression models) that disallow violation of energy conservation properties. We incorporate these implications in the state-of-the-art models and study their prediction accuracy using a strict experimental methodology on a modern Intel multicore processor.
As the first step, we test the additivity of PMCs offered by the Likwid [9] package for compound applications. We show that all the PMCs fail the additivity test where the input tolerance is 5%. We observe that a PMC can be non-additive with an error as high as 3075% and there are many PMCs where the error is over 100%. This suggests that the use of highly non-additive PMCs as predictor variables can impair the prediction accuracy of the models.
To understand the causes of the non-additivity, we study the behaviour of PMCs with different numbers of threads/cores used in applications. We demonstrate a rise in the number of non-additive PMCs with the increase in number of cores employed in the application. We consider this to be an inherent trait of a modern multicore computing platform because of its severe resource contention and non-uniform memory access (NUMA).
We select six PMCs which are common in the state-of-the-art models and which are highly correlated with dynamic energy consumption. All the PMCs fail the additivity test for an input tolerance of 5%; one PMC is comparatively more additive than the rest. We construct seven linear regression models, {A, B, ..., G}. All the models have zero intercept and positive coefficients. They incorporate basic sanity checks that disallow violations of the energy conservation properties in our theory of energy of computing.
ModelA employs all the selected PMCs as predictor variables. ModelB is based on the five most additive PMCs. ModelC uses the four most additive PMCs, and so on until ModelF, which contains the single most additive PMC. ModelG is based on the three PMCs most correlated with dynamic energy consumption. We compare the prediction accuracies of these seven models plus Intel RAPL (Running Average Power Limit) [10] against the system-level physical measurements from power meters using HCLWattsUp, which we consider to be the ground truth. We demonstrate that as we remove highly non-additive PMCs one by one from the models, their prediction accuracy improves. ModelE, which employs the two most additive PMCs, has the best average prediction accuracy. Even though ModelF contains the most additive PMC, it fares poorly due to a poor linear fit, thereby suggesting the perils of a pure fitting exercise. RAPL's average prediction accuracy is equal to that of ModelA. ModelG fares better than RAPL and ModelA.
Therefore, we conclude that use of highly additive PMCs is crucial to good prediction accuracy of energy predictive models. Indeed, if PMCs used in the model are all non-additive with an error of 100%, then the predictive error of the model cannot be less than 100%.
Finally, to demonstrate the importance of the accuracy of energy measurements, we study optimization of a parallel matrix-matrix multiplication application for dynamic energy using two measurement methods. The first uses IntelRAPL [10] which is a popular mainstream tool. The second is based on system-level physical measurements using power meters (HCLWattsUp [11]) which we believe are accurate. We show that using IntelRAPL measurements instead of HCLWattsUp ones will lead to significant energy losses ranging from 34% to 67% for matrix sizes used in the experiments.
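The energy loss incurred by optimizing with an inaccurate tool can be quantified by evaluating the configuration chosen by that tool with ground-truth measurements and comparing it to the truly optimal configuration; a minimal sketch (the helper name and numbers are illustrative, not the paper's results):

```python
def energy_loss_pct(e_true_of_tool_choice, e_true_of_best_choice):
    """Relative dynamic-energy loss (in %) incurred when the configuration
    picked by an inaccurate measurement tool is evaluated with ground-truth
    measurements, versus the truly optimal configuration."""
    return ((e_true_of_tool_choice - e_true_of_best_choice)
            / e_true_of_best_choice * 100.0)

# Illustrative numbers: the tool-chosen configuration actually consumes
# 1340 J, while the ground-truth-optimal one consumes 1000 J.
print(round(energy_loss_pct(1340.0, 1000.0), 1))  # 34.0
```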
The main original contributions of this work are: • Theory of energy of computing and its practical implications, which include an additivity test for model parameters and constraints for model coefficients, that can be used to improve the prediction accuracy of energy models. • Improvements to prediction accuracy of the state-ofthe-art energy models using the practical implications of our theory of energy of computing. • Study demonstrating significant energy losses incurred due to employment of inaccurate energy measuring tools (in energy optimization methods) since they do not take into account the energy conservation properties.
We organize the rest of this paper as follows. We present terminology related to energy predictive models. This is followed by an overview of our formal theory of energy of computing. Then, we present experimental results, followed by a survey of related work and the conclusion.
TERMINOLOGY
There are two types of power consumptions in a component: dynamic power and static power. Dynamic power consumption is caused by the switching activity in the component's circuits. Static power or idle power is the power consumed when the component is not active or doing work. From an application point of view, we define dynamic and static power consumption as the power consumption of the whole system with and without the given application execution. From the component point of view, we define dynamic and static power consumption of the component as the power consumption of the component with and without the given application utilizing the component during its execution.
There are two types of energy consumptions, static energy and dynamic energy. We define the static energy consumption as the energy consumption of the platform without the given application execution. Dynamic energy consumption is calculated by subtracting this static energy consumption from the total energy consumption of the platform during the given application execution. If P_S is the static power consumption of the platform and E_T is the total energy consumption of the platform during the execution of an application, which takes T_E seconds, then the dynamic energy E_D can be calculated as

E_D = E_T − (P_S × T_E)

In this work, we consider only the dynamic energy consumption. We describe the rationale behind using dynamic energy consumption in the Appendix A.
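The definition above amounts to a one-line computation; a minimal sketch (the function name and units are illustrative):

```python
def dynamic_energy(total_energy_j, static_power_w, exec_time_s):
    """Dynamic energy E_D = E_T - P_S * T_E, as defined above.

    total_energy_j : total platform energy during the run (joules)
    static_power_w : idle (static) power of the platform (watts)
    exec_time_s    : application execution time (seconds)
    """
    e_d = total_energy_j - static_power_w * exec_time_s
    if e_d < 0:
        raise ValueError("static energy exceeds total energy; check inputs")
    return e_d

# Example: 3000 J total over 10 s on a platform idling at 120 W
print(dynamic_energy(3000.0, 120.0, 10.0))  # 1800.0
```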
ENERGY PREDICTIVE MODELS OF COMPUTING: INTUITION, MOTIVATION, AND THEORY
We summarize and generalize the assumptions behind the current work on PMC-based power/energy modelling. We use a model-theoretic approach to formulate the assumed properties of these models in a mathematical form. Then we extend the formalism by adding properties, which are intuitive and which we have experimentally validated but have never been considered previously. The properties are manifestations of the fundamental physical law of energy conservation. We introduce two definitions based on the properties of the extended model, called weak composability and strong composability. An energy predictive model satisfying all the properties of the extended model is termed a consistent energy model. The extended model and the two definitions define our theory of energy predictive models of computing.
Finally, we mathematically derive properties of linear consistent energy predictive models. We prove that a consistent PMC-based energy model is linear if and only if it is strongly composable with each PMC variable being additive. The practical implication of this theoretical result is that each PMC variable of a linear energy predictive model must be additive. The significance of this property is that it can be efficiently tested and hence used in practice to identify PMC variables that must not be included in the model. The notation and the terminology used in the proposed theory are given in Table 1.
Intuition and Motivation
The essence of PMC-based energy predictive models is that an application run can be accurately characterized by an n-vector of PMCs over R_{≥0}. Any two application runs characterized by the same PMC vector are supposed to consume the same amount of energy. The applications in these runs may be different, but the same computing environment is always assumed. Thus, PMC-based models are computer system-specific.
Based on these assumptions, any PMC-based energy model is formalized by a set of PMC vectors over R^n_{≥0} and a function f_E : R^n_{≥0} → R_{≥0} mapping the vectors in this set to energy values. No other properties of the set and the function are assumed.
In this work, we extend this model by adding properties that characterize the serial execution of two applications. To aid the exposition, we follow some notation and terminology. A compound application is defined as the serial execution of two applications, which we call the base applications. If the base applications are A and B, we denote their compound application by A ⊕ B. We will refer solely to energy predictive models hereafter since there exists a linear functional mapping from PMC-based power predictive models to them. When we say energy consumption, we mean dynamic energy consumption. The energy consumption that is experimentally observed during the execution of an application A is denoted by E(A). The energy consumption of the compound application A ⊕ B, E(A ⊕ B), is the energy consumption that is experimentally observed during the execution of the compound application.
First, we aim to reflect in the model the observation that in a stable and dedicated environment, where each run of the same application is characterized by the same PMC vector, for any two applications, the PMC vector of their serial execution will always be the same. To introduce this property, we add to the model an (infinite) set of applications denoted by A. We postulate the existence of binary operators such that for any applications A, B ∈ A with PMC vectors a = {a_k}_{k=1}^n and b = {b_k}_{k=1}^n respectively, the PMC vector of the compound application A ⊕ B will be equal to {a_k •_{AB,k} b_k}_{k=1}^n.

TABLE 1: Notation.
• E(A ⊕ B): energy consumption of the compound application A ⊕ B
• f_E(a): energy value for the input PMC vector a
• O: set of binary operators
• a_k •_{AB,k} b_k: binary operator •_{AB,k} combining the k-th PMCs a_k and b_k in the PMC vectors a and b for the applications A, B ∈ A, respectively
• {•_{AB,1}, ..., •_{AB,n}}: set of binary operators combining the PMC vectors for the applications A, B ∈ A

Next, we introduce properties, which are manifestations of the universal energy conservation law. The following property essentially states that doing nothing (signified by a null vector of PMCs, NULL = {0}_{k=1}^n ∈ R^n_{≥0}) does not consume or generate energy: f_E(NULL) = 0. The following property postulates that an application with a PMC vector that is not NULL must consume some energy: f_E(x) > 0 for any x ≠ NULL. The intuition behind this property is that since PMCs account for energy consuming activities of applications, an application with any energy consuming activity higher than zero activity (a NULL PMC vector) must consume more energy than zero.
Finally, we aim to reflect the observation that the consumed energy of the compound application A ⊕ B is always equal to the sum of the energies consumed by the individual applications A and B respectively:

E(A ⊕ B) = E(A) + E(B)

To introduce this property in the extended model, we postulate the following:

f_E({a_k •_{AB,k} b_k}_{k=1}^n) = f_E(a) + f_E(b)

To summarize, while existing models are focused on abstract application runs and lack any notion of applications, we introduce this notion in the extended model. The additional structure introduced in the extended model allows one to prove the mathematical properties of energy predictive models.
Formal Summary of Properties of Extended Model
The formal summary of the properties of the extended model follows. Property 3.1 states that every application P ∈ A is characterized by a PMC vector p ∈ R^n_{≥0}. Property 3.2 (Weak Composability) states that for any applications P, Q ∈ A with PMC vectors p, q ∈ R^n_{≥0} respectively, the PMC vector of the compound application P ⊕ Q will be equal to {p_k •_{PQ,k} q_k}_{k=1}^n. Property 3.3 (Zero Energy, Energy Conservation) states that f_E(NULL) = 0.
Property 3.4 (Positive Energy, Energy Conservation) states that f_E(x) > 0 for any x ≠ NULL. Property 3.5 (Energy Additivity) states that f_E({p_k •_{PQ,k} q_k}_{k=1}^n) = f_E(p) + f_E(q). We term an energy predictive model satisfying all the above properties of the extended model a consistent energy model.
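A candidate model can be checked against these consistency properties numerically; a minimal sketch, assuming PMC vectors of compound applications combine by element-wise addition (i.e., additive PMCs) and using illustrative names:

```python
def is_consistent(f_e, pmc_pairs, tol=1e-6):
    """Check the consistency properties above for a candidate energy model
    f_e mapping a PMC vector (tuple of non-negative floats) to energy.
    Assumes compound PMC vectors combine by element-wise addition.
    `pmc_pairs` is a list of (a, b) PMC-vector pairs to test."""
    n = len(pmc_pairs[0][0])
    null = (0.0,) * n
    if abs(f_e(null)) > tol:                      # zero energy for NULL
        return False
    for a, b in pmc_pairs:
        if any(a) and f_e(a) <= 0:                # positive energy (checked on a)
            return False
        compound = tuple(x + y for x, y in zip(a, b))
        if abs(f_e(compound) - (f_e(a) + f_e(b))) > tol:  # energy additivity
            return False
    return True

# A zero-intercept linear model with positive coefficients is consistent:
model = lambda x: 2.0 * x[0] + 0.5 * x[1]
print(is_consistent(model, [((1.0, 2.0), (3.0, 4.0))]))  # True
```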
Strong Composability: Definition
The definition of strong composability of models follows. The strong composability property of a model essentially states that the binary operators used in the model to compute PMC vectors of compound applications are not application specific. In other words, the set O consists of only n binary operators, one for each PMC parameter, O = {•_k}_{k=1}^n, so that for any P, Q ∈ A and their PMC vectors p = {p_k}_{k=1}^n, q = {q_k}_{k=1}^n ∈ R^n_{≥0}, the PMC vector of the compound application P ⊕ Q will be equal to {p_k •_k q_k}_{k=1}^n.
Mathematical Analysis of Linear Energy Predictive Models Based on The Theory of Energy of Computing
In this section, we mathematically derive properties of linear consistent energy predictive models, that is, linear energy models satisfying properties (3.1 to 3.5).
By definition, a model is linear if the function f_E is linear. To the best of our knowledge, all the state-of-the-art energy predictive models for multicore CPUs are based on linear regression. While they model total energy consumption, we consider dynamic energy consumption for reasons described in the Appendix A. The mathematical form of these models can be stated as follows:

f_E(x) = β_0 + Σ_{i=1}^n β_i × x_i    (3)

where β_0 is called the model intercept and β = {β_1, ..., β_n} is the vector of regression coefficients or the model parameters. In real life, there usually is stochastic noise (measurement errors). Therefore, the measured energy is typically expressed as

E = f_E(x) + ε

where the error term or noise ε is a Gaussian random variable with expectation zero and variance σ², written ε ∼ N(0, σ²). We will ignore the noise term in our mathematical proofs to follow. Theorem 1. If a linear energy predictive model (3) is consistent, the model intercept must be zero and the model coefficients must be positive.
Proof. From the energy conservation property 3.3, f_E(NULL) = 0, and substituting x = NULL into (3) gives β_0 = 0. From the energy conservation property 3.4, f_E(x) > 0 for any x ≠ NULL; applying this to the unit vectors along each dimension yields β_i > 0 for all i ∈ {1, ..., n}. To summarize, a linear energy predictive model satisfying the energy conservation properties (3.3 and 3.4) has a zero model intercept and positive model coefficients. Also, as we only consider models satisfying property 3.3, the linearity of the function f_E(x) can be equivalently defined as follows: for any α ∈ R_{≥0} and p, q ∈ R^n_{≥0},

f_E(p + q) = f_E(p) + f_E(q)    (5)

f_E(α × p) = α × f_E(p)    (6)

Theorem 2. If a consistent energy model is linear, then it is strongly composable with O = {+}.
Proof. From properties 3.2 and 3.5 of weak composability, we have

f_E({p_k •_{PQ,k} q_k}_{k=1}^n) = f_E(p) + f_E(q)

Using the property (5) of a linear predictive model, f_E(p + q) = f_E(p) + f_E(q), so the PMC vector of the compound application must equal p + q, that is, •_{PQ,k} = + for all k and all P, Q ∈ A.

Theorem 3. If a consistent energy model is strongly composable with O = {+}, then it is linear.

Proof. First, we prove the first defining linearity property (5):

f_E(p + q) = f_E({p_k + q_k}_{k=1}^n) = f_E(p) + f_E(q)

This proves the first property of linearity. We now prove the second defining property of linearity (6). For any integer n > 0, repeated application of (5) gives f_E(n × p) = n × f_E(p). Thus, for any rational m/n > 0, f_E((m/n) × p) = (m/n) × f_E(p). By definition, any real number α is a limit of an infinite sequence of rational numbers. Consider a sequence {α_k} of positive rational numbers such that lim_{k→+∞} α_k = α. Then,

f_E(α × p) = lim_{k→+∞} f_E(α_k × p) = lim_{k→+∞} α_k × f_E(p) = α × f_E(p)

Therefore, we prove using Theorem 2 and Theorem 3 that a consistent energy model is linear if and only if it is strongly composable with O = {+}. A consistent PMC-based energy model is linear if and only if it is strongly composable, with each PMC variable being additive. The practical implication of this theoretical result is that each PMC variable of a linear energy predictive model must be additive.
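The practical constraints derived above (zero intercept, positive coefficients) can be enforced when fitting a linear model by omitting the intercept column and using non-negative least squares; a minimal sketch on synthetic data (the numbers are illustrative, not measurements from the experiments):

```python
import numpy as np
from scipy.optimize import nnls

# Toy training data: rows are application runs, columns are PMC counts.
X = np.array([[1.0e9, 2.0e8],
              [2.0e9, 1.0e8],
              [4.0e9, 8.0e8],
              [3.0e9, 5.0e8]])
# Dynamic energy per run (joules), generated from known positive weights.
y = X @ np.array([3.0e-9, 5.0e-9])

# Zero intercept is enforced by *not* adding a column of ones; nnls
# constrains every coefficient to be >= 0, as the theorem requires.
beta, residual = nnls(X, y)
print(np.round(beta * 1e9, 3))  # close to [3. 5.]
print(residual < 1e-3)          # True for this noise-free toy data
```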
EXPERIMENTAL RESULTS
This section is divided into two parts.
In the first part, we study the additivity of PMCs for compound applications using an additivity test. We analyse the impact on prediction accuracy of models using additive and non-additive PMCs as predictor variables.
In the second part, we study optimization of a parallel matrix-matrix multiplication application for dynamic energy using two measurement tools: IntelRAPL [10], which is a popular mainstream tool, and system-level physical measurements using power meters (HCLWattsUp [11]).
Study of Additivity of PMCs
Our experimental platform is a modern Intel Haswell multicore server CPU whose specifications are given in Table 2 (the base applications are listed in Table 3). For each application run, we measure the following: 1) dynamic energy consumption, 2) execution time, and 3) PMCs. The dynamic energy consumption during the application execution is measured using a WattsUp Pro power meter and obtained programmatically via the HCLWattsUp interface [11]. The power meter is periodically calibrated using an ANSI C12.20 revenue-grade power meter, Yokogawa WT210.
We use Likwid [9], [12] to obtain the PMCs. It offers 164 PMCs on our platform. We eliminate PMCs with counts less than or equal to 10. To ensure the reliability of our results, we follow a statistical methodology where a sample mean for a response variable is obtained from multiple experimental runs. The sample mean is calculated by executing the application repeatedly until it lies in the 95% confidence interval and a precision of 0.025 (2.5%) has been achieved. For this purpose, Student's t-test is used assuming that the individual observations are independent and their population follows the normal distribution. We verify the validity of these assumptions by plotting the distributions of observations. The server is fully dedicated to the experiments. To ensure reliable energy measurements, we took the following precautions:
1) HCLWattsUp API [11] gives the total energy consumption of the server during the execution of an application using system-level physical measurements from the external power meters. This includes the contribution from components such as NIC, SSDs, fans, etc. To ensure that the value of dynamic energy consumption is purely due to CPUs and DRAM, we verify that all the components other than CPUs and DRAM are idle using the following steps:
• Monitoring the disk consumption before and during the application run. We ensure that there is no I/O performed by the application using tools such as sar, iotop, etc.
• Ensuring that the problem size used in the execution of an application does not exceed the main memory, and that swapping (paging) does not occur.
• Ensuring that the network is not used by the application using monitoring tools such as sar, atop, etc.
• Binding an application during its execution to resources using core-pinning and memory-pinning.
2) Our platform supports three modes to set the fan speed: minimum, optimal, and full. We set the speed of all the fans to optimal during the execution of our experiments.
We make sure there is no contribution to the dynamic energy consumption from fans during an application run by following the steps below:
• We continuously monitor the temperature of the server and the speed of the fans, both when the server is idle and during the application run. We obtain this information by using Intelligent Platform Management Interface (IPMI) sensors.
• We observed that both the temperature of the server and the speeds of the fans remained the same whether the given application is running or not.
• We set the fans at full speed before starting the application run. The results from this experiment were the same as when the fans were run at optimal speed.
• To make sure that pipelining, cache effects, etc., do not affect the measurements, the experiments are not executed in a loop and sufficient time (120 seconds) is allowed to elapse between successive runs. This time is based on observations of the times taken for the memory utilization to revert to base utilization and the processor (core) frequencies to come back to the base frequencies.
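The repeat-until-precision methodology described above can be sketched as follows (the helper name and the synthetic "measurement" are illustrative):

```python
import math
import random
import statistics
from scipy import stats

def mean_within_precision(run_once, precision=0.025, confidence=0.95,
                          min_runs=5, max_runs=1000):
    """Repeat an experiment until the half-width of the Student's-t
    confidence interval is within `precision` of the sample mean,
    mirroring the methodology described above."""
    samples = [run_once() for _ in range(min_runs)]
    while len(samples) < max_runs:
        mean = statistics.mean(samples)
        sem = statistics.stdev(samples) / math.sqrt(len(samples))
        t = stats.t.ppf(0.5 + confidence / 2, df=len(samples) - 1)
        if t * sem <= precision * mean:
            return mean, len(samples)
        samples.append(run_once())
    return statistics.mean(samples), len(samples)

# Example with a low-variance synthetic "measurement"
random.seed(0)
mean, n = mean_within_precision(lambda: random.gauss(100.0, 1.0))
print(n >= 5)  # True
```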
Ranking PMCs Using Additivity Test
We study the additivity of PMCs offered by Likwid using a test consisting of two stages. In the first stage, we determine if the PMC is deterministic and reproducible.
In the second stage, we check if the PMC of compound application is equal to the sum of the values of corresponding PMC of base applications. A PMC must pass both stages to be called additive for a given compound application on a given platform. First, we collect the values of the PMCs for the base applications by executing them separately. Next, we execute the compound application and obtain its value of the PMC. If the PMC of the compound application is equal to the sum of the PMCs of the base applications (with a tolerance of 5.0%), we classify the PMC as potentially additive. Otherwise, it is non-additive.
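The decision rule of the second stage can be sketched in a few lines (the function name is illustrative):

```python
def is_additive(base_counts, compound_count, tolerance=0.05):
    """Second stage of the additivity test: the PMC of the compound
    application must equal the sum of the PMCs of the base applications
    within the given relative tolerance (0.05 = 5%)."""
    expected = sum(base_counts)
    if expected == 0:
        return compound_count == 0
    relative_error = abs(compound_count - expected) / expected
    return relative_error <= tolerance
```

For example, `is_additive([100, 200], 310)` is `True` (a 3.3% error), while `is_additive([100, 200], 400)` is `False` (a 33% error).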
For the experimental results, we prepare a dataset consisting of 60 compound applications composed from the base applications presented in Table 3. No PMC is found to be additive within the specified tolerance of 5%. If we increase the tolerance to 20%, 50 PMCs become additive; increasing the tolerance to 30% makes 109 PMCs additive. We observe that a PMC can be non-additive with an error as high as 3075%, and there are many PMCs where the error is over 100%.
Therefore, we conclude that all the PMCs fail the additivity test with the specified tolerance of 5% on current multicore platforms.
Evolution of Additivity of PMCs from Singlecore to Multicore Architectures
To identify the cause of this non-additivity, we perform an experimental study to observe the additivity of PMCs with different configurations of threads/cores employed in an application.
We choose three applications for this study: 1) MKL DGEMM, 2) MKL FFT, and 3) a naive matrix-vector (MV) multiplication. We perform the additivity test for these applications in four different core configurations (2-core, 8-core, 16-core and 24-core). In the 2-core configuration, the application is pinned to one core of each socket; in the 8-core configuration, to four cores of each socket; and so on. We design multiple compound applications from the chosen set of problem sizes. For each application and core configuration, we note the maximum percentage error for each PMC and count the number of non-additive PMCs that exceed the input tolerance of 5%. Figure 2 shows the increase in the non-additivity of PMCs as the number of cores is increased for DGEMM, FFT, and naive MV. For DGEMM, 51 PMCs are non-additive in the 2-core configuration; the number increases to 126 in the 24-core configuration. For FFT, the number increases from 61 to 146, and for naive MV from 22 to 58, between the 2-core and 24-core configurations. The minimum number of non-additive PMCs occurs in the 2-core configuration for each application.
Therefore, we conclude that the number of nonadditive PMCs increases with the increase in cores employed in an application execution because of severe resource sharing and contention.
Improving Prediction Accuracy of Energy Predictive Models
We select six PMCs common to the state-of-the-art models [13], [14], [15], [16], [17], [18]. The PMCs ({X 1 , · · · , X 6 }) are listed in the Table 4. They count floating-point and memory instructions and are considered to have a high positive correlation with energy consumption. They fail the additivity test for an input tolerance of 5%. X 6 is highly additive compared to the rest.
We build three types of linear regression models as follows:
• Type 1: models A 1 -G 1 with no restrictions on intercepts and coefficients.
• Type 2: models A 2 -G 2 whose intercepts are forced to zero.
• Type 3: models A 3 -G 3 whose intercepts are forced to zero and whose coefficients cannot be negative.
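The three model types can be sketched as follows. Types 1 and 2 are ordinary least-squares fits; for type 3 we use a simple projected-gradient solver as an illustrative stand-in for the penalized regression actually used in the paper, so the solver choice here is an assumption.

```python
import numpy as np

def fit_type1(X, y):
    """Type 1: ordinary least squares with a free intercept."""
    A = np.column_stack([np.ones(len(y)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1:]                    # (intercept, coefficients)

def fit_type2(X, y):
    """Type 2: least squares with the intercept forced to zero."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 0.0, coef

def fit_type3(X, y, steps=20000):
    """Type 3: zero intercept and non-negative coefficients, here solved
    by projected gradient descent on the least-squares objective (an
    illustrative stand-in for the penalized regression of the paper)."""
    lr = 1.0 / np.linalg.norm(X.T @ X, 2)       # safe step size (spectral norm)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * (X.T @ (X @ w - y))           # gradient step
        w = np.maximum(w, 0.0)                  # project onto w >= 0
    return 0.0, w
```

Given a training matrix of PMC counts `X` and measured dynamic energies `y`, each function returns an (intercept, coefficients) pair; type 3 guarantees a zero intercept and non-negative coefficients, which is what rules out negative energy predictions.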
Within each type t, A t employs all the PMCs as predictor variables. B t is based on five PMCs with the least additive PMC (X 4 ) removed. C t uses four PMCs with two most non-additive PMCs (X 2 , X 4 ) removed and so on until F t containing only the most additive PMC (X 6 ). G t uses three PMCs (X 4 , X 5 , X 6 ) with the highest correlation with dynamic energy consumption.
For constructing all the models, we use a training dataset of 277 points, where each point contains the dynamic energy consumption and the PMC counts for the execution of one base application from Table 3 with some particular input. For testing the prediction accuracy of the models, we construct a test dataset of 50 different compound applications. We used this division (277 points for training, 50 for testing) based on best practices and expert opinion in this domain. Table 5 summarizes the type 1 models. The salient observations are as follows: • The model intercepts are significant. In our theory of energy of computing, where we consider modelling of dynamic energy consumption, the intercepts are not present since they have no real physical meaning.
Consider the case where no application is executed. The values of the PMCs will be zero and therefore the models must output the dynamic energy consumption to be zero. The models however output the values of their intercepts as the dynamic energy consumption. This violates the energy conservation property in the theory. • A 1 has negative coefficients for PMCs, X 4 and X 6 . Models B 1 -D 1 have negative coefficients for PMC, X 6 . The negative coefficients in these models can give rise to negative predictions for applications where the counts for X 4 and X 6 are higher than the other PMCs. We illustrate this case by designing a microbenchmark that stresses specifically hardware components resulting in large counts for the PMCs with the negative coefficients.
Since, in our case, X 4 and X 6 count the division and floating-point instructions, our microbenchmark is a simple assembly language program that performs floating-point division operations in a loop. When run for forty seconds, the PMC counts for this application on our platform were: X 1 = 7022011, …
Consider, for example, X 4 in A 1 and C 1 . While it has a positive coefficient in A 1 , it has a negative coefficient in C 1 . Similarly, X 6 has a negative coefficient in A 1 and B 1 , whereas in F 1 it has a positive coefficient. We have found that the research works that propose linear models using these PMCs do not contain any sanity check on these coefficients. Therefore, we believe that using them in models without understanding the true meaning or the nature of their relationship with dynamic energy consumption can lead to serious inaccuracy.
The type 2 models are built using specialized linear regression, which forces the intercept to be zero. Table 6 contains their summary. All the models except E 2 and F 2 contain negative coefficients and therefore present the same issues that violate the energy conservation law.
The type 3 models are built using penalized linear regression via the R programming interface, which forces the coefficients to be non-negative. All the models of this type have zero intercepts and are summarized in Table 7. They incorporate basic sanity checks that disallow violations of the energy conservation property.
We will now focus on the minimum, average, and maximum prediction errors of type 3 models. They are (6.6%, 31.2%, 61.9%) for A 3 . Since the coefficients are constrained to be non-negative, X 6 ends up having a zero coefficient. We remove the PMC with the next highest non-additivity (X 4 ) and construct B 3 based on the remaining five PMCs. In this model, X 5 has a zero coefficient. Its prediction errors are (6.6%, 31.2%, 61.9%). We then remove the PMC with the next highest non-additivity (X 2 ) from the list of four and build C 3 based on the remaining PMCs. Its prediction errors are (2.5%, 25.3%, 62.1%). Finally, we build F 3 with just one most additive PMC (X 6 ). Its prediction errors are (2.5%, 68.5%, 90.5%). The prediction errors of RAPL are (4.1%, 30.6%, 58.9%). The prediction errors of G 3 are (2.5%, 50%, 77.9%).
We derive the following conclusions:
• As we remove non-additive PMCs one by one, the average prediction accuracy of the models improves significantly. E 3 , with the two most additive PMCs, is the best in terms of average prediction accuracy. We therefore conclude that employing non-additive PMCs can significantly impair the prediction accuracy of models and that inclusion of highly additive PMCs improves it drastically.
• We highlight two examples demonstrating the dangers of a pure fitting exercise (for example, applying linear regression) without understanding the true physical significance of a parameter.
– The PMC X 6 , which has the highest significance in terms of contribution to dynamic energy consumption (highest additivity), ends up having a zero coefficient in A 3 , C 3 , D 3 , and G 3 . D 3 effectively has only two PMCs, X 1 and X 5 . The linear fitting method picks X 5 instead of X 6 , thereby impairing the prediction accuracy of D 3 (and also G 3 ). This is because X 5 and X 6 have a high positive correlation between themselves, but the fitting method does not know that X 6 is highly additive.
– F 3 , containing the one PMC with the highest additivity, X 6 , has the lowest prediction accuracy. The linear fitting method is unable to find a good fit.
• The average prediction accuracy of RAPL is equal to that of A 3 and B 3 , which contain the highest number of non-additive PMCs. If the model behind RAPL were disclosed, one could check how much its prediction accuracy can be improved by removing non-additive PMCs and including highly additive PMCs.
• G 3 fares worse than RAPL and A 3 even though it contains PMCs that are highly correlated with dynamic energy consumption. E 3 , with the two most additive PMCs, has better average prediction accuracy than G 3 , which demonstrates that additivity is a more important criterion than correlation.
Figure 3 presents the percentage deviations of the dynamic energy predictions of the type 3 models (Table 7) from the system-level physical measurements obtained using HCLWattsUp (WattsUp Pro power meters) for different compound applications. RAPL, A 3 , and G 3 exhibit higher average percentage deviations than the best model, E 3 . While the RAPL distribution is normal, A 3 and G 3 demonstrate non-normality, suggesting systematic (not fully random) deviations from the average.
Study of Dynamic Energy Optimization using IntelRAPL and System-level Physical Measurements
In this section, we demonstrate that using inaccurate energy measuring tools in energy optimization methods may lead to significant energy losses.
We study the optimization of a parallel matrix-matrix multiplication application for dynamic energy using two measurement tools: IntelRAPL [10], which is a popular mainstream tool, and system-level physical measurements using power meters (HCLWattsUp [11]), which we believe are accurate.
For this purpose, we employ a data-parallel application that uses Intel MKL DGEMM as building block. The experimental platform consists of two servers, HCLserver1 (Table 2) and HCLserver2 (Table 8). To find the partitioning of matrices between the servers that minimizes the dynamic energy consumption, we use a model-based data partitioning algorithm, which takes as input dynamic energy functional models of the servers. We compare the total dynamic energy consumptions of the solutions returned when the input dynamic energy models of the servers are built using IntelRAPL [10] and HCLWattsUp [11]. We follow the same strict experimental methodology as in the previous experimental setup to make sure that our experimental results are reliable.
The parallel application computes a matrix product of two dense square matrices A and B of sizes N × N and is executed using two processors, HCLserver1 and HCLserver2. The matrix A is partitioned between the processors as A 1 and A 2 of sizes M × N and K × N where M + K = N . Matrix B is replicated at both the processors. Processor HCLserver1 computes the product of matrices A 1 and B and processor HCLserver2 computes the product of matrices A 2 and B. There are no communications involved.
The decomposition of the matrix A is computed using a model-based data partitioning algorithm. The inputs to the algorithm are the number of rows of the matrix A, N , and the dynamic energy consumption functions of the processors, {E 1 , E 2 }. The output is the partitioning of the rows, (M, K). The discrete dynamic energy consumption function of processor P i is given by E i = {e i (x 1 , y 1 ), ..., e i (x m , y m )}, where e i (x, y) represents the dynamic energy consumption during the multiplication of two matrices of sizes x × y and y × y by processor i. Figure 4 shows the discrete dynamic energy consumption functions of IntelRAPL and HCLWattsUp for the processors HCLserver1 and HCLserver2. The dimension y ranges from 14336 to 16384 in steps of 512. For HCLserver1, the dimension x ranges from 512 to y/2 in increments of 512. For HCLserver2, the dimension x ranges from y − 512 to y/2 in decrements of 512.
The main steps of the data partitioning algorithm are as follows:
1. Plane intersection of dynamic energy functions: the dynamic energy consumption functions {E 1 , E 2 } are cut by the plane y = N , producing two curves that represent the dynamic energy consumption against x given that y is equal to N .
We use four workload sizes {14336, 14848, 15360, 16384} in our test data. For each workload size, we determine the workload distribution using the data partitioning algorithm employing the model based on IntelRAPL. We execute the parallel application using this workload distribution and determine its dynamic energy consumption, which we denote e rapl . We then obtain the workload distribution using the data partitioning algorithm employing the model based on HCLWattsUp, execute the parallel application using this workload distribution, and determine its dynamic energy consumption, denoted e hclwattsup . We calculate the percentage loss of dynamic energy from using IntelRAPL instead of HCLWattsUp as (e rapl − e hclwattsup )/e hclwattsup × 100. The losses for the four workload sizes are {65%, 58%, 56%, 56%}.
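The partition selection and the loss computation can be sketched as follows; the energy functions `e1` and `e2` are hypothetical callables standing in for the measured discrete dynamic energy functions of the two servers.

```python
def best_partition(N, e1, e2, granularity=512):
    """Choose the row split (M, K), M + K = N, that minimises the total
    dynamic energy e1(M, N) + e2(K, N) over discrete candidate splits.

    e1, e2 -- callables e(x, y) giving the dynamic energy of multiplying
              an x-by-y matrix by a y-by-y matrix on each server
              (stand-ins for the measured discrete energy functions).
    """
    best = None
    for m in range(granularity, N, granularity):
        total = e1(m, N) + e2(N - m, N)
        if best is None or total < best[0]:
            best = (total, m, N - m)
    return best  # (energy, M, K)

def percentage_loss(e_rapl, e_hclwattsup):
    """Dynamic energy lost by following the RAPL-based partition instead
    of the partition based on system-level physical measurements."""
    return (e_rapl - e_hclwattsup) / e_hclwattsup * 100.0
```

With two different input energy models, `best_partition` generally returns two different splits; executing the application under each split and comparing the measured energies with `percentage_loss` reproduces the comparison described above.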
RELATED WORK
This section presents a brief literature survey of some important tools widely used to obtain PMCs, notable research on energy predictive models, and research works that provide a critical review of PMCs.
Tools to obtain PMCs. Perf [19] can be used to gather the PMCs for CPUs in Linux. PAPI [20] and Likwid [9] allow obtaining PMCs for Intel and AMD microprocessors. Intel PCM [21] gives PMCs of core and uncore components of an Intel processor. For Nvidia GPUs, CUDA Profiling Tools Interface (CUPTI) [22] can be used for obtaining the PMCs.
Notable Energy Predictive Models for CPUs. Initial models correlating PMCs to energy values include [16], [17], [23], [24], [25], [26], [27], [28]. Events such as integer operations, floating-point operations, memory requests due to cache misses, component access rates, instructions per cycle (IPC), CPU/disk and network utilization, etc., were believed to be strongly correlated with energy consumption. Simple linear models have been developed using PMCs and correlated features to predict the energy consumption of platforms. Rivoire et al. [29], [30] study and compare five full-system real-time power models using a variety of machines and benchmarks. They report that the PMC-based model is the best overall in terms of accuracy since it accounts for the majority of the contributors to the system's dynamic power. Other notable PMC-based linear models are [14], [18], [31], [32], [33], [34], [35]. Rotem et al. [10] present RAPL, introduced in Intel Sandy Bridge, which predicts the energy consumption of core and uncore components (QPI, LLC) based on some PMCs (which are not disclosed). Lastovetsky et al. [36] present an application-level energy model where the dynamic energy consumption of a processor is represented by a function of problem size.
Critiques of PMCs for Energy Predictive Modelling. Some attempts where poor prediction accuracy of PMCs for energy predictive modeling has been critically examined include [26], [37], [38], [39]. Researchers highlight the fundamental limitation to obtain all the PMCs simultaneously or in one application run and show that linear regression models give prediction errors as high as 150%. The property of additivity of PMCs is first introduced in [40].
CONCLUSION
Energy predictive modelling based on PMCs is now the leading method for predicting the energy consumption of an application execution. We summarized the assumptions behind the existing models and used a model-theoretic approach to formulate their assumed properties in a mathematical form. We extended the formalism by adding properties, heretofore unconsidered, that are basic implications of the universal energy conservation law. The extended formalism forms our theory of energy of computing.
We considered practical implications of our theory and applied them to improve the prediction accuracy of the state-of-the-art energy predictive models. The first implication concerns studying the additivity of model parameters. We studied the additivity of PMCs on a modern Intel platform. We showed that a PMC can be non-additive with an error as high as 3075% and that there are many PMCs where the error is over 100%.
We selected six PMCs which are common in the state-of-the-art energy predictive models and which are highly correlated with dynamic energy consumption. We constructed seven linear regression models with these PMCs as predictor variables that pass the constraints. We demonstrated that the prediction accuracy of the models improves as we remove highly non-additive PMCs from them one by one. We also highlighted the drawbacks of a pure fitting exercise (for example, applying linear regression) without understanding the true physical significance of a parameter. We showed that linear regression methods select PMCs based on high positive correlation with dynamic energy consumption and ignore PMCs that have high significance in terms of contribution to dynamic energy consumption (due to high additivity), thereby impairing the prediction accuracy of the models.
Finally, we studied the optimization of a parallel matrix-matrix multiplication application for dynamic energy using two measurement tools: IntelRAPL [10], which is a popular mainstream tool, and power meters (HCLWattsUp [11]), which provide accurate system-level physical measurements. We demonstrated that a significant amount of energy (up to 67% for the applications used in the experiments) is lost by using IntelRAPL, most likely because it does not take into account the energy conservation properties (we found no explicit evidence that it does).
|
2019-07-05T12:56:57.000Z
|
2019-07-05T00:00:00.000
|
{
"year": 2019,
"sha1": "6f93cd7ab35faeb5aea1379b6e01a1a176bce4bd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b1f044a0fd5f5dbd0419af93d2a713be7c5d3de3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
}
|
74691590
|
pes2o/s2orc
|
v3-fos-license
|
Comparison of pulmonary functions of thalassemic and of healthy children
Objectives The aim of this study was to compare some pulmonary functions of thalassemic patients and those of normal children. Factors correlated with lung dysfunction were assessed. Methods This cross-sectional study compared some pulmonary functions of thalassemic patients with those of healthy children. The study was performed in the Department of Child Health, Cipto Mangunkusumo Hospital, Jakarta, Indonesia. Pre- and post-transfusion hemoglobin levels of the thalassemic subjects were determined. Other data such as chelation therapy and serum ferritin levels were also obtained. Both thalassemic and control subjects underwent routine physical examinations and lung function tests using an electronic spirometer. Spirometry was repeated three times for each subject, and only the best result was recorded. Results Sixty-three thalassemic patients were enrolled, consisting of 32 male and 31 female subjects. Healthy subjects consisted of 31 males and 31 females. Most thalassemic patients (46/63) were found to have lung function abnormalities. This was significantly different from control subjects, of whom most (39/62) had normal lung function. Restrictive lung function abnormality was the most common (42/63) observation documented. Serum ferritin levels were obtained from 28 male and 29 female thalassemic subjects. There was no correlation between percentage of predicted forced vital capacity and serum ferritin levels, whether in male (r=0.191; P=0.967) or female (r=-0.076; P=0.695) thalassemic subjects. Conclusion Thalassemic patients have significantly lower lung function than healthy children. More thalassemic patients had lung function abnormalities compared to healthy children. Restrictive dysfunction was the most common finding in the thalassemic group. No correlation was found between lung function and serum ferritin levels. [Paediatr Indones 2005;45:1-6]
Keywords: pulmonary function, thalassemia, spirometry, serum ferritin level, restrictive lung dysfunction

Thalassemia refers to a heterogeneous group of heritable hypochromic anemias of various degrees of severity; [1][2][3][4][5][6][7][8][9] it is considered to be the most prevalent genetic disorder in the world. 3 Transfusions of 15-20 ml/kg of packed cells are usually required every 4-5 weeks. 9 Unless an adequate chelating agent is prescribed, hemosiderosis is an unavoidable consequence of long-term transfusion therapy, because each 500 ml of blood delivers to the tissues about 200 mg of iron that cannot be excreted by physiologic means. Besides the heart, liver, and pancreas as the target organs most frequently involved, abnormalities of lung mechanics have been reported by almost all studies of patients with thalassemia. [10][11] There is no consensus on the nature of lung impairment in thalassemic patients. Most studies found that restrictive dysfunction is the predominant pattern of lung function abnormality in thalassemic patients, 1-2,4-7 although some others found obstructive lung dysfunction to be the major pattern. [12][13][14][15] Furthermore, the relationship between changes in the lung mechanics of transfusion-dependent thalassemic patients and iron burden or overload remains unclear. Some studies find a significant inverse correlation between total lung capacity and iron burden 1-3 as well as between total lung capacity and age. 1,2 Others find that neither age 4 nor iron load 4,5,8 correlated with pulmonary function. To our knowledge, no data on this issue is available in Indonesia.
The aim of this study was to compare the pulmonary function of thalassemic patients to that of normal children. Factors correlated with lung dysfunction, including serum ferritin level, were also assessed.
Methods
This was a cross-sectional study comparing thalassemic patients with healthy children. Patients enrolled in the study were those visiting the Thalassemia Center, Cipto Mangunkusumo Hospital, Jakarta during the period of the study. The study protocol was approved by the Committee of Medical Research Ethics of the Medical School, University of Indonesia. Subjects were patients with either homozygous β-thalassemia or compound hemoglobin E-β thalassemia.
Two calculations were performed to determine sample size. The first was in accordance to the first objective of this study, which was to compare pulmonary function between thalassemic patients and normal children. The primary variable to be investigated in this study was forced vital capacity (FVC), therefore calculation of sample size was based on this variable, using the formula to calculate sample size of two independent groups. 16 With level of significance (α) of 0.05, power of 0.80, standard deviation of two groups of 14%, 4,5 and clinically important difference of 10%, the sample size was found to be 31. As this study differentiated between male and female subjects, the total sample size was 62 subjects for the thalassemic group and 62 for the control group. The second calculation was based on the objective of evaluating the correlation between serum ferritin level and pulmonary function in thalassemic patients. For this purpose, sample size was determined using the sample size table for correlation coefficient. 16 With a correlation coefficient (r) of 0.6; 2 level of significance (α) of 0.05; and power of 0.20, the sample size was found to be 19. 16 We decided to comply with the larger sample requirement as determined in the first calculation.
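The first calculation follows the standard per-group formula for comparing two independent means, n = 2σ²(z₁₋α/₂ + z₁₋β)²/d². The sketch below, with the standard normal quantiles 1.96 (two-sided α = 0.05) and 0.8416 (power 0.80), reproduces the reported sample size:

```python
import math

def sample_size_two_means(sd, diff, z_alpha=1.96, z_beta=0.8416):
    """Per-group sample size for comparing two independent means.

    sd       -- common standard deviation of the outcome (here 14%)
    diff     -- clinically important difference to detect (here 10%)
    z_alpha  -- standard normal quantile for a two-sided alpha of 0.05
    z_beta   -- standard normal quantile for a power of 0.80
    """
    n = 2.0 * sd ** 2 * (z_alpha + z_beta) ** 2 / diff ** 2
    return math.ceil(n)
```

`sample_size_two_means(14, 10)` gives 31, the per-group sample size used in the study.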
To be enrolled in this study, the thalassemic patients were required to be at least 6 years and not older than 12 years of age. Informed consent had to be obtained from their parents. Patients were recruited if they were clinically stable and had just received their latest regular transfusion to achieve a minimum hemoglobin level of 9 g/dl at the time of the study.
They were excluded if on physical examination they were found to have cardiac dysfunction (i.e., cardiac failure) or an obstructive lung disorder (i.e., asthma).
Data regarding identity, date of birth, and history of illness, including the age at which the diagnosis of thalassemia was established and the use of chelation therapy, were obtained from parents and medical records. Chelation therapy was considered adequate if it involved intravenous deferoxamine infusions >3 times a week. 17 When information on chelation therapy from the history was discordant with medical record data, or when medical record data were incomplete, the information from the parents was recorded. Transfusion years were calculated by subtracting the age at the time of diagnosis from the current age. Routine physical examinations were performed and the results recorded. Data on pre-transfusion hemoglobin (Hb) level and the most recent serum ferritin level were also taken. After subjects had their regular transfusion, venous blood samples were taken to obtain their post-transfusion Hb level. Only those who achieved a post-transfusion Hb level of 9 g/dl or higher were included. Patients were allowed to rest for 30 minutes. Each subject then performed lung function tests by means of an electronic spirometer (AS-7). 18 Control subjects consisted of 6 to 12 year old children attending SDN Pegangsaan 01 elementary school, Jakarta, who did not have any sign or symptom of respiratory illness, cardiac failure, or any other significant health problem. Subjects were selected at random from this accessible population. History of illness was taken, routine physical examination was performed, and the results were recorded. These subjects then performed lung function tests by means of the same AS-7 electronic spirometer.
Each subject performed spirometry three times, of which only the best result was recorded. 19 Data recorded during spirometry were forced vital capacity (FVC), one-second forced expiratory volume (FEV 1 ), ratio of FEV 1 to FVC (FEV 1 /FVC), peak expiratory flow (PEF), V 25 , and V 50 . All data were expressed as percentage of the predicted normal values according to age, sex and present height (% predicted). FVC and FEV 1 values of less than 80% of the predicted normal values were classified as abnormal. 16 The means (SD) of these values were calculated.
Spirometry results were categorized as consistent with the normal pattern (normal FVC and FEV 1 /FVC ratio), restrictive pattern (reduced FVC, normal or elevated FEV 1 /FVC ratio), or obstructive pattern (reduced FEV 1 /FVC ratio, normal or reduced FVC). [18][19][20] The results obtained from spirometry were also determined and confirmed by plotting spirometry values to a pentagram, calculating the obstruction index (OI), and generating a flow-volume (F-V) curve. An F-V curve was said to exhibit a restrictive pattern if it was similar in shape with, but smaller than, the normal curve. The curve was considered as showing an obstructive pattern if the portion of the curve after peak flow took on a concave or "scooped out" shape. [18][19][20] Subjects were classified as having normal lung function, restrictive, obstructive, or combined (restrictive and obstructive) pattern lung dysfunction.
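The classification rules above can be sketched as a small decision function. The 80% cutoff for FVC (% predicted) follows the study; the cutoff for the FEV1/FVC ratio is an illustrative assumption, since the study does not state it numerically.

```python
def classify_spirometry(fvc_pct_pred, fev1_fvc_pct,
                        fvc_cutoff=80.0, ratio_cutoff=80.0):
    """Classify a spirometry result into the study's patterns.

    fvc_pct_pred -- FVC as a percentage of the predicted normal value
    fev1_fvc_pct -- FEV1/FVC ratio expressed as a percentage
    The 80% FVC cutoff follows the study; the ratio cutoff is an
    illustrative assumption.
    """
    reduced_fvc = fvc_pct_pred < fvc_cutoff
    reduced_ratio = fev1_fvc_pct < ratio_cutoff
    if reduced_fvc and reduced_ratio:
        return "combined"
    if reduced_fvc:
        return "restrictive"
    if reduced_ratio:
        return "obstructive"
    return "normal"
```

For example, a reduced FVC with a preserved FEV1/FVC ratio is classified as restrictive, matching the study's definitions.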
Statistical analyses used were Student's t-test for comparing means between the two groups and correlation coefficient for determining the correlation between lung function (FVC) and serum ferritin. The level of significance was taken at P<0.05. Data collected were processed using SPSS 11.0 for Windows.
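The two analyses can be illustrated with standard-library implementations of Pearson's r and the two-sample Student's t statistic; this sketch only mirrors the computations the study delegated to SPSS.

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

def t_statistic(a, b):
    """Student's t statistic for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (statistics.mean(a) - statistics.mean(b)) / \
           math.sqrt(sp2 * (1 / na + 1 / nb))
```

The t statistic is then compared against the Student's t distribution with na + nb − 2 degrees of freedom to obtain the P value at the 0.05 significance level.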
Results
There were 63 thalassemic patients enrolled in this study, comprising 32 males and 31 females. Healthy subjects consisted of 31 males and 31 females. The characteristics of the subjects are shown in Table 1. All thalassemic subjects had sufficient post-transfusion hemoglobin levels. The total amount of transfusion received by each subject could be inferred from transfusion years, as all patients at the Thalassemia Center received transfusions regularly every 4 weeks unless they looked extremely pale or had any other disorder requiring extra transfusion.
Serum ferritin values were available from 28 out of 32 male patients, of whom all had higher values than normal, ranging from 671 to 7992 ng/ml (normal reference value: 30-400 ng/ml). Out of 31 females, serum ferritin values were available from 29, of whom only one had a normal value (151 ng/ml; reference value: 20-300 ng/ml). The other female patients had elevated serum ferritin, ranging from 1095 to 13,807 ng/ml (normal reference value: 13-150 ng/ml).
Among the male thalassemic subjects, none had received adequate chelation therapy. Only two female subjects had received adequate chelation therapy, both of whom had normal lung function values. In most patients, chelation therapy was done only once a month, 1-5 days following transfusion. Patients reported that the sparseness of chelation therapy was because they did not have their own syringe pumps and had to borrow them from the Thalassemia Center.
Spirometry results are reported in Tables 2 and 3. On average, FVC and FEV 1 of thalassemic patients were significantly lower than predicted values, whereas FEV 1 /FVC and PEF were within normal limits. On an individual basis, FVC and FEV 1 were less than 80% of the predicted values in 27 and 21 male patients, respectively, and in 20 and 16 female patients, respectively. Forced vital capacity (FVC) was significantly lower in both male and female thalassemic subjects compared to controls (P=0.0001 and 0.018, respectively). Other lung function measurements in female thalassemic subjects did not differ significantly from those of female controls. In male thalassemic patients, 1-second forced expiratory volume (FEV 1 ), peak expiratory flow (PEF) and V 50 were significantly lower than in male controls (P=0.0001).
Analysis of the lung function test results showed significant differences between male patients and their controls (P=0.028) and between female patients and their controls (P=0.009). The most common type of lung dysfunction in both male and female patients was the restrictive type. When all subjects were analyzed together regardless of sex (Table 3), significant differences between thalassemic and control subjects were found in FVC (% predicted), FEV1 (% predicted), FEV1/FVC, and overall lung function test results.
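The classification used above (restrictive when FVC falls below 80% of the predicted value, obstructive when the FEV1/FVC ratio is reduced, combined when both apply) can be sketched as follows. This is an illustrative sketch only: the cutoff values are conventional defaults, not the paper's stated criteria.

```python
def classify_lung_function(fvc_pct_pred, fev1_fvc_ratio,
                           restrictive_cutoff=80.0, obstructive_cutoff=0.70):
    """Classify one spirometry result as normal / restrictive /
    obstructive / combined. Cutoffs are illustrative assumptions,
    not the exact criteria used in the study."""
    restrictive = fvc_pct_pred < restrictive_cutoff   # FVC below 80% predicted
    obstructive = fev1_fvc_ratio < obstructive_cutoff  # reduced FEV1/FVC ratio
    if restrictive and obstructive:
        return "combined"
    if restrictive:
        return "restrictive"
    if obstructive:
        return "obstructive"
    return "normal"
```

For example, a patient with FVC at 70% of predicted but a preserved FEV1/FVC ratio would be labeled restrictive, the pattern most often seen in the thalassemic subjects.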
Forced vital capacity (% predicted) was not related to age in either male or female thalassemic subjects. Serum ferritin levels were obtained from 28 male and 29 female thalassemic subjects. No correlation was found between forced vital capacity (% predicted) and serum ferritin levels in either male (r=-0.191; P=0.967) or female (r=-0.076; P=0.695) subjects.
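The correlation values quoted here are Pearson product-moment coefficients. A minimal computation of the coefficient can be sketched as below; this is only an illustration of the statistic on hypothetical data, not the authors' actual analysis code.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length numeric sequences (e.g., FVC % predicted vs serum
    ferritin). Purely illustrative; the study data are not included."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near zero, as reported for both sexes, indicates no linear relationship between lung function and the single ferritin measurement.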
Discussion
Most thalassemic subjects (46/63) were found to have lung function abnormalities. This was significantly different from control subjects, most of whom (39/62) were found to have normal lung function. This significant difference was present in both males and females. Among the thalassemic subjects, restrictive lung function abnormality was the most common type, a finding consistent with those of Carnelli et al, 1 Factor et al, 2 Tai et al, 5 Luyt et al, 6 and Filosa et al. 7 What is unique to our study is that it was performed exclusively in children, while the studies mentioned earlier were all performed in both children and adults, with a considerably older mean age. Therefore, our study shows that lung dysfunction in thalassemic patients occurs from childhood. The finding of 19 control subjects with restrictive dysfunction drew our attention. Children in the control group live in a highly populated urban slum area. This may add confounding factors to their lung dysfunction, which were not investigated further in our study. Nevertheless, we excluded the probability of errors in lung function maneuvers, as the tests were performed under optimal conditions.
Two thalassemic patients were found to have obstructive lung dysfunction, and two others had combined (restrictive and obstructive) lung abnormalities. In control subjects, there were four children with obstructive lung dysfunction and none with a combined disorder. We attempted to exclude subjects with asthma by careful history taking and physical examination. These findings may be due to limitations in screening, which would explain why the obstructive pattern was found in an almost equal proportion of thalassemic and control subjects. Another possible explanation would be consistent with the studies performed by Santamaria et al, [12][13] Keens et al, 14 and Hoyt et al 15 which found the obstructive pattern to be the most common type of lung dysfunction in thalassemic patients. The mechanism of airway obstruction in thalassemia is unclear. Airway reactivity and a disproportionate growth of the alveolar mass relative to the airways and chest cage have been proposed to be involved in the complex mechanism. 13 The pathogenesis of restrictive lung dysfunction in thalassemic patients has been associated with hemosiderosis. [1][2]7 Serum ferritin levels, which reflect iron overload, were abnormal in all but one female thalassemic subject, who received regular chelation therapy (5 times a week). However, no correlation was found between serum ferritin levels and any of the lung function values. This finding supports those of Tai et al, 5 and Luyt et al, 6 but contrasts with the studies by Carnelli et al, 1 Factor et al, 2 and Filosa et al. 7 It has been suggested that the duration of iron overload may be more important than the actual amount of iron provided through transfusions. 2 Moreover, serum ferritin levels change during the process of chelation, and do not necessarily reflect total body iron stores. 2
Therefore, a cross-sectional study such as this one, where only the latest serum ferritin level was obtained, lacked the ability to demonstrate the relationship between lung dysfunction and iron overload. A complex mechanism in addition to iron overload has been proposed to play a role in the development of lung dysfunction in thalassemic patients, such as transfusion-dependent chronic fluid accumulation. 13 Chelation therapy was a confounding factor in this study. Most patients in our study did not receive adequate chelation therapy for financial reasons. On the other hand, most patients in previous studies had regular and adequate chelation therapy but were still found to have lung dysfunction. [1][2][5][6][7]11 This may lead us to question the effectiveness of chelation therapy. Nevertheless, the lung function values observed in our patients were lower than those in other published studies, which may indirectly reflect the benefit of chelation therapy and the role of iron overload.
We conclude that thalassemic patients have significantly lower lung function compared to healthy children, with restrictive dysfunction being the most common type of lung dysfunction in thalassemic patients. We have found no correlation between lung function and serum ferritin values; however, we still suggest that iron overload plays a role in the mechanism of lung abnormalities. No other factor that may contribute to the development of lung dysfunction in thalassemic patients has been found in this study. Further studies concerning the cause and effect of iron overload, or other mechanisms such as chronic fluid accumulation, are needed.
Experiences of enacting critical secondary school history pedagogy in rural Zimbabwe
Abstract Previous studies have demonstrated the importance of a critical approach in teaching and learning, especially for candidates dealing with complex subjects such as History. This study corroborates research in this field by reporting on the experiences of a teacher who taught secondary school History in rural areas. The findings from a teaching practice at one secondary school in Mwenezi District reveal that there are peculiar issues that are common among rural school learners. The major objective of the study is to give a critical reflection on the in-service teacher education programme for teachers deployed in under-resourced and remote secondary schools. Through action research, the paper demonstrates that teachers face distinct challenges of non-compliance and resistance in enacting a critical pedagogy. Engaging with history and historical evidence is made complex by material shortages in schools and skill gaps among the learners. Whilst exposing rural learners to sophisticated historical narratives is the rationale behind the implementation of a critical pedagogy, there are structural challenges evidenced by stringent syllabus requirements, time constraints and the nature of learners found in the schools. The paper therefore recommends that there should be some flexibility in the history syllabus to allow learners and teachers to fully engage with historical material. Resource mobilisation is critical, and schools need to support teachers who intend to improve rigour and discourse analysis in the subject.
PUBLIC INTEREST STATEMENT
This study examines the experiences of a teacher doing action research as part of professional development in teaching secondary school history in a remote, rural and under-resourced district in Zimbabwe. It also explores the viability of adopting a critical pedagogy methodology as a teaching strategy for non-professional students of history and upcoming historians amid increasing resistance from school authorities, syllabus requirements and the learners themselves. Using Budirirai High School as a case study, the study reveals that this teaching strategy is rich and enables critical engagement with texts, articles and other sources of history. It shows that the methodology empowers the learners so that they are able to question the status quo and so-called absolute historical facts. The paper recommends that teachers and learners should strive to engage with texts and reconfigure the understanding of history.
Introduction
Critical pedagogy is a teaching philosophy that invites educators to encourage students to critique structures of power and oppression (Shulman). It is rooted in critical theory, which involves becoming aware of and questioning the societal status quo (Jennings et al., 2006). In critical pedagogy, a teacher uses his or her own enlightenment to encourage students to question and challenge inequalities that exist in families, schools, and societies (Alsubaie, 2016). Such a teaching philosophy is critical in the discipline of history, where nuanced criticism is the lifeline of history teaching. A cursory reflection on the key tenets of critical pedagogy is important for this study, which looks at the implementation of such pedagogy in rural secondary schools. Table 1 gives a summary of the respondents' preferences with regard to teaching methods.
Teaching history is understood to be an engaging process that calls for both teachers and learners to be co-creators of the learning environment (Dewey, 1938). There are calls to improve engagement levels in order to reach a stage where the class will be doing the discipline of history. Accordingly, teaching and learning is seen as both problem posing and problem solving (Liljedahl et al., 2016). Seixas (1999) notes that there are two closely related aspects of "doing the discipline" of history. The first is the critical reading of texts, both primary sources and secondary accounts of the past. Whilst this is plausible, there are challenges in implementing critical reading of primary and secondary sources in contexts where resources are limited. Therefore, this study looks at the practicality of implementing a critical pedagogy in the teaching of secondary school learners in remote areas of Mwenezi District.
History has been treated as a dangerous subject over the years and all over the world (Billington, 1966), especially in the United States of America and Europe. The interest of the state in what is taught is not confined to the Western hemisphere. In Zimbabwe, the state, through the curriculum development unit (CDU), designs the curriculum as well as the subject syllabi (Chitate, 2005). Teachers are considered voiceless implementers of educational policies and materials prepared by others (Alsubaie, 2016). This has seen the teaching of so-called patriotic history. Patriotic history is a much narrowed-down version of nationalist history. It focuses on the three "revolutions": 1896, the guerrilla war and the "third chimurenga" of land redistribution (Ranger, 2004). It divides the nation into "patriots" and "sell-outs". The enactment of a critical pedagogy is a paradigm shift from the traditional teaching strategies that have been lambasted by curriculum theorists and revisionist scholars.
Whilst studies in Canada and the United States of America have demonstrated that critical pedagogy can be implemented even at junior levels of learning, it is important to give a contextual reflection of what is being experienced in Zimbabwe (Parkes, 2007). As highlighted above, learners in both rural and urban schools have not been exposed to critical pedagogy, but this study puts to the test the applicability of the concepts learned from a graduate diploma course in pedagogy. It is important to note that Seixas (1999) stresses that teachers can engage in the discipline of history through reading, selecting, and editing texts that offer students accounts of the past. Yet there are shortages of textbooks in rural schools and most learners fail to get their own copies. Therefore, this paper foregrounds that there is a need to improve resource availability so that teachers and learners can construct an account of the past for their mutual learning.
Understanding the efficacy of this practice is important for an in-service teacher undergoing training in teaching methods. This is the reason why previous researchers in this field emphasise that teachers and learners need to approach history instruction and learning with their critical minds (Clark, 2006; Darling-Hammond et al., 2020). This can be done by reworking, analysing and interpreting traces and accounts of the past to construct narratives that are contextually relevant to the learners and their worldview (Cowgill & Waring, 2017; Seixas, 1999). These skills need to be imparted to the learners as they begin their secondary school education. This enables them to think independently, criticise texts and formulate individual opinions that are not imposed by the teacher (Shor & Freire, 1987). This study is significant in that it sheds light on the particular experiences of a teacher undergoing in-service training in teaching methodologies. The fact that the teaching practice happened in a remote and rural context puts to the test the efficacy of implementing a critical pedagogy in rural areas. It is also important to understand that the learners were initiated into these concepts having done Zimbabwe Junior Certificate (ZJC) history for 2 years. This study is guided by the overall question: what strategies can be implemented by in-service teachers to overcome challenges associated with trying to enact critical pedagogy for secondary school history in under-resourced and remote areas?
The results of the paper also contribute to the "scholarship of teaching, especially with regard to improving pedagogical content knowledge, integrating disciplinary inquiry into instruction, and engaging teachers and students in critical pedagogy" (Manfra, 2019). Whilst neither rural nor urban schools are implementing critical pedagogy, the study seeks to assess the implications of this approach in a resource-constrained environment. In doing so, it attempts to answer the question of whether the implementation of critical pedagogy improves learning for young learners in secondary schools. Thus, the study foregrounds the need to adopt a paradigm shift in the teaching and learning of history in rural schools that face challenges of limited resources.
Literature review: educators' malpractice in stifling critical pedagogy
Previous studies have noted the importance of adopting a critical pedagogy approach in the teaching of secondary school history (Cranston & Janzen, 2017; Failler, 2015; Parkhouse, 2016). However, it is noted that many classroom practitioners continue to implement traditional teaching methodologies, focusing on meeting the requirements of the public examinations (Kellaghan & Greaney, 2020). A literature survey on the implementation of critical pedagogy showed that many researchers agree that the teaching strategy is effective in emancipating students. Wineburg and Wilson (1991) observed that it improves rigour in teaching and learning. They suggest that choices of topics that have historical significance to students, as well as knowledge of students' capacity for understanding difference, are critical for the teacher. It is also emphasised that the selection of documents to be studied should be appropriate for students' levels of interest and understanding (Wineburg & Wilson, 1991). Teachers' knowledge of their students is obviously crucial in dealing with concerns about the implementation of nuanced learning in history. This observation is imperative in helping teachers to foreground critical pedagogy in their classes, especially in rural contexts.
In the interests of teaching appropriate history skills, teachers should expose their learners to a process of constructing warranted historical accounts so that students can arrive at their own understandings of the past through processes of critical inquiry (Darling-Hammond et al., 2020;Seixas, 1999). It is imperative for teachers undergoing in-service training to equip their learners with analytical skills so that they would go beyond the simple acceptance of teacher's (or textbook's) account (Clark, 2006;Omar, 2014). This process is important in that it makes both learners and teachers subject to criticism, thereby making the learning process interesting rather than rudimentary.
Tenets of a critical pedagogy in history
For effective teaching, there is a need to ensure that the activities done in class promote active participation and are learner-centred. Such learning activities should constitute the delivery methods and should be open ended, and both learners and teachers need to be prepared for matters that arise in the class (Davies & Sinclair, 2012). These activities should be epitomised by learning by doing. This is premised on the decolonial foundations of learning, where there are no canonised formats that are seen as epistemic and unchangeable. Historical pedagogy means leading students through the processes and making them question whatever decision is made in class (Bhurekeni, 2020). It is argued that without such activities, there can be no critical historical knowledge at all and learning will be confined to traditional practices.
That underlying meanings are hidden in the images and texts of history sources gives credence to the critical discourse that historians need to unpack in order to get to the bottom of issues. This is why adopting the argument that learners need to problematize their everyday experience is essential. This critical understanding empowers students to develop the courage to participate in their self-formation and liberates them from the shackles of the four corners of the classroom. In this light, learners should be taught to take risks and pose challenges to those in power. The authority used in the classroom should not come from the teacher alone, because the learners are part of the problem-solving dimension of learning.
Challenges in implementing critical pedagogy include the limited availability of television, videos, movies and commercials that are regarded as authentic materials. These are considered representative of the culture that is to be examined by the students and serve as the basis for discussion and critical reflection on that culture (Ohara et al., 2000). In this light, remote rural schools are seen as handicapped in their attempt to expose learners to real material that should form the basis of primary sources. A teacher doing in-service training in education should understand this context and plan accordingly. Some learners have no access to videos outside the school or classroom environment, and exposing them to such material will likely have little impact on what they understand as history.
Nexus between theory and practice in critical pedagogy
Whilst there are strong theoretical assumptions on implementing critical pedagogy, it is prudent to ensure their relevance to the context and settings in which they are to be implemented. Previous studies have emphasised the need to link history content and teaching strategies so that lessons become comprehensive and respond to the immediate needs and expectations of learners (Bhurekeni, 2020). This can only be done if learners feel that what they are being taught is familiar and reflects their everyday experiences. Traditionally, the teachers who staffed most schools, especially in rural areas, had no prior training in either content or pedagogy. Most schools were utilising Advanced Level graduates to teach secondary school history. The shortage in human capital was worsened by the fact that most people dropped History as a subject, arguing that it was too complex. This mainly happened during the 2166 syllabus, which was also criticised in missionary schools for undermining the Christian religion (Chitate, 2005). It is within this context that many rural schools are bereft of trained human capital. Therefore, the training of graduate teachers represents an opportunity to amalgamate pedagogical and content knowledge. Shulman contends that pedagogical content knowledge includes subject matter knowledge and curriculum. The teacher is also expected to have grounded knowledge of students and pedagogical knowledge.
That history and historical knowledge is riddled with subjectivity is very clear and learners need to approach the subject with an open mind (Breunig, 2005). Interpretation of history is very critical, and facts are moulded rather than given. Many producers of public historical knowledge, whether they are states, religious institutions, the market, or private individuals, deliberately mould historical facts and fiction into emotionally appealing narratives that exclude other perspectives, thereby contributing to group identities (Black, 2005;Jonker, 2008;Lowenthal, 1998). This observation is informative for all parties that intend to do the discipline of history.
In line with the arguments above, Barton and Levstik (2004) as well as Barton (2006) find that history teaching should be directed at critical inquiry and dialogue about crucial historical events. This argument gives a presupposition that other events are not worth studying. This could be a challenge in terms of selection, especially on whose ideas count in accrediting certain events more important than others. However, the approach is seen as important in that it will serve to explore different perspectives and stimulate students to find mutual understanding before they agree on any historical narrative to be studied (Parkes, 2007). Whilst the process has several implications on the time spent during the lessons, it is worth the cause because it creates critical minds that are poised to be academics and theorists in future.
Theoretical framework
This study is informed by the critical social theory of learning, which is built on the basis of emancipation for the oppressed (Jennings et al., 2006). As such, critical pedagogy is understood as an approach to history teaching and learning which, according to Kincheloe (2005), is concerned with transforming oppressive relations of power. In doing so, the process tries to humanize and empower learners so that they approach real-life issues from an empowered mentality rather than a weakened position (Breunig, 2005). This approach to teaching is most associated with the Brazilian educator and activist Paulo Freire, who advocated for independent enquiry rather than a master and servant relationship in class (Freire, 1971).
Critical pedagogy, like critical theory, tries to transform oppressed people and to move them from being objects of education to subjects of their own autonomy and emancipation (Shor & Freire, 1987). In this view, secondary school learners should act in a way that enables them to transform their societies, which is best achieved through emancipatory education (Freire, 1971). For this particular study, rural secondary school learners should seek to change their situation. Through problem-posing education and questioning the problematic issues in their lives, these young scholars learn to think critically and develop a critical consciousness (Barton, 2006). This enables them to improve their life conditions and to take the necessary actions to build a more just and equitable society (Shor & Freire, 1987). Thus, it can be said that critical pedagogy challenges any form of domination, oppression and subordination with the goal of emancipating oppressed or marginalized people (Jennings et al., 2006). This is in keeping with the decolonisation process, where classrooms should not be seen as mini-prisons in which learners are under the subjugation and power of the teacher.
Methodology
This paper is based on an action research project done during teaching practice at Budirirai Secondary School in Mwenezi District. The action research concept was developed principally by Kurt Lewin (1946). Broadly, Stenhouse (1975) viewed much educational research as "unable to 'get at' the complexity of what goes on in the classroom, because of its distance and its framing of research questions in the form of objective and external questions". The intensive application of the critical methodology was implemented from January to April 2015, when the researcher was a student undergoing training in education at Great Zimbabwe University. The period covered the beginning of a school term and academic year for learners transitioning from ZJC to Ordinary Level. These learners had been exposed to the traditional teaching and learning approach that is conventional in all schools in Zimbabwe, where learners do not engage with primary sources.
The target population at Budirirai comprised 49 form three learners in one class that was used to assess the efficacy of enacting a critical pedagogy. Of these 49 learners, 20 were boys and 29 were girls. Six groups were created based on the surrounding villages where the learners came from, which also constitute the catchment area of the school. These villages are Musvoti, Zvihwa, Marufu, Sitera, Timire and Mangezi. This class was chosen because it had learners of mixed abilities, and they were being initiated into the 2167 syllabus, which enabled the researcher to test the approach with learners who had not been exposed to other teaching methods before. Exposing the learners to critical pedagogy was done through a deliberate process in which they were told that they needed to engage with primary sources and question evidence from textbooks rather than taking it as given. The idea was to make them comfortable in criticising their teacher and even the textbooks. In essence, the researcher encouraged the learners to do the discipline of history through active engagement. So, instead of the traditional approach of providing and explaining notes to the learners, the researcher preferred to give them research activities to complete in their groups. This was done as a way of encouraging them to visit elders in the villages, elicit primary evidence and report to their colleagues. This enabled the learners to critique sources of history and engage with what their peers reported.
Whilst History lessons of 40 minutes per day were held four times a week, learners were encouraged to work in six groups to complete given tasks over a fortnight. These village-based groups also enabled the collection of data, in the sense that learners would act as respondents during focus group discussions. The fact that each group was composed of learners coming from a homogenous location enabled coordination. Due to time constraints and the nature of the engagements, only a few aspects could be covered during the normal lesson period.
It is important to note that theory and practice are not separated in action research, because theory emerges from systematic and intentional reflection on practice (Loughran, 2002). Therefore, the methodology adopted in this study helped to build the theoretical underpinnings of action research. Teaching practice at Budirirai Secondary School exposed the researcher to the real-life challenges that in-service teachers experience in trying to engage in critical pedagogy. According to Hendricks (2009, p. 3), "Knowledge is something that action researchers do and it is their living practice". For 13 weeks, the researcher taught form three learners, and an attempt was made to discover and recover history through the learners' lived experiences in the six surrounding villages, which constitute the catchment area of the school.
Findings and discussion
The study sought to explore the teaching and learning strategies that learners considered desirable in the teaching and learning of their Ordinary Level History. During a focus-group discussion, 15 respondents from 3 different groups gave the following response to the question of how best history can be learnt: We cannot do without the notes that you give us. We do not understand the textbooks because the language used is rather complicated to us. Your notes are more simplified and we can easily identify the points.
The respondents in the Zvihwa group added the following dimension on the same question: We use the textbooks that you gave to us to work on assignments and group work. Although we face some difficulties in understanding the information in the textbooks, they help us to answer the assignment questions and researches.
The experiences of the researcher in the four different schools taught in Mwenezi District between 2009 and 2019 demonstrated that learners do not criticise the texts that they use. They simply regurgitate information that is provided in their notebooks, and the teacher tried in vain to motivate them to read in between the lines and formulate their own opinions. This is the reason why Kincheloe (2005) points out that texts and their themes should be provided by both teachers and history learners who bring their experiences for study and place that knowledge within the given context. The rationale behind this approach is to ensure that students are able to pick up themes that are most meaningful and most relevant to their own lives. This makes learning closer to the lived experiences and therefore they will internalise the information a great deal.
Adopting teaching strategies that promote interaction among learners is critical for training future historians. The teaching practice enabled the researcher to test the effectiveness of groups, especially at Budirirai Secondary School, where learners from the same village were grouped together for improved cohesion. Although some members in the groups were hardly participating, it is clear that their confidence was boosted, and they identified with the points raised by their fellow learners. The learners dialogued in the groups and there was an in-depth engagement in discussions that were held during feedback. This enabled a higher level of nuanced discussions as they learned from each other and theorised how to question the authoritarian power of the classroom.
Meeting the critical pedagogy expectations in a rural history class
The action research sought to identify the teaching and learning methods preferred by the learners when they enrolled for their Ordinary Level History course. To achieve this, learners were asked about the mode of delivery that they felt enhanced the acquisition of critical historical insights. Table 1 summarises the responses from the learners:

Table 1. Learners' preferred mode of delivery (n = 49)
Mode of delivery                                               Frequency
Active methods (role-plays, debates, discussions, essays)      33
Lecture method                                                 11
Blended learning                                               5

As shown in Table 1, 33 of the 49 learners preferred to be actively involved in the teaching and learning of their history lessons. They said role-plays, debates, group discussions and essay competitions were better modes of learning compared to the passive lecture method. Eleven learners preferred the lecture method, arguing that the teacher is a fountain of knowledge and that as pupils they cannot effectively explore the grand narratives of history. Five of the respondents indicated that there is a need to combine both teaching strategies depending on the nature of the phenomenon under study.
These responses reflect the differences among the learners, and the researcher employed several strategies to ensure that a critical pedagogy was implemented. The strategies included research assignments, role-plays, reading and analysing primary sources, as well as engaging in debates in class. The theoretical components of critical pedagogy in History enabled the researcher to understand teaching from a new perspective which espouses generative enquiry. Generative inquiry "embodies an underlying belief in children as learners whose natural curiosity leads them to explore their world in meaningful ways" (Manfra, 2019). Parallels can be drawn between these instructional and learning methods and the traditional approaches that emphasise the central role of the teacher (Freire, 1971). Indeed, the traditional thrust that sees students as having no right to question, reject, or reconstruct what they are told to take as unobjectionable and absolute is no longer taken seriously by teachers who have been exposed to pedagogical content knowledge. Yet in the language of critical pedagogy, the critical person is the one who is empowered to seek emancipation (Shulman).
As part of assessing the learners' comprehension of key taught concepts, the researcher started each lesson by enquiring about learners' experiences and worldviews. This enabled learning through the sharing of knowledge, which is in sync with the generative curriculum that holds that children can both learn and share their knowledge in multiple ways and that everyone has areas of strength that educational effort can capitalize on (Darling-Hammond et al., 2020). According to Manfra (2019), "this approach applies to both students and teachers as learners in the world and can lead to developing a broad repertoire of teaching strategies that enable children to approach their learning in different ways. In a generative curriculum, there is a continuous interplay between content learning and process learning. The two complement and enhance each other". Before enrolling for the course, the researcher placed emphasis on content delivery through the lecture method.
Embracing blended teaching and learning method ensured that learners actively participated in the lessons. One of the learners aptly observed that: When I participated in the role play on the historical experiences of economic organisation of pre-colonial states, it became easy for me to assess the relevance of history in the contemporary world. I managed to understand the trade interaction in the community as well as with foreigners.
In this light, lectures in critical pedagogy proved essential in that they made the researcher aware that he was not the fountain of history knowledge and historical facts. The experience clearly deemphasised the issue of chalk and talk, which is characteristic of the traditional teaching methods (Dewey, 1938). In the same vein, Freire (1971) argued that people need to engage in a praxis that incorporates theory, action, and reflection as a means to work toward social change and justice. In this sense, it will be imperative for history teachers in rural secondary settings to grasp the critical pedagogic content knowledge for them to be able to enact critical history pedagogy, giving learners time to reflect and assess their experiences.
Critically, the researcher's conceptions of history teaching changed through the encounter with critical pedagogy. The changes were seen in the implementation of a critical inquiry in the classroom as the researcher and pupils engaged with the history discourse. In an attempt to establish the effectiveness of group work, learners were given some research areas to work on and then present later. The researcher realised that some learners did not make any contribution because, when the time for feedback came, they had no subject matter knowledge to present. One group member had the following to say: When we worked on the question, these (names supplied) refused to participate and they said they were too busy to make contributions. It is only when you were around that they appeared to be following the process.
This observation was disturbing because critical pedagogy demands that the learners' experiential knowledge be valued and taken as the basis for all the learning that takes place in the day-to-day engagements in the classroom (Breunig, 2005). Yet such an important ingredient in the day-to-day lessons was undermined by resistance from some learners.
Positive results of critical pedagogy for rural teachers and learners
After doing the course on the pedagogics in History, the teacher had to change the teaching strategies that were teacher-centred and demeaned the role of the pupils. Role-play is one of the methods employed in teaching the organisational aspects of the society. It is a strategy that introduces a problem situation dramatically, provides opportunity for people to assume the roles of others and thus appreciate another point of view, allows for exploration of solutions, and provides opportunity to practice skills (Nsamenang & Tchombe, 2012). In one of the learning activities, learners dramatized the roles of the king in the pre-colonial states. In the discussion that ensued after the role-play, the following emerged: We noted that Simba (pseudonym) had a wide array of power, allocating land, solving disputes in the community, receiving tribute and leading various ceremonies. We also noted that he worked in consultation with advisors which was democratic.
This resonates well with the democratic perspective of critical pedagogy. By participating in the various roles, learners' interests were entrenched, and the teacher was able to analyse from an outsider perspective how learners related to what they were involved in.
The course enabled the researcher to employ the Socratic method of teaching history using a constructivist approach. The Socratic teaching strategy is accomplished by asking questions instead of by "telling" what the teacher intends to get across to learners (Nsamenang & Tchombe, 2012). In its purest form, the Socratic Method uses questions and only questions to arouse curiosity (Lam, 2011). Some of the questions posed in the History lessons include: • In your village or families, how do people negotiate lobola issues?
• From your experiences, why would people migrate?
• Explain the challenges of believing an individual's personal account of events.
This strategy served as a "logical, incremental, step-wise guide that enables students to figure out about a complex history topic or historical issue with their own thinking and insights" (Davies & Sinclair, 2012). In a democratic class environment that the researcher taught, the pupils were afforded the opportunity to express themselves in their most comfortable language since the teacher believes that they relate their own experiences in the discussions held. This was done during role-plays and debates to enable learners to freely express themselves.
As a result of the exposure to critical theory, the teacher imbibed the various teaching strategies that are in line with making pupils do the discipline of history. The idea is that the teacher should have big ears to listen instead of a big mouth to talk during the learning process. Good teaching is about caring, nurturing, and developing minds and talents. It is about devoting time, often invisible, to every student (Nsamenang & Tchombe, 2012). In this environment, learners assume the role of history researchers who engage in dialogue with both living and non-living historical sources. For example, learners managed to interact with communal leaders, such as kraal heads (Sabhuku), Headmen (Sadunhu), Chiefs (Vashe) and Councillors, to reflect on the political organisation of pre-colonial states and to make informed comparisons with contemporary experiences.
Elsewhere, it is noted that a trait synonymous with teachers who have experienced critical pedagogy in their studies is self-evaluation. Buchanan and Jackson (in Nsamenang & Tchombe, 2012) assert that self-evaluation refers to assessing one's strengths and weaknesses, one's successes and failures. As a practicing history teacher, the researcher had to make diagnostic evaluations of the lesson transactions. The evaluation done in both the lesson plan and the scheme of work enabled the researcher to carry out a cogent self-evaluation, noting weak areas and areas that needed improvement. Self-introspection suggests that the individual is involved in reflection and critical analysis, and that the teacher makes plans to impact on the future, plans that will make them "self-developing professionals" (Howard, 2006). The habit of reflecting on one's weaknesses enabled the teacher to become aware that teaching is a rather complex phenomenon.
An attempt to turn the theories of a Freirean critical pedagogy, developed and debated in the university course seminars, into pedagogical practices in history lessons at a rural school like Budirirai is putting theory into practice. The old Chinese adage which says "if I see I remember, if I read I forget but if I do I know" illustrates the importance of doing the discipline of History (Parkes, 2007). Indeed, listening is the last thing in the sequence of learning. This is because it assumes a slave-master dichotomy, which flies in the face of the democratisation of school experiences. Therefore, as part of emancipating learners, the researcher devoted most of the time to activities which were learner-centred.
In essence, the course in pedagogics transformed the researcher's vision of how the act of teaching should be done. It became clear that subject matter knowledge alone is not enough to make one a good teacher. The chalk-and-talk teaching strategy, as well as the idea of getting into class and bombarding pupils with notes, was highly discouraged. This is so because it rested on the assumption that the teacher is a reservoir of all history, yet history is a discipline of contestations. The teaching strategy also exposed the interest of the state in the content of history that has to be done by the pupils at school. This became evident when learners were given the syllabus to evaluate its composition as well as the topics covered. This was done to enlighten learners on whose interests are best served by history and historians. This confirmed the observation by Clark (2006) that the "resultant 'History Wars' need to be seen, however, within the longer trend to see history in schools as being part of nation building". In this context, the numerous inquiries into school history, civics and citizenship and values take on a problematic character and a particular view of the discipline of history.
Challenges of implementing a critical pedagogy in rural schools
Whilst it is prudent and ideal to move from the traditional lecture method in the history classes, implementing a critical pedagogy at ordinary level is riddled with several challenges. The course emphasised the use of inquiry in doing the discipline of history. Using inquiry-based learning in history takes a lot of time, energy, and planning, but it is often very effective in the long term (Nsamenang & Tchombe, 2012). During focus group discussions concerning the effectiveness of the new learning methodologies, one learner had this to say: But sir, you are not giving us notes that we need to use in our preparations for examinations. So how are we going to show our guardians that we are learning?
This revelation demonstrates that some learners had misgivings about the approaches that were being emphasised in class. The resistance was cemented by parents and guardians who still held on to the traditional view that teachers are the fountain of knowledge.
Whilst the argument above demonstrates the progress with which critical pedagogy facilitates strong and empowered learners, there were misgivings amongst some members in the class. Some learners indicated that the workload demanded by critical pedagogy was rather incompatible with the expectations in other learning areas and balancing these was a big challenge for them. Besides time constraints, the exercise also demands both learners and the teacher to increase their analytical skills. Rethinking and reimagining historical experiences is not something that can be easily done over a short period of time. The amount of rigor also meant that some learners with low abilities in terms of critical thinking ended up contributing virtually nothing. One learner said: I am not sure if I will register for this subject next year if this is the kind of analysis demanded in the subject.
Thus, whilst it was clear that when learners are exposed to critical pedagogy, they practice problem solving and critical thinking skills to arrive at a conclusion in history, it also created some backlash. This is despite the fact that the teaching method is student-centred and student-directed, and can be modified for students studying history at ordinary level.
Although the teaching method empowers learners to work as professional historians who deal with primary source documents, it was badly received by some learners. They claimed that exposing shortcomings of given texts, photographs and statistics was beyond their abilities. Another challenge was that the previous 2 years of traditional teaching and learning had left an indelible mark amongst learners.
Critical pedagogy as a source of transformation and empowerment
The thrust of engaging in critical pedagogy was to emancipate the learners as well as enrich the teacher's teaching strategies. Therefore, history teaching practiced after the engagement with critical pedagogy was transforming and empowering. The traditional vestiges of authorities were not only challenged, but they were also questioned by the practitioner. Indeed, the traditional image of teachers as voiceless implementers of educational policies and materials prepared by others became anachronistic. Teachers become aware of the limitations of the set standards within the institution espoused by the curriculum through the syllabus (Alsubaie, 2016). In the case of Zimbabwe, the current 2167 syllabus limits the potential of history in generating knowledge that questions the established structures by promoting rote learning and memorisation of given facts.
Teachers with no pedagogical content knowledge relate learning more to curriculum content and academic achievement (Breunig, 2005). This system conditions teachers' classroom behaviours to teach for examinations and not for lifelong learning. When learning is too dependent on the curriculum, it becomes "narrow for quality life-related learning outcomes and flexibility in teaching" (Darling-Hammond et al., 2020). Besides, teachers' perceptions about learning "look only at the rituals and routines that ensure effective learning is taking place" (Nsamenang & Tchombe, 2012). This attitude is worsened by a school system which is examination dominated. Too much focusing of learning on prescribed curriculum content limits the scope of classroom discussions (Kellaghan & Greaney, 2020). This is tantamount to enslavement, and such learners can be equated to a church congregation who meekly listen to the priest's sermon without making their own input.
In the traditional teaching set-up that the teacher engaged in before undertaking the course on pedagogics in History, the prime aim was to drill pupils for the examinations. This resonates well with the findings of other researchers who realised that the focus in traditional teaching strategies is on teacher-directed, concrete instruction that demands sustained concentration from pupils (Dunne et al., 2007; Kellaghan & Greaney, 2020). The university course enabled me to realise that for learning to be a conscious and deliberate activity, it requires effort from both teachers and pupils as co-constructors of knowledge. This is because not all that is said by the teacher makes historical knowledge. Indeed, some of the knowledge is constructed in the experiences of the learners at home and in life in general.
Having done the course on the pedagogics in History the teacher now knows how to control and attend to individual differences in the learners from various cultural backgrounds. The teacher was made aware that every history classroom is a multicultural context, even within the same ethnic community due to class differences and personal experiences. This enabled the teacher to make informed decisions on the comments to make to the pupils' work. Understanding the learners' pattern of development and the importance of individual differences enabled the teacher to make wise decisions concerning the choice of teaching methods and culturally appropriate illustrative examples. This was also made possible by the fact that the pupils themselves were given an opportunity to work in groups in doing research work. This enriched the discussions through diversity. This also resonates well with the democratic inclination of the transformative education.
After gaining pedagogic content knowledge through the studies, the researcher developed professionally and was also capacitated to deliver lessons in a way that liberates and empowers history students. The teacher becomes well equipped to enhance the optimal development of the student, and knows, and is ready to expand, the knowledge base of teaching and learning through research (Nsamenang & Tchombe, 2012). It has been noted that emphasis on content matter knowledge and pedagogical skills usually ignores the socio-political and cultural constraints that affect the teacher's work. Teaching in Mwenezi district enabled me to access the history that the pupils were prepared to share with me through dialogue, research presentations, class debates and drama.
Whilst the course in pedagogy at Great Zimbabwe helped in transforming the researcher's approach to teaching secondary school history, developing the ability to think historically is counter-intuitive and has been described as an "unnatural act" (Wilschut, 2019). It can seldom be acquired from everyday experiences. Rather, it requires systematic instruction in how the discipline of history operates (Parkes, 2007). Teachers exposed to critical history pedagogy are, however, expected to develop better historical understanding and how to transact business in their history classes compared to their counterparts without such exposure.
Conclusion and recommendations
Whilst teachers are expected to be transformative intellectuals who have the knowledge and skill to critique and transform existing inequalities in society, their experience may not be in sync with the realities in the community. In this regard, as a practitioner, I had to learn from the learners in order to appreciate their viewpoints. However, the culture of silence and lack of exposure limited the efficacy of this practice. Although an attempt was made to make the learners talk in groups and give feedback, some of the learners could not understand the aspect of critiquing the texts and their teacher.
Beliefs that are placed at the core of the definition of knowledge are tested and indeed acquire meaning only through the interaction between the believer and the environment. This implies that the content that is taught in schools needs to be reflective of the learner's experiences. The course of critical pedagogy enabled the researcher to remodel the lesson transactions so that they met these basic requirements. Therefore, it is recommended that teachers implementing critical pedagogy need to embrace these strategies.
This paper also recommends that there should be some flexibility in the history syllabus to allow learners and teachers to fully engage with history material. Following the letter and spirit of what is written in the official 2167 syllabus does not do any justice to nuanced discussions and the teachers and learners are severely limited in their attempts to delve into critical pedagogy. Besides, resource mobilisation is critical and schools need to support teachers who intend to improve rigour and discourse analysis on the subject through enacting a critical pedagogy. The researcher also suggests that teachers should adopt reflexive approaches in their day-to-day delivery of lessons. This practice enhances self-introspection and can improve the teaching of history at an early stage.
|
2021-12-22T17:06:47.913Z
|
2021-12-20T00:00:00.000
|
{
"year": 2022,
"sha1": "85f856512426162cc04866a8ef5aeb9213ac3709",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311983.2021.2010927?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "1db648bb38e952d4ba475e8e5a0821e2103e8341",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
}
|
233866717
|
pes2o/s2orc
|
v3-fos-license
|
Management and implementation strategies of pre-screening triage in children during coronavirus disease 2019 pandemic in Guangzhou, China
BACKGROUND Emerging infectious diseases are a constant threat to the public’s health and health care systems around the world. Coronavirus disease 2019 (COVID-19), which was defined by the World Health Organization as a pandemic, has rapidly emerged as a global health threat. Outbreak evolution and prevention of international implications require substantial flexibility of frontline health care facilities in their response. AIM To explore the effect of the implementation and management strategy of pre-screening triage in children during the COVID-19 pandemic. METHODS The standardized triage screening procedures included a standardized triage screening questionnaire, setup of a pre-screening triage station, multi-point temperature monitoring, extensive screenings, and two-way protection. In order to ensure the implementation of the pre-screening triage, the prevention and control management strategies included training, emergency exercises, and staff protection. Statistical analysis was performed on the data from all the children hospitalized from January 20, 2020 to March 20, 2020, during the pandemic period. Data were obtained from questionnaires and electronic medical record systems. RESULTS A total of 17561 children, including 2652 who met the criteria for screening, 192 suspected cases, and two confirmed cases without omission, were screened from January 20, 2020 to March 20, 2020, during the pandemic period. There was zero transmission of the infection to any medical staff. CONCLUSION The effective strategies for pre-screening triage have an essential role in the prevention and control of hospital infection.
INTRODUCTION
Emerging infectious diseases are a constant threat to the public's health and health care systems around the world [1] . Coronavirus disease 2019 (COVID-19), which was defined by the World Health Organization as a pandemic [2] , has rapidly emerged as a global health threat [3] . The virus was officially named as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [4] . To curb the transmission of the virus, health care professionals, committees, and governments have combined many approaches, such as extensive COVID-19 screening, effective patient triage, the transparent provision of information, and the use of information technology [5] .
Compared with adult cases, most of the cases found in children are obviously mild, and children tend to recover quickly and have a good prognosis. The incubation period of the virus in children has been reported to be 1 d to 14 d, most commonly 3 d to 7 d. Early general symptoms include fever, fatigue, cough, nasal congestion, runny nose, sputum production, nausea, vomiting, diarrhea, headache, dizziness, and the like, most of which disappear within 1 wk. If the condition worsens, dyspnea and cyanosis may appear. These often appear 1 wk after the onset of illness and may be accompanied by systemic toxic symptoms such as malaise or restlessness, difficulty in feeding, loss of appetite, reduced crying, and reduced body movement [6] . Mild and atypical presentations of the infection in children may make the identification of the disease challenging [7] .
Outbreak evolution and prevention of international implications require substantial flexibility of frontline health care facilities in their response [8] . Guangzhou Women and Children's Medical Center, located in Guangzhou, Guangdong Province, China, is a specialized tertiary pediatric hospital caring for children under 18 years of age and is the designated treatment center for children with SARS-CoV-2 infection in Guangdong Province during the epidemic period. To respond effectively to epidemic prevention and control, this hospital implemented extensive measures to prevent virus transmission, cross-infection, and medical staff infection. The effective strategies for pre-screening triage have an essential role in the prevention and control of hospital infection. The aim of this study was to explore the effect of the implementation and management strategy of pre-screening triage in children who consulted the Guangzhou Women and Children's Medical Center during the COVID-19 pandemic.
Standardized triage screening procedures
Development of a standardized triage screening questionnaire: Standardized triage screening procedures are shown in Figure 1. Standardized triage screening questionnaire was designed to assist health care providers with the next steps, which was about symptoms of fever, respiratory symptoms, fatigue, diarrhea, conjunctival congestion, and their travel history and any history of contact with people with confirmed cases of COVID-19.
All patients who visited the hospital, regardless of the reason for their visit, were required to fill out this questionnaire before entering the hospital. People with a mobile phone could fill out the questionnaire by scanning the QR code, which reduced the time of collecting information, minimized the risk of potential contamination by touching the pen and paper, and also shortened the waiting time. After this step, people were allowed to enter the hospital, depending on their symptoms. If the symptoms were combined with epidemiological risk, the patients were isolated, appropriate infection prevention and control measures were implemented, and testing for SARS-CoV-2 was initiated.
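The routing rule described above (symptoms combined with epidemiological risk trigger isolation and testing) can be sketched as a small decision function. This is an illustrative sketch only: the field names and routing labels are assumptions for clarity, not the hospital's actual implementation.

```python
# Illustrative sketch of the questionnaire-based triage decision.
# Field names and routing labels are hypothetical; the paper does not
# publish the hospital's actual algorithm.

SYMPTOM_FIELDS = ("fever", "respiratory_symptoms", "fatigue",
                  "diarrhea", "conjunctival_congestion")


def triage(answers: dict) -> str:
    """Route a patient based on the pre-screening questionnaire.

    `answers` holds booleans for the screened symptoms plus the
    epidemiological items (travel history, contact with a confirmed
    COVID-19 case). Missing keys are treated as "no".
    """
    symptoms = any(answers.get(k, False) for k in SYMPTOM_FIELDS)
    epi_risk = (answers.get("travel_history", False)
                or answers.get("contact_with_confirmed_case", False))

    if symptoms and epi_risk:
        # Isolate, apply infection prevention measures, test for SARS-CoV-2.
        return "isolate_and_test"
    if symptoms:
        # Fever or respiratory symptoms alone: routed for extensive screening.
        return "fever_clinic_screening"
    return "general_outpatient"


print(triage({"fever": True, "contact_with_confirmed_case": True}))  # isolate_and_test
print(triage({"respiratory_symptoms": True}))  # fever_clinic_screening
print(triage({}))  # general_outpatient
```

The point of the sketch is that the questionnaire makes the triage decision deterministic and auditable: every patient who enters the hospital passes through the same rule before reaching a clinic.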
Setup of pre-screening triage station: The triage station was located outside the entrance of the outpatient hall with highlighted signs. All patients had to enter the hospital from the pre-screening triage office. The medical guide nurse guided the patients to the pre-screening station for screening first and then to the corresponding general department after the initial exclusion of specific infectious diseases. The suspected cases were registered, and the patient was escorted to the fever clinic by designated routes.
Multi-point temperature monitoring and screening: Temperature monitoring was performed three times. First, when the children and their relatives reached the entrance of the hospital, the security guard used a mobile infrared thermometer to measure their body temperature. Second, when they went to the pre-screening triage station, the triage nurse measured the body temperature again while they were filling out the standardized triage screening questionnaire. Third, they needed to stop at the secondary triage table before the doctor's visit, presenting the registration report and having their temperature taken, where the nurse would also ask about the epidemic history.
When the doctor received a pediatric patient, he or she would again carefully ask about the epidemic history and comprehensively evaluate the symptoms of the child. At any of these four points, if the child's situation was consistent with a suspected case, he/she was escorted to the fever clinic for further treatment.
Extensive screenings
To respond effectively to the epidemic prevention and control work, all children with fever or respiratory symptoms were tested for SARS-CoV-2 during the pandemic.
Two-way protection
Based on the prevention and control measures of contact isolation, droplet isolation, and air isolation, an intensive hospital infection prevention and control strategy for the novel coronavirus was adopted to prevent transmission of the disease from patients to medical staff, transmission from medical staff to patients, and cross-infection among patients, by emphasizing "two-way protection". Trained physicians and nurses wearing personal protective equipment worked together on initial assessment and the differential diagnosis of fever. Routine hand sanitizers were kept ready to be used at any time. Hands were washed, or a quick hand disinfectant was used, after each contact with a patient, and the outpatient environment was kept clean and well ventilated. Those not wearing masks were promptly reminded and provided with masks in a timely manner.
Prevention and control management strategy
Training and emergency exercises: In order to cope with the changing situation of the epidemic, the prevention and control measures were accordingly adjusted. The emergency work manual at our hospital was updated from the first edition to the 18th edition. For each edition, the staff was organized to study and take exams. On the basis of training and examination, emergency exercises were organized, and all staff mastered the triage screening process through repeated simulation drills.
Staff protection: Scheduling followed the outpatient flow, and flexible scheduling ensured that triage nurses took turns to rest and avoided overwork. Managers strived to understand the psychological state of employees and provided timely psychological counseling. The department's infection control team was set up to monitor the physical health of all employees; body temperature was recorded twice a day, and abnormal symptoms were reported promptly. Once an employee was found to have a fever (above 37.3 °C), cough, or other symptoms related to COVID-19, he/she immediately stopped working. Virus pathogen samples were collected for testing twice within 24 h, and observation was performed in the designated areas.
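The staff-monitoring rule above reduces to a simple threshold check: a temperature above 37.3 °C or any COVID-related symptom means the employee stops working and is sampled. The sketch below illustrates that rule; the function and parameter names are assumptions for illustration, and only the 37.3 °C threshold comes from the text.

```python
# Illustrative sketch of the twice-daily staff health check described above.
# Names are hypothetical; only the 37.3 degC fever threshold is from the paper.

FEVER_THRESHOLD_C = 37.3


def staff_action(temperature_c: float, has_covid_symptoms: bool) -> str:
    """Decide whether a staff member may continue working.

    Fever (above 37.3 degC) or any COVID-related symptom (e.g. cough)
    means stopping work immediately: two pathogen samples within 24 h,
    then observation in a designated area.
    """
    if temperature_c > FEVER_THRESHOLD_C or has_covid_symptoms:
        return "stop_work_collect_samples"
    return "continue_work"


print(staff_action(36.8, False))  # continue_work
print(staff_action(37.5, False))  # stop_work_collect_samples
print(staff_action(36.9, True))   # stop_work_collect_samples
```

Note that the rule is strict ("above 37.3 °C"), so a reading of exactly 37.3 °C without symptoms does not trigger the stop-work pathway in this sketch.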
Data collection
Statistical analysis was performed on the data from all children hospitalized from January 20, 2020 to March 20, 2020, during the epidemic period. The age, gender, clinical manifestations, epidemic history, time of onset, and other relevant data were collected by questionnaires and electronic medical record systems.
Ethics approval
Ethics approval was obtained from the Ethics Committee of Guangzhou Women and Children's Medical Center.
RESULTS
A total of 17561 children, including 2652 who met the criteria for screening, 192 suspected cases, and two confirmed cases without omission, were screened from January 20, 2020 to March 20, 2020, during the pandemic period (Table 1).
Of the screened children, 42.67% were female, with a median age of 48 mo; 68.9% were from local areas, 20.2% had an epidemic history, and the median onset time was 1 d. The average time to fill out the questionnaire was 2.9 ± 3.5 min. There was zero transmission of the infection to any medical staff.
DISCUSSION
It has been reported that COVID-19 can spread through droplets, aerosols, contact, or digestive tract [9,10] . The hospital is a high-risk area for nosocomial transmission, and the most vital strategy for minimizing the risk of nosocomial infection starts from the triage stations [11] .
As the designated treatment center for children with SARS-CoV-2 infection in Guangdong Province during the epidemic period, we developed standardized triage screening procedures to assist health care providers. A simple questionnaire addressing crucial points was designed to assist in diagnosing patients. The multi-point temperature monitoring and screening, extensive screenings, and two-way protection are all effective methods for preventing the spread of the pandemic. There were no missed infected patients and no transmission of the infection to any medical staff. Similar studies have shown that, in Western Chongqing, COVID-19 was rapidly and well controlled in all of the counties, mainly due to qualified triage stations and fever clinics in combination with community isolation, quarantine, and medical support [12] . The first initiated fever screening system had an important role in the prevention and control of hospital infection at the Third People's Hospital of Shenzhen, China [13] .
Compared with the adult cases, most of those found in children were obviously mild [6] . Mild and atypical presentations of the infection in children may make the infection challenging to detect [7] . Extensive screenings allowed the early identification of asymptomatic or mild patients who had viral loads.
Infectious disease risk screening, similar to other disaster plans, must be flexible enough to adapt to specific emergency situations [14] . In order to cope with the changing situation of the pandemic, the prevention and control measures are also being adjusted accordingly. Recent research has shown that obesity plays an important role in the pathogenesis and transmission of COVID-19 infection [15] and may be a risk factor for COVID-19-related mortality, while thrombotic events are an aggravating cause of death [16] . Endothelial damage was found in obese children, confirmed by increased carotid intima-media thickness values [17] . These findings suggest that extra attention should be paid to patients with obesity in the screening of COVID-19 during this pandemic.
This study has a few limitations. First, this is a retrospective single-center study. Second, the pre-screening triage process is not unique, and it is based on national or local policies to a large extent. April 26, 2021 Volume 9 Issue 12
CONCLUSION
Effective pre-screening triage strategies have an important role in the prevention and control of hospital infection. As the epidemic situation changes, preventive and control strategies must be adjusted in a timely manner.
Research background
Coronavirus disease 2019 (COVID-19), which was declared a pandemic by the World Health Organization, has rapidly emerged as a global health threat. Compared with adult cases, most cases found in children are obviously mild, and children tend to recover quickly and have a good prognosis. Mild and atypical presentations of the infection in children may make identification of the disease challenging. The implementation and management of pre-screening triage in children played an important role in the prevention and control of the pandemic.
Research motivation
The hospital is a high-risk area for nosocomial transmission, and the most vital strategy for minimizing the risk of nosocomial infection starts from the triage stations. The effective strategies for pre-screening triage have an essential role in the prevention and control of hospital infection. Outbreak evolution and prevention of international implications require substantial flexibility of frontline health care facilities in their response. The prevention and control measures need to adjust accordingly, in order to cope with the changing situation of the pandemic.
Research objectives
To explore the effect of the implementation and management strategy of pre-screening triage in children during COVID-19 pandemic.
Research methods
The standardized triage screening procedures included a standardized triage screening questionnaire, setup of a pre-screening triage station, multi-point temperature monitoring, extensive screenings, and two-way protection. In order to ensure the implementation of the pre-screening triage, the prevention and control management strategies included training, emergency exercises, and staff protection. Statistical analysis was performed on the data from all the children hospitalized from January 20 to March 20, 2020, during the pandemic period.
Research results
A total of 17561 children were screened from January 20 to March 20, 2020, during the pandemic period, including 2652 who met the criteria for screening, 192 suspected cases, and two confirmed cases, with no cases missed. There was no transmission of the infection to any medical staff.
Research conclusions
We developed standardized triage screening procedures to assist health care providers. A simple questionnaire addressing crucial points was designed to assist in diagnosing patients. Multi-point temperature monitoring, extensive screenings, and two-way protection are all effective methods for preventing the spread of the epidemic. There were no missed infected patients and no transmission of the infection to any medical staff. Effective pre-screening triage strategies have an essential role in the prevention and control of hospital infection.
Health impact assessment of community-based solid waste management facilities in Ilorin West Local Government Area, Kwara State, Nigeria
Adverse inter-relationships between man and his environment have been the main cause of disequilibrium, which usually results in negative effects on man himself, his environment and his survival, as epitomized by the current global climate change phenomenon. This study applied the concept of Health Impact Assessment (HIA), an evolution of Environmental Impact Assessment (EIA), to predict the health impact of the proposed community-based solid waste management facility in Ilorin metropolis, which is part of the Millennium Development Goals' (MDG) Health and Environmental Sustainability Projects (Goals 4-7). Data were collected through systematic sampling from four administrative wards in the metropolis. Using the Nigerian Institute of Town Planners' (NITP) guidelines on impact assessment, the study found that there are many benefits derivable from the proposed project; however, there can also be negative impacts, even if the Environmental Management Plan and project operational guidelines are strictly adhered to. This underscores the need for overall assessment of such projects, as against the benefit-only analysis usually embarked upon by proponents of such investments. The study further suggested different participatory approaches to establishing the sustainability of projects.
mium, minerals and synthetic chemicals present in wastes can contribute to the pollution of surface and underground water, and to environmental degradation. This can escalate with increasing rates of urbanization and the equally increasing range of economic activities in cities and towns. This unplanned and, unfortunately, unmanaged situation has the capacity to reduce the capability of local governments and urban authorities to effectively manage waste in their domains. However, governments and urban authorities have continued to invest enormous resources in fighting this menace in order to ensure healthy living conditions and livelihoods for their citizens. Hence the increasing calls for strategies to evaluate these engagements. The impact assessment process, a planning tool that provides a veritable opportunity to integrate the views, concerns and values of the affected population, is one such approach.
Justification for the study
Indiscriminate disposal of solid wastes has the potential to damage the environment and the health of people. In Nigeria, waste management is at its lowest ebb in most towns and cities. In many inner and peri-urban centres, refuse heaps are left unattended, and where Local Government Authorities do the collection, it is often irregular and sporadic. The recycling of waste is almost unknown, while methods of collection and final disposal are very unsatisfactory. The alarming rate at which heaps of solid waste continue to occupy cities, coupled with the fact that 87% of Nigerians use disposal methods adjudged unsanitary, has not only contributed to visual blight and odour, but has also encouraged the breeding of rodents, mosquitoes, and other pests, raising serious public health concerns (Ónibokun, 2000). For instance, about 50% of Nigerians suffer at least one acute episode of malaria every year, with grave socio-economic implications in terms of productivity and cost of medications, in addition to Infant and Child Mortality Rates of 100 and 201 per 1000 live births, respectively (NPC, 2005).
The situation in Kwara State, and Ilorin in particular, is a replica of similar issues across Nigeria. The establishment of Millennium Development Goals' (MDGs) Community-Based Solid Waste Management facilities for the cities is seen as a panacea to the solid waste management problems. The facility involves processing and transforming municipal solid wastes into useful end products such as domestic goods, fertilizers, and steel implements, under a congenial and healthy environment. It is imperative that projects such as this, designed to alleviate the problem, be validly assessed for their potential health impacts in the context of the target population; hence the application of the Health Impact Assessment (HIA) technique, which has much in common with Environmental Impact Assessment (EIA).
Aim and objective of the study
The aim of this study is to assess the potential health impacts of the execution of the solid waste management facility on the population and the environment, with a view to informing planning policy decisions for improving public health in Ilorin West Local Government Area and beyond. The specific objectives of this study include:
- Identification of the target communities at the old dump sites and around the location of the proposed facility;
- Description of the socio-economic characteristics of the communities;
- Assessment of existing waste management practice;
- Assessment of the environmental and health impacts of the proposed facility.
Study area
Ilorin metropolis is the administrative capital of Kwara State. It lies on latitude 8°30'N and longitude 4°30'E, and its elevation ranges from 250 to 400 m above sea level. It is also the headquarters of Ilorin West Local Government Area (LGA), which is surrounded by other LGAs of the state. This gives it the roles of commercial and administrative capital of the State and headquarters of Ilorin West LGA; together with Ilorin East, Ilorin South, Asa and Moro LGAs, it constitutes the Ilorin Emirate. The location of Ilorin West is shown in Figure 1. Ilorin is made up of diverse ethnic groups, mainly Yoruba, Fulani, Hausa, Kambari, Gobir, and Nupe. The multi-linguistic and multi-cultural nature of the people can be traced to their historical background. Ilorin is said to have been founded as hamlets in the 17th century by an itinerant farmer called Ojo from Gambe near Oyo-Ile. The hitherto existing hamlets were consolidated in the 1830s under the sovereignty of Fulani hegemony by Abdul-Salam, the son of Sheikh Alimi. The total population of Ilorin West LGA was 365,221 in 2006, comprising 180,387 males and 184,834 females, making it the most populous LGA in Kwara State, with a growth rate of 3.0% (NPC, 2006).
The major occupation of the people is mixed farming. The wide expanse of arable and fertile soil and favourable climatic conditions support the cultivation of a variety of food and cash crops, including cashew, yam, beans, groundnut, varieties of vegetables, maize and guinea corn.
The rearing of animals is made possible by the savannah type of vegetation. Other prominent economic activities include cloth weaving, pottery making, blacksmithing, shea butter production, and gum processing. Participatory Development and Sustainability Analyses (PDSA), as noted by Ohakweh and Ezirium (2006), involve getting all key people and institutions involved in the development decisions that affect them, an indispensable ingredient in achieving sustainable development. When a beneficiary community is involved in project development and implementation, it helps to build local capacity to solve problems and make sound decisions. This in turn improves the chance that facilities and services will be used and maintained on a sustainable basis. Thus, one of the most important determinants of project success is the attention given to institutional arrangements, particularly with respect to the receiving side or the inputs by the beneficiary groups (Yahie, 1993; Narayam, 1996, cited in World Bank, 1998). Participation implies that people require a greater voice in local affairs and an expanded role in decision-making processes. The benefits of participation derive not only from mobilizing additional community resources but, more importantly, from increased effectiveness in the use of available resources: skills and knowledge (Honadle and Vansant, 1995, cited in World Bank, 1998). The World Bank (1998) defines project sustainability generally as the capacity of a project to continue to deliver its intended benefits over an extended period of time. However, this depends on whether or not a balance can be achieved in the use of the principal forms of capital, namely human, natural, cultural, institutional, physical and financial.
Health Impact Assessment (HIA): An evolution from EIA
The environment has many connotations. For many persons, it is the natural world of plants and animals. In planning, the environment includes not just the natural surroundings but also natural factors such as water and wildlife and economic and social features such as employment and housing (Frank et al., 1977, cited in Nwafor, 2009). EIA thus involves just about everything, from environmental, economic or political matters to concerns such as energy and air pollution. It is also a statutory requirement in many countries before a proposed project is approved. The main purpose of EIA is to determine the outcome of a development proposal through the process of generating information on the various changes that may occur in the environment in response to the implementation of a particular proposed activity, and to aid decision-makers in understanding the possible or likely impacts of a proposed project (Ortolano, 1984; Wathern, 1990). The other purpose of the assessment is to ensure that decision makers consider the ensuing environmental impact when deciding whether to proceed with the project. Hence, predictions constitute much of the basis of EIA. Indeed, the whole EIA exercise is about prediction (Glasson et al., 1999).
Health Impact Assessment (HIA) is the stock-taking evaluation of the overall or marginal gains and deficiencies in the total well-being and health status of a defined population as a result of natural occurrences or other man-made interventions. Such gains or deficiencies can be measured in terms of longevity, wellness, health promotion and productivity (Abanobi, 1997). HIA, therefore, is the estimation of the effects of a specified action on the health of a defined population, with a view to assessing the potential health impacts (positive and negative) of policies, programmes and projects, and to improving the quality of public policy decision-making through recommendations that enhance predicted positive health impacts and minimize negative ones.
Impact assessment in the planning process
All planning processes have the same principal elements: identifying problems and goals; specifying objectives; compiling an inventory of conditions and resources; developing alternatives; evaluating alternatives; and plan implementation and monitoring. Impact assessment applied to all these planning elements is aimed at avoiding, reducing or mitigating any adverse effects of implementing a program or a project. It is more than the coverage of economic, physical and social concerns in the planning process (Frank, 1977); therefore, it is not an activity that is handled separately from other planning functions.
When undertaking a HIA, the stages involved are progressively outlined; however, they may not necessarily be implemented in a strict serial fashion. In practice, one often has to return to an earlier stage when there is more information (Sridhar, 2007). Key features to be considered, according to Abanobi (2008), include: screening, scoping, identifying impacts, assessing impacts, making recommendations, and monitoring impacts. The first step in the HIA process, having decided to do it, is to have a quick review of the possible health impacts as shown in Table 1, and also to consider the size and importance of the proposal and the availability of resources to do the assessment. A good way is to use a checklist that covers questions like: Does the proposal impact on one or more determinants of health? What are the personal and family lifestyles and characteristics, socio-economic environment, physical environment, and access to and quality of health and other services? Will any of the results of the proposal be irreversible? What population subgroups will be affected by the proposal? Who might be disadvantaged by the proposal? What is the geographical and population scale of the proposal? Is there conflict or disagreement about the proposal? If so, would a HIA help to resolve it? Are there time, money and expertise to do a HIA? Is it possible to change the proposal if necessary?
RESEARCH METHODS
The study area is officially structured into twelve (12) political and administrative wards in the LGA. From these, four (4) wards were randomly selected for intensive study: Alanamu, Ajikobi, Baboko and Magaji Ngeri. These wards, situated within Ilorin Metropolis, were selected to provide estimates of demographic and other socio-economic characteristics for the entire Local Government Area, and were further stratified into 36 clusters as shown in Table 2. The four sampled wards have an estimated population of 470,400 residents. Since the study is household-based, a total of 8,231 households were listed in the four wards, out of the 19,856 households found there (about 33% of the LGA total) (Table 2). The number of sampled households came to 2000, about 10% of the households, selected systematically at 1 in every 3. Instruments used to collect first-hand primary information include structured questionnaires on demographic and socio-economic characteristics and the identification. In addition, a land survey of the area was carried out using theodolite and high-precision GPS equipment to obtain coordinates and other characteristics of the sites. Secondary data were sourced from published and unpublished sources such as academic journals, books, and internet materials. Descriptive statistical methods were used to analyse data on demographic and socio-economic variables of the target communities, including population characteristics (size and composition), household size, projections, economic activities, social and cultural structure, and property characteristics. Both quantitative and qualitative statistical techniques of data analysis were further used, as designed for impact assessment studies.
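The 1-in-3 systematic household selection described above can be sketched as follows. This is a generic illustration only: the function name and household labels are hypothetical, and since a 1-in-3 draw from 8,231 listed households yields roughly 2,700 units rather than the 2,000 reported, the sketch shows the standard method rather than reproducing the paper's exact figures.

```python
import random

def systematic_sample(frame, interval, seed=None):
    """1-in-k systematic sample: pick a random start within the first
    interval, then take every k-th unit from the sampling frame."""
    rng = random.Random(seed)
    start = rng.randrange(interval)  # random start in [0, interval)
    return frame[start::interval]

# Hypothetical frame standing in for the 8,231 listed households.
households = [f"HH-{i:04d}" for i in range(8231)]
sample = systematic_sample(households, interval=3, seed=42)
print(len(sample))  # 2743 or 2744 units, depending on the random start
```

Systematic sampling is attractive for household listings like this one because it needs only a sorted frame and a counting rule, and it spreads the sample evenly across the listing order.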
Economic activities and livelihoods

The people are engaged in numerous occupational activities. The major economic activities in the area are shown in Table 3.

A HIA has the following key characteristics:
- Undertaken on policies, programs, projects, plans or other detailed strategic proposals.
- Undertaken when it will add value to decision-making processes.
- Undertaken prior to the implementation of the policy, program or project that is being assessed. It is prospective, pre-emptive, based on forecasts and predictions.
- Should assess or identify the potential positive and negative impacts on health.
- Should look at the impact on populations both directly and indirectly affected by the proposal.
- Should include equity as a central concern.
- Should engage key stakeholders in the formulation of recommendations.
- Should be solution-focused.
- Should aim at enhancing the benefits of health and minimize any risks to health.
- Should include explicit consideration of the differential impacts on different groups in the population.
Facilities and services
Meanwhile, the areas lack modern facilities and infrastructure that could enhance well-being, as indicated in Table 4. The only source of water at Gaa Saka is a single water well serving the whole community of seventy-two people. In Modi, there is also a single water well, while the only borehole found there was inactive; however, it is the only place among the five with a primary school. Gerewu has a borehole and a water well, Peke has a natural stream, and Idi-Ape has natural spring water as their sources of water supply, but no school. Finally, basic health facilities, such as a maternity, primary health centre, dispensary and patent medicine store, are totally absent in these communities. None of their roads is tarred or graded, which makes them inaccessible in the rainy season.
Housing facilities
Although there are several facilities a housing unit is expected to possess, the analysis here is restricted to two variables, toilet facility and source of domestic water, which are directly related to health and sanitation. While pit latrines (51%) are the commonest toilet facility in the entire study area, in Magaji Ngeri ward 64.2% of facilities are bucket latrines. The other three wards are characterized by pit latrines, specifically Ajikobi (58%), Alanamu (54%) and Baboko (66%). Modern water closet types are minimally distributed within Ajikobi (6%), Alanamu (14%), Baboko (10%) and Magaji Ngeri (6%). Ajikobi is also second in bucket latrine use, at 24% (Table 4).
In the study area, interestingly, about 72% of domestic water comes from improved sources, comprising 31% tap water and 41% borehole water. The borehole source (67%) is commonest in Magaji Ngeri, while the tap source (63%) is commonest in Baboko. Alanamu ward has the greatest percentage (30%) of well water, followed by Ajikobi and Baboko with about 29% each and Magaji Ngeri with 20%. Dependence on streams or rainwater in this area is virtually non-existent, with only Alanamu having 0.5% of each.
Methods of disposal of wastes by the households
The waste disposal methods include burning, dumping in Kwara State Waste Management Company refuse bins, along the street, at the central dump, at communal dumps, at open dumps, in drainage/canals/streams, and others. As shown in Table 5 and Figure 2, the communities dispose of their solid waste in many ways. The majority of households in Ajikobi (about 56%) dump their wastes along the street, while about 41% in Alanamu burn theirs. In Magaji Ngeri, about 33% dump theirs in the refuse bins of the Kwara State Waste Management Company (KWMC). However, it is common in all wards to dump refuse at unauthorized places, such as the central dump, commercial bin dumps, drainage/canals/streams and other sensitive places.
Description of the project
This project concerns the establishment of Millennium Development Goals (MDGs) Community-Based Solid Waste Management facilities for cities in Nigeria. It is conceptualized, supervised and managed by the Federal Ministry of Environment and financed by the World Bank, and it is designed with the expectation of being handed over to the State and Local Governments for day-to-day running. The establishment of these facilities is seen as a panacea for the solid waste management problems. The facility involves processing and transforming municipal solid waste into useful end products such as fertilizers and steel implements, under a congenial and healthy environment.
Potential positive and negative impacts of the project
The impacts assessed in this work are broken into two groups: positive and negative. There is a thin line between the health impacts and their socio-economic and environmental counterparts, as they are interwoven and intertwined in many respects. The level of environmental friendliness of an area and its socio-economic well-being are a function of the health status of its people and their communities. The following are the potential positive and negative impacts of the project (Table 6).
Potential positive impacts on the communities
Creation of employment opportunities: With the take-off of this project, jobs will be created to the benefit of the people in the localities. Such jobs may, however, involve low-skilled labour such as watchmen, labourers, artisans, gardeners, and petty trading, thereby enhancing the standard of living of these ordinary people.
Increase demand for basic goods and services:
The functioning of the waste management facility in the area will lead to an increase in demand for basic goods and services by and for all stakeholders, such as petty traders, artisans, property owners, water sellers, and restaurant operators. This will improve the economic well-being of the communities. There will also be improvement of basic infrastructure such as roads, water supply, and power supply, especially in these rural communities.
Increase urbanization: Due to in-migration of recycling plant workers and traders, there is bound to be an increase in the population of these localities as workers tend to settle down in them. This will lead to increased social and economic activities as the localities change gradually from rural to urban.

Potential positive and negative impacts of the project (Table 6; Source: Fieldwork, 2009):

Positive:
- Creation of employment opportunities.
- Increased demand for basic goods and services, hence improved economy.
- Upgrading of infrastructures and facilities.
- Increased urbanization, due to immigration of recycling plant workers and traders.

Negative:
- Increased environmental and health problems.
- Likely chemical explosions and fire hazards.
- Pollution of sources and channels of water.
- Reduction in size of cultivable arable land.
- Abandonment of agriculture for formal jobs.
Increase environmental and health problems:
The location of the existing dumpsites around the rural communities could lead to potential environmental and health problems. With the proposed facility located not far from the former, the problems are bound to be aggravated unless pro-active measures are taken. In the five communities in which the study was carried out, problems of nauseating odour, swarms of flies, mosquitoes, rodents and air pollution were endemic, with attendant diseases such as fever (malaria, typhoid), TB, and related illnesses. The large-scale poverty of the people, coupled with the inadequacy of basic facilities such as pipe-borne water, electricity, adequate modern shelter, food, tarred roads, schools and markets, exacerbated the problem.
Possibility of chemical explosions and fire hazards:
Gas, liquids and fumes produced by the decomposition of wastes can be explosive if they accumulate in confined spaces, e.g. cellars of buildings. This may also lead to major fires, with attendant effects such as air pollution, decreased visibility, and fire hazards.
Water pollution and general environmental pollution:
The tendency for the sources and channels of water supply to be greatly polluted is very high in the communities. Polluted water flowing from waste dumps and recycling plants can cause serious pollution of water supplies. The careless and disorderly driving of the waste management trucks usually results in noise pollution, which may impede the peace and tranquillity of these rural communities. Heavy trucks also cause significant damage to roads that were not designed for their weight and frequency, creating potholes and resulting in intense erosion.
Deforestation:
The clearing of trees for this project results in a negative impact, with attendant distortion of the ecosystem and destruction of flora and fauna. It also leads to a reduction in the size of arable land and loss of food sources, hunting, fuel energy, raw materials for building, and herbs. To mitigate this, an afforestation programme must be adopted for the remaining lands to curb indiscriminate clearing of land. Erosion control strategies must also be employed to protect the already distorted ecosystem of the project site and surrounding area, while environmental and forestry protection laws must be strictly enforced.
The establishment of a recycling plant which necessarily will generate other forms of occupation different from conventional agriculture will not only encourage the abandonment of the latter but will result in the reduction of available cultivable land. Hence, reduction in the volume of food produced in the area.
General evaluation of the impacts of the project
An assessment of the key impacts identified earlier becomes pertinent in order to arrive at an objective and independent decision. Consequently, the authors used a mathematical weighting technique, which involves the assignment of weights ranging from 1 to 5 to the environmental and health impact factors. This is in accordance with the Nigerian Institute of Town Planners (NITP) scores guide, as follows: 5 (Very Positive Impact), 4 (Fairly Positive Impact), 3 (Neutral Impact), 2 (Fairly Negative Impact) and 1 (Very Negative Impact) (Table 7). The percentage score is thus 87.2%, a rating interpreted by the NITP scoring guide as Acceptable.
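The percentage-score computation implied by this weighting can be illustrated with a short sketch. The per-factor scores below are hypothetical, since Table 7's individual values are not reproduced here; only the method of converting 1-5 weights into a percentage of the maximum possible score is shown.

```python
# NITP-style impact scoring: each factor gets a weight from 1 to 5.
NITP_SCALE = {
    5: "Very Positive Impact",
    4: "Fairly Positive Impact",
    3: "Neutral Impact",
    2: "Fairly Negative Impact",
    1: "Very Negative Impact",
}

def percentage_score(scores):
    """Sum of assigned weights as a share of the maximum possible (all 5s)."""
    if not scores:
        raise ValueError("at least one impact factor is required")
    if any(s not in NITP_SCALE for s in scores):
        raise ValueError("scores must be integers from 1 to 5")
    return 100.0 * sum(scores) / (5 * len(scores))

# Hypothetical example: ten impact factors scored on the 1-5 scale.
example = [5, 5, 4, 5, 4, 5, 4, 5, 4, 2]
print(f"{percentage_score(example):.1f}%")  # prints 86.0%
```

A score near the paper's 87.2% arises when most factors are rated fairly or very positive, with only a few negative ratings pulling the total down.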
RECOMMENDATIONS
HIA's strength lies in its being a tool which enables informed policy decisions to be made based on a valid assessment of their potential health impacts, at the same time adding health awareness to policy making at every level. This study has shown that it is more than just a monitoring or evaluation tool, though it has much in common with the more established EIA. HIA provides a practical framework for identifying health impacts and ways of addressing them within its principles of social model of health, equity and social justice, multidisciplinary and participatory approach, use of qualitative and quantitative evidence, explicit values and openness to public scrutiny. Through evaluation of impacts and other assessments the proposed establishment of the Solid Waste Management facility at Ita-Amo Area of Ilorin West LGA with performance of over 87%, is considered acceptable. The study vividly shows that the establishment of a modern solid waste management facility for the communities is a right step in the right direction as it will bring about several benefits. In addition, it was revealed that most HIAs need input from people with different perspective and from different organizations. It is a veritable opportunity to integrate these views, concerns and values of the affected population in the planning of projects that may have potential impact on their lives. The following recommendations are made to further strengthen these points.
-Efforts should be made by planners to seek harmony in partnership with public health workers in particular and the community in general in the process of conducting HIA studies.
-HIA studies should be integrated formally into the planning process in Nigeria with necessary statutes.
-Advocacy and sensitization of the members of the public, community participation in all waste management projects, plans, policies and programmes and institutionalization of good governance at all levels are necessary impetus for the success of the project. -Embrace Integrated Waste Management (IWM) strategy which is "the selection and application of suitable techniques, technologies, and management programme to achieve specific waste management objective and goals." Every component of the waste should be taken into consideration in the management practice. -Guidelines of the design for effective environmental laws should include management instruments that are well thought out, sound and result-oriented. For example; Environmental Management Plan, Health and Safety Plan, Monitoring and Evaluation Plan, Action Plan, and so on.
-The current and future environmental policy, and in particular the waste management policy on the ground, must have well-specified guidelines. For example, it should specify and implement source segregation of non-hazardous recyclable waste, so that the economic incentive for waste picking at disposal sites is reduced.
-Waste pickers should be integrated into the main stream of Waste Management and provide basic healthcare facilities for their operations and healthful living.
Oyekan and Sulyman 35
There should be registration of these and other workers in the waste management chain, including waste pickers, who should be provided with medication and adequate vaccination.
-Waste pickers in addition should be provided with quality education to enhance their work, about personal hygiene, and safe care.
-They can also be trained on areas to diversify their skills in areas like livestock rearing, solid waste re-use and recycling.
-The management of hazardous chemicals is not only a matter of technology and legislation, but also enforcement and funding. Some wastes are so hazardous and expensive to treat that priority attention should be focused on changing to processes that use substitutes that are less hazardous, and to minimizing the quantities that are discarded. Indeed minimization and substitution should be seen as the preferred options in dealing with difficult waste.
Comparison of UVA vs UVB Photoaging Rat Models in Short-term Exposure
Background: Prolonged exposure to sunlight is known to induce photoaging of the skin, leading to various skin changes and disorders, such as dryness, wrinkles, irregular pigmentation, and even cancer. Ultraviolet A (UVA) and ultraviolet B (UVB) radiation are particularly responsible for causing photoaging. Objective: This study aims to identify and compare photoaging rat models exposed to UVA and UVB. Methods: This study compared macroscopic (scoring degree of wrinkling) and microscopic (histology) signs and symptoms in skin samples of rats exposed to UVA and UVB for 4 weeks at a radiation dose of 840 mJ/cm2. Results: The results of this study indicated that the degree of wrinkling was highest in rat skin exposed to UVB rays, at 51% (p<0.05). UVB histological results showed a thickened epidermis layer (40 µm, p<0.05) and a thinned dermis layer (283 µm, p<0.05) in the skin of rats exposed to UVB light. The UVB group showed a collagen density in the dermis with a mean value of 55% (p<0.05). Conclusion: Our results suggest that short-term exposure to UVB radiation (in the acute, subacute or subchronic phase) induces more rapid and pronounced damage to rat skin when compared to UVA radiation exposure.
The most important benefit of ultraviolet light lies in its role in facilitating the synthesis of vitamin D within the human body. Nonetheless, prolonged exposure to UV light can have detrimental effects on the structural integrity, physiological functionality, and barrier properties of the skin, ultimately leading to the process of photoaging. Photoaging is a skin aging process in the form of photodamage induced by sun exposure. Skin aging is a complex biological process that cannot be avoided and affects the appearance of the skin due to a decrease in the ability to restore normal skin function [1,2]. Basically, there are two processes of skin aging, namely intrinsic aging and extrinsic aging. Intrinsic aging (genetics, race, variations in skin anatomy in certain areas, and hormonal changes) is a natural skin aging process that occurs with age and progresses slowly [3]. Extrinsic aging is triggered by exposure to sunlight containing ultraviolet (UV) light, known as photoaging. About 80% of facial skin aging is related to sun exposure [4].
According to research conducted in Australia, approximately 72% of men and 42% of women below the age of 30 encounter photoaging effects [5]. Being a tropical country with year-round exposure to the sun's ultraviolet rays, Indonesia has a population highly susceptible to photoaging. The prevalence of photoaging symptoms on the face was most commonly found in Javanese ethnicity (30.5%), the age group 30-39 years (59.1%), non-smoking behavior (35%), indoor work such as laboratory assistants, employees and teachers (71%), duration of sun exposure > 34 hours/week (62%), Fitzpatrick IV skin type (moderate brown skin) (65%) and lentigo symptoms (28%) [6,7].
Sunlight emits ultraviolet (UV) rays, which can be categorized into three types: ultraviolet A (UVA) with a wavelength of 320-400 nm, ultraviolet B (UVB) with a wavelength of 280-320 nm, and ultraviolet C (UVC) with a wavelength of 100-280 nm. UVA, characterized by its long wave, constitutes 95% of the UV rays reaching the Earth's surface, allowing it to penetrate deep into the dermis and subcutaneous layers. UVA can induce the production of reactive oxygen species (ROS), which causes photoaging. UVB has short waves and only about 5-10% can reach the surface; it can be absorbed by the epidermis and part of the dermis. UVB radiation can cause
Study group
At the start of the experiment, rats were randomly divided into three groups, with nine rats in each group. The control (normal) group only had their dorsal skin shaved from time to time, as did the rats of the other groups. In the UVA group, the dorsal skin of rats received UVA exposure with a total dose of 840 mJ/cm2. In the UVB group, the dorsal skin of rats received UVB exposure with a total dose of 840 mJ/cm2.
Experimental design
This research is a true experimental study using a post-test-only control group design.
Sample collection
Thirty male rats of the Wistar strain aged 10-12 weeks with a body weight of 150-250 grams were obtained from the Pharmacology Laboratory of Brawijaya University. Rats were randomly divided into 3 treatment groups, with 9 rats in each group: a control/normal group (rats without UVA or UVB irradiation), a UVA group (rats exposed to UVA) and a UVB group (rats exposed to UVB). All rats were maintained under standard environmental conditions, consisting of a temperature of 25 ± 2°C, a relative humidity of 50 ± 5%, and a 12 h light/dark cycle, and were given standard food and water ad libitum. Rats were adapted for one week before treatment. In this study, the dorsal surface of the rat skin was shaved with a razor over an area of 5 × 5 cm2 and was kept hairless during the experimental period. The frequency of exposure was three times a week (Monday, Wednesday and Friday) for four weeks. Radiation exposure was carried out using a Nomoy Pet 25W UVA lamp with a wavelength of 320-400 nm and a Philips TL 20W/12 RS SLV/25 UVB lamp with a wavelength of 290-320 nm. The details of the exposure dose were 50 mJ/cm2 in the first and second weeks, 70 mJ/cm2 in the third week and 80 mJ/cm2 in the fourth week, bringing the total dose to 840 mJ/cm2. Throughout the experimental period, the skin of the rats' backs was photographed 2 times per week. Wrinkle formation was evaluated and measured via ImageJ 1.53e software. Rats were sacrificed using cervical dislocation and skin tissue was collected for histological analysis.
Histology analysis
The shaved rat dorsal skin tissue was collected, fixed with 10% neutral buffered formalin solution (Sigma-Aldrich), embedded in paraffin, cut into 4 μm-thick sections, deparaffinized with xylene, and rehydrated via graded alcohol. Hematoxylin and eosin (HE) staining was used for histological observations of skin structure and the thickness of the epidermis and dermis. HE staining was carried out in several stages, starting with deparaffinization, hydration, hematoxylin staining, eosin staining, and finally dehydration. The preparations were analyzed at 10 random locations per slide using an Olympus CX21 light microscope with 1000x magnification. Each specimen was photographed under a 48MP camera. Histological changes and collagen fiber density were evaluated and measured using ImageJ 1.53e software.
Statistical analysis
Results are presented as mean ± standard deviation. Differences between groups were analyzed by one-way analysis of variance (ANOVA) followed by post hoc Tukey test analysis using SPSS software [SPSS, Version 21.0]. The difference was considered statistically significant when the p-value < 0.05.
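The same statistical pipeline (one-way ANOVA followed by a Tukey HSD post hoc test at α = 0.05) can be sketched with SciPy instead of SPSS. The group measurements below are hypothetical placeholders for illustration, not the paper's data:

```python
# One-way ANOVA + Tukey HSD post hoc test, mirroring the analysis described
# above. Group values are hypothetical epidermal-thickness measurements (µm).
from scipy import stats

control = [18, 20, 19, 21, 18, 19]
uva     = [32, 34, 33, 31, 35, 33]
uvb     = [39, 41, 40, 38, 42, 40]

# One-way ANOVA: is at least one group mean different?
f_stat, p_anova = stats.f_oneway(control, uva, uvb)

# Tukey HSD: which specific pairs of groups differ?
tukey = stats.tukey_hsd(control, uva, uvb)

alpha = 0.05
print(f"ANOVA p = {p_anova:.3g}, significant = {p_anova < alpha}")
# tukey.pvalue is a 3x3 matrix of pairwise p-values (group i vs group j)
print(f"control vs UVB p = {tukey.pvalue[0, 2]:.3g}")
```

With clearly separated group means, both the omnibus ANOVA and the pairwise comparisons come out significant at p < 0.05.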
Macroscopic assessment
The process of wrinkle formation can be assessed using a grading scale for the evaluation of skin wrinkles (macroscopic visual assessment of photoaging) [8]. The macroscopic appearance of skin wrinkling in rats in the final week of the study for each group was photographed, recorded, and displayed [Figure 1(a)]. Wrinkle formation was quantified using ImageJ 1.53e analysis software [Figure 1(b)]. During the study period, the control (normal) group showed light wrinkling of 10% (p<0.05), which corresponds to the age of 10-12 weeks (young adult) in rats. The UVA group showed little damage to the surface of the rat skin; the skin looked dry with rough wrinkles at 31% (p<0.05). The UVB group showed dry skin and rougher wrinkles than the other groups, at 51% (p<0.05).
Microscopic assessment
Assessment of epidermal thickness
Epidermal thickness is one of the histological parameters that reflects skin damage due to ultraviolet light. The control (normal) group showed an epidermal thickness with a mean value of 19 µm. The UVA group showed an epidermal thickness with a mean value of 33 µm (p<0.05). The UVB group showed an epidermal thickness with a mean value of 40 µm (p<0.05).
Assessment of dermal thickness
HE staining of the dermis layer showed that the control (normal) group had a dermal thickness with a mean value of 615 µm. The UVA group showed a dermal thickness with a mean value of 508 µm (p<0.05). The UVB group showed a dermal thickness with a mean value of 283 µm (p<0.05). The UVA group showed a collagen density in the dermis with a mean value of 80% (p<0.05). The UVB group showed a collagen density in the dermis with a mean value of 55% (p<0.05).
DISCUSSION
The most important benefit of ultraviolet light is that it helps the synthesis of vitamin D in the body. However, long-term exposure to ultraviolet (UV) light can damage the structural integrity, physiological function and barrier properties of the skin, resulting in photoaging. Photoaging is a skin aging process in the form of photodamage induced by sun exposure. Interest in photoaging has grown with people's increased awareness of skin aging. Various efforts have been made to prevent skin aging, and many studies have been conducted and dedicated to skin health and beauty. Skin aging factors can be classified as intrinsic or extrinsic. Extrinsic aging is caused by external factors, such as exposure to ultraviolet radiation, food and chemicals (cigarettes), thus increasing skin damage such as sagging, wrinkle formation, and skin roughness. This study aims to identify and compare photoaging rat models exposed to UVA and UVB.
The appearance of wrinkled, rough, sagging skin is an aging process due to direct exposure to ultraviolet light, which is also related to the thickness of the epidermis, the thickness of the dermis, and collagen fibers. In this study, we demonstrated the effect of ultraviolet radiation on the control group, UVA group and UVB group. Epidermal thickness is one of the histological parameters used to determine the extent of skin damage due to exposure to ultraviolet light. On macroscopic assessment, exposure to UVB rays showed that the degree of wrinkling on the skin surface was more severe than with UVA rays. These results are in accordance with the research of Feng et al (2014) and Wang et al (2019), which showed that UVB rays have a higher energy so that UVB rays can cause wrinkles earlier [9,10]. In the interim, exposure to UVA rays necessitates an extended duration of exposure and a considerably greater dosage of radiation (10-100X) to result in more severe damage.
The assessment of the epidermal thickness within 4 weeks (840 mJ/cm2) showed that exposure to UVB light can cause thickening of the epidermal layer of the skin faster than exposure to UVA light. These findings align with the research conducted by Kim et al. (2016) and Wang et al. (2019), which showed that exposure to UVB rays deposits a higher amount of radiation in the epidermis and part of the dermis [10,11]. Meanwhile, UVB rays possess higher energy levels and exhibit greater carcinogenic potential, thereby leading to premature damage in the skin exposed to these rays.
The assessment of the dermis thickness within 4 weeks (840 mJ/cm2) showed that exposure to UVB rays can reach the dermis layer and can cause damage to the dermis layer faster than UVA rays. These results are consistent with research by Hidayati et al (2015) and Maeda (2018), which showed that low-dose acute UVB exposure can change skin immunity and activate keratinocytes to secrete IL-10, so that UVB rays can cause immunosuppression that affects the structure of the dermis layer [12,13]. UVB exposure can penetrate the dermis layer and only requires a dose of 30-50% of the total UV dose needed to cause damage. Meanwhile, UVA exposure requires a longer time and a larger dose of radiation to cause the same effect.
Skin aging caused by exposure to ultraviolet light increases collagen breakdown and decreases collagen synthesis, resulting in an overall reduction in collagen levels. Collagen is the main component of the extracellular matrix (ECM), which is responsible for maintaining tensile strength, wrinkle formation and skin resilience, and can be directly degraded by exposure to ultraviolet light. The assessment of collagen density shows that exposure to UVB light can affect collagen regularity in the dermis layer more than UVA light. These results are in accordance with research by Hidayati et al (2015), Maeda (2018) and Wang et al (2019), which showed that exposure to UVB rays can penetrate up to the dermis layer and can cause collagen degradation [10,12,13]. Meanwhile, exposure to UVB rays can penetrate up to a thickness of 300 µm.
CONCLUSION
During a consistent time frame of four weeks and under an identical radiation dose of 840 mJ/cm2, diverse UV wavelengths can elicit varying effects on the skin of rats exposed to ultraviolet light. Compared to UVA light exposure, the skin of rats shows an earlier onset of damage when exposed to UVB light within a brief timeframe. Consequently, UVB light exposure can serve as a valuable reference for developing photoaging models during short-term exposure (acute, subacute, and subchronic phases).
Figure 1. Wrinkle formation on rat skin exposed to ultraviolet light. Rats were irradiated with UVB (840 mJ/cm2) three times a week for four weeks. (a) Skin macroscopic appearances at the end of the experiment period. (b) The quantitative analysis of wrinkles (grading of macroscopic visual rat skin exposed to ultraviolet light). Data are presented as mean ± SD. Data with different notation in the same chart implied a significant difference (p < 0.05).
A reduced density of collagen fibers and an irregular arrangement of collagen fibers are a manifestation of skin damage due to ultraviolet light [Figure 4(a)]. The control (normal) group showed a collagen density in the dermis with a mean value of 95% [Figure 4(b)].
Figure 2. Assessment of epidermal thickness. (a) Dorsal skin sections were stained with hematoxylin and eosin (H&E) (microscope magnification 1000×). (b) The quantitative analysis of epidermal thickness. Data are presented as mean ± SD. Data with different notation in the same chart implied a significant difference (p < 0.05). SC: stratum corneum; EP: epidermis; D: dermis; yellow line: epidermal thickness.
Figure 4. Assessment of collagen density. (a) Dorsal skin sections were stained with hematoxylin and eosin (H&E) (microscope magnification 400×). (b) The quantitative analysis of collagen density. Data are presented as mean ± SD. Data with different notation in the same chart implied a significant difference (p < 0.05). C: collagen.
• Ethical approval: The experimental animal protocol was approved by the Ethics Committee of the Faculty of Medicine, Universitas Brawijaya, Indonesia.
• Author's contribution: All authors were involved in all steps of the preparation of this article.
• Conflict of interest: None declared.
• Financial support and sponsorship: The Center for Higher Education Fund (Balai Pembiayaan Pendidikan Tinggi), the Center for Education Services (Pusat Layanan Pendidikan-Puslapdik) and the Indonesia Endowment Fund for Education (Lembaga Pengelola Dana Pendidikan-LPDP) of the Ministry of Education, Culture, Research, and Technology of the Republic of Indonesia (Kementerian Pendidikan, Kebudayaan, Riset dan Teknologi Indonesia) provided financial assistance for this research (BPI ID Number: 202101121613), with research collaboration between Universiti Sains Malaysia (represented by its Advanced Medical and Dental Institute) and Universitas Airlangga Indonesia (represented by its Faculty of Medicine).
Commensal HPVs Have Evolved to Be More Immunogenic Compared with High-Risk α-HPVs
Commensal human papillomaviruses (HPVs) are responsible for persistent asymptomatic infection in the human population by maintaining low levels of the episomal genome in the stratified epithelia. Herein, we examined the immunogenicity of cutaneotropic HPVs that are commonly found in the skin. Using an in silico platform to determine human leukocyte antigen (HLA)–peptide complex binding affinity, we observed that early genes of cutaneotropic HPV types within the same species can generate multiple conserved, homologous peptides that bind with high affinity to HLA class I alleles. Interestingly, we discovered that commensal β, γ, μ, and ν HPVs contain significantly more immunogenic peptides compared with α-HPVs, which include high-risk, oncogenic HPV types. Our findings indicate that commensal HPV proteins have evolved to generate peptides that better complement their host’s HLA repertoire. Promoting higher control by host T cell immunity in this way could be a mechanism by which HPVs achieve widespread asymptomatic colonization in humans. This work supports the role of commensal HPVs as immunogenic targets within epithelial cells, which may contribute to the immune regulation of the skin and mucosa.
Introduction
Human papillomavirus (HPV) is a non-enveloped, icosahedral, double-stranded, circular DNA virus that can infect cutaneous and mucosal epithelia [1]. There are approximately 200 different HPV types, which are divided into five genera: α, β, γ, µ, and ν. HPV taxonomy is based on the nucleotide sequence of the most conserved gene in the HPV genome, L1, which encodes the capsid protein. The original classification of HPVs establishes that if two HPVs have more than 70% nucleotide homology in the L1 gene, they belong to the same type, whereas if they share more than 60% nucleotide homology, they belong to the same species [2]. More recently, it has been determined that a novel HPV type shares less than 90% similarity with any other existing HPV type [3][4][5]. Despite multiple studies on their potential link to carcinogenesis, commensal HPVs belonging to the β and γ genera have not been shown to directly cause cancer in skin or mucosal sites. A "hit-and-run" hypothesis suggests that β-HPVs are involved in the initiation of cutaneous squamous cell carcinomas (cSCC) but become dispensable in the progression of carcinogenesis. In fact, β-HPVs have been found to be more abundant in the cSCC precursors, actinic keratoses (AK), than in cSCCs [6]. Interestingly, we have found that mouse papillomavirus (MmuPV1) infection plays a protective role against cSCC development in immunocompetent hosts [7]. Our research has demonstrated that T cell immunity to commensal papillomaviruses suppresses skin cancer development in immunocompetent hosts, and the loss of this immunity, rather than the oncogenic effect of cutaneotropic HPVs, causes the increased risk of skin cancer in immunosuppressed patients [7]. Consistent with these findings, β-HPV E7 peptides activate skin-resident CD8+ T cells in the normal skin of immunocompetent individuals [7]. Cutaneous virome research is a growing field, but there are limited data on the viral populations that colonize the human skin. High-throughput sequencing of skin
swab samples from healthy individuals as well as immunosuppressed patients, such as organ transplant recipients (OTRs) and DOCK8-deficient patients, shows that even across individual samples, there is great diversity in HPV types [8,9]. Immunosurveillance prevents an outburst of cutaneous warts caused by HPVs by keeping these viruses at bay with low-copy replication in the basal layer. When this immunosurveillance is lacking, such as in OTRs [10], WHIM syndrome patients [11], DOCK8-deficient patients [12], epidermodysplasia verruciformis patients [1], and other inborn errors of immunity [13,14], susceptibility to warts and expansion of the viral flora, including HPVs, become evident.
Understanding the relationship between commensal HPVs and tissue homeostasis requires the study of the interactions between HPV and host immunity. However, this understanding is very limited compared to that of the well-studied high-risk α-HPVs [15]. In this study, we used NetMHCpan 4.1 as a sequence-based platform for peptide-human leukocyte antigen (HLA) class I binding affinity predictions. It is a well-established, experimentally validated platform with high accuracy that uses artificial neural networks (ANNs) and has been used in the study of peptide vaccine design, immunogenicity predictions, and host immunity to pathogens [16]. Using an in silico-based approach, we sought to determine the immunogenicity of commensal HPVs compared to high-risk α-HPVs. Interestingly, we found that β- and γ-HPVs have evolved to generate significantly more immunogenic peptides compared with α-HPVs, including high-risk α-HPV types.
Sequences and HLAs
HPV sequences were obtained from the National Institute of Allergy and Infectious Diseases "PaVE: The Papillomavirus Episteme" database. HPV sequences are included in Supplementary Material Table S6 "HPV Sequences.csv". Cutaneotropic HPV types found in common or recalcitrant warts of OTRs, as well as α-HPVs, were selected for this study.
We chose 18 HLA class I alleles for our study (6 HLA-A, 6 HLA-B, and 6 HLA-C) that had the highest frequency in the Caucasian-identifying US population. Frequencies and HLAs are listed in Table S1. These were obtained from the "Be The Match" HLA frequency registry (https://bioinformatics.bethematchclinical.org/hla-resources/haplotypefrequencies/high-resolution-hla-alleles-and-haplotypes-in-the-us-population/ (accessed on 1 January 2022)).
HPV Protein Homology
For each of the four early HPV proteins (E1, E2, E6, and E7), we performed pairwise alignments between all possible pairs of viral protein sequences from two different HPV types (excluding comparisons within the same virus), resulting in a total of 1225 alignments (50 sequences from each virus, giving 50 × 49/2 alignments). The alignments were obtained using the Needleman-Wunsch dynamic programming algorithm implemented in Biopython 1.79 [17]. We used the BLOSUM62 substitution matrix [18][19][20][21][22] and set the gap existence penalty to −11 and the gap extension penalty to −1. The percent identity between two sequences was calculated as the number of identical amino acids in the aligned regions.
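The percent-identity computation above can be illustrated with a minimal, self-contained sketch. The study itself uses Biopython's Needleman-Wunsch with BLOSUM62 and affine gaps (open −11, extend −1); this toy version uses a simplified scoring scheme (match +1, mismatch −1, linear gap −2) purely for illustration:

```python
# Minimal Needleman-Wunsch global alignment plus percent identity.
# Simplified assumptions: toy scores instead of BLOSUM62, linear (not affine)
# gap penalty. Not the paper's exact Biopython pipeline.

def global_align(a: str, b: str, match=1, mismatch=-1, gap=-2):
    """Return the two globally aligned strings (gaps as '-')."""
    n, m = len(a), len(b)
    # Fill the DP score matrix
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Traceback, preferring the diagonal move
    ai, bi, i, j = [], [], n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            ai.append(a[i - 1]); bi.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            ai.append(a[i - 1]); bi.append("-"); i -= 1
        else:
            ai.append("-"); bi.append(b[j - 1]); j -= 1
    return "".join(reversed(ai)), "".join(reversed(bi))

def percent_identity(a: str, b: str) -> float:
    """Identical aligned positions as a percentage of alignment length."""
    x, y = global_align(a, b)
    identical = sum(1 for p, q in zip(x, y) if p == q and p != "-")
    return 100.0 * identical / len(x)

print(percent_identity("AAAA", "AAGA"))  # 75.0
```

For real protein work, Biopython's `Align.PairwiseAligner` with a BLOSUM62 substitution matrix would replace this toy scorer; the identity-counting step stays the same.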
HPV Immunogenic Peptide Predictions
To identify immunogenic peptides that bind to at least one of the 18 HLA complexes, we utilized the local Darwin version of NetMHCpan-4.1 [16]. For ease of use, we employed a Python wrapper called mhctools 1.8.1, available at https://github.com/openvax/mhctools (accessed on 1 January 2022). The list of immunogenic peptides was sorted based on their binding affinities, and peptides with affinities stronger than 500 nM were considered. To avoid duplication, any peptides that showed strong binding affinities to multiple HLA complexes were included only once. Although a more recent platform for peptide-HLA binding affinity predictions exists [23] that takes into consideration structural interactions in addition to sequence information, we found NetMHCpan to be sufficient for our comparisons of relative binding affinities of selected HPV proteins.
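The post-processing described above (threshold at 500 nM, sort by affinity, de-duplicate peptides that bind several HLA alleles) can be sketched as follows. The prediction tuples are hypothetical; in the actual pipeline they would come from NetMHCpan via mhctools:

```python
# Sketch of the filtering/de-duplication step described above.
# Input tuples (peptide, hla_allele, affinity_nM) are made-up examples.

AFFINITY_THRESHOLD_NM = 500.0

def strong_binders(predictions):
    """predictions: iterable of (peptide, hla, affinity_nM).
    Returns unique strong-binding peptides, strongest (lowest nM) first."""
    strong = [p for p in predictions if p[2] < AFFINITY_THRESHOLD_NM]
    strong.sort(key=lambda p: p[2])
    seen, unique = set(), []
    for peptide, hla, aff in strong:
        if peptide not in seen:  # count a peptide once across HLA alleles
            seen.add(peptide)
            unique.append(peptide)
    return unique

preds = [
    ("KLPDLCTEL", "HLA-A*02:01", 42.0),
    ("KLPDLCTEL", "HLA-B*07:02", 310.0),  # same peptide, second allele
    ("YMLDLQPET", "HLA-A*02:01", 850.0),  # weaker than 500 nM, filtered out
    ("TLHEYMLDL", "HLA-A*01:01", 120.0),
]
print(strong_binders(preds))  # ['KLPDLCTEL', 'TLHEYMLDL']
```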
HPV Immunogenic Epitope Clustering
For each early HPV protein analyzed (E1, E2, E6, and E7), a peptide match was defined as having >88% identity, meaning at least 8 out of 9 amino acids were identical and in the same positions. The total number of matches was tallied. Our methodology allowed the same peptide from one virus to have multiple matches in other viruses, and each match was counted independently.
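The matching rule above (≥8 of 9 residues identical at the same positions, every cross-virus pair counted independently) reduces to a few lines. The example 9mers are illustrative only:

```python
# Sketch of the epitope-clustering rule: two 9mers "match" when at least
# 8 of 9 residues are identical at the same positions (>88% identity).

def is_match(p1: str, p2: str, min_identical: int = 8) -> bool:
    assert len(p1) == len(p2) == 9
    return sum(a == b for a, b in zip(p1, p2)) >= min_identical

def count_matches(peptides_a, peptides_b):
    """Tally every matching cross-virus pair; the same peptide may
    contribute several matches, each counted independently."""
    return sum(is_match(x, y) for x in peptides_a for y in peptides_b)

print(is_match("RAHYNIVTF", "RAHYNIVTL"))  # True: 8/9 identical
print(is_match("RAHYNIVTF", "RAHYNAVTL"))  # False: 7/9 identical
```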
Phylogenetic Tree Construction
To construct the phylogenetic tree, we used the Multiple Sequence Comparison by Log-Expectation (MUSCLE) tool, available at https://www.ebi.ac.uk/Tools/msa/muscle/ (accessed on 1 January 2022), for multiple sequence alignment. Next, we computed the distance between sequences using the percent identity metric. We then constructed the phylogenetic tree using the neighbor-joining algorithm available in Biopython's Phylo module. The neighbor-joining algorithm produces an unrooted tree, but for the sake of clarity, we rooted our tree. Additionally, we plotted the distance from the hypothetical common shared ancestor (root) against the number of predicted immunogenic peptides. If the same peptide was predicted to be immunogenic with multiple HLA complexes, it was counted only once. The data and code used for this analysis can be found at https://github.com/alyakin314/hpv-immunogenicity.
Early HPV Proteins Have High Amino Acid Sequence Homology at the Species Level
HPV early proteins are required for viral maintenance and are expressed in the basal epithelial layer during persistent infection [24,25]. To determine the amino acid sequence homology between the HPV early proteins E1, E2, E6, and E7, we examined 67 HPV types, which include cutaneous and mucosal types commonly found in the skin and warts, as well as high-risk HPVs (Table S2). Using the Biopython implementation of the Needleman-Wunsch dynamic programming algorithm, we obtained global pairwise alignments between sequences to depict protein homology between the HPV types (Figure 1). There was a significantly higher homology between viruses within each species compared to viruses that belong to different species for E1 (p-value of 2.58 × 10−99), E2 (p-value of 9.87 × 10−100), E6 (p-value of 1.43 × 10−97), and E7 (p-value of 1.83 × 10−98) (Table S3).

Figure 1. Heatmap of percent amino acid sequence homology between early proteins of selected HPV types. Using the Biopython implementation of the Needleman-Wunsch dynamic programming algorithm, we obtained global pairwise alignments between two sequences to depict protein homology between the HPV types we have selected. Each heatmap represents one early protein (E1, E2, E6, and E7). Each small square on the four panels depicts the percent homology between two HPV types. Darker shades of blue represent a higher percentage of homology. For each protein, there were statistically significantly higher homologies when comparing in-species HPV pairs to out-of-species HPV pairs (see text for p-values).
Early HPV Genes within a Species Generate Multiple Conserved, Immunogenic Peptides
After determining the overall protein homology, we investigated the conservation of HPV protein sequences at the peptide level. We used an in silico approach to identify HPV immunogenic peptides that could be presented to CD8+ T cells and trigger an immune response. We selected 18 of the most common HLA class I alleles for HLA-A, HLA-B, and HLA-C in the Caucasian population (Table S1). Using the NetMHCpan software (version 4.1) and 500 nM as the binding affinity threshold, we obtained a list of 9mer peptides that bind to the selected HLAs with the highest affinity. We defined a shared immunogenic peptide as one with at least 88% homology, in other words, with 8/9 identical amino acids in the same order [26]. HPV types within each species shared significantly more immunogenic peptides than HPV types that do not belong to the same species (p-value of 7.98 × 10−99, Figure 2). This holds true for each early protein: E1 (p-value of 1.37 × 10−94), E2 (p-value of 6.06 × 10−95), E6 (p-value of 3.64 × 10−60), and E7 (p-value of 8.73 × 10−31) (Table S4). As opposed to protein homology, immunogenic peptide homology was present to a much lesser degree at the genus level, especially for E6 and E7 (Figure 2). These findings indicate the potential cross-reactivity of T cells primed against antigens of certain HPV types to antigens from other HPV types in the same species.
Figure 2. Heatmap of conserved peptide matches between HPV types that bind with high affinity to HLA class I haplotypes. Using the NetMHCpan software (version 4.1), we obtained a list of 9mer peptides that bind to our selected HLAs with less than 500 nM binding affinity. We defined a shared immunogenic peptide as one with at least 8/9 or 9/9 identical amino acids, where order matters. Darker shades of blue represent a higher number of conserved peptides between the HPV pair. The scale in this figure has been log-transformed. Each small square on the four panels depicts how many immunogenic peptides are shared between each pair of our selected viruses. Each heatmap represents one early protein (E1, E2, E6 and E7). For each protein, there were statistically significantly more peptide matches when comparing in-species HPV pairs to out-of-species HPV pairs. The statistical significance is preserved when the counts are added for all four proteins (see text for p-values).
Commensal HPVs Generate More Immunogenic Peptides Compared with α-HPVs
β-HPV evolution is largely unexplored but could give us important clues with regard to its potential for carcinogenesis, as well as its role in cutaneous immune homeostasis. As such, we sought to compare the immunogenicity of α-HPVs with that of other genera, including β, γ, µ, and ν, which are the etiologic agents of benign cutaneous warts. We obtained a list of immunogenic 9mer peptides as previously described and constructed a phylogenetic tree to determine the relationship between phylogenetic distance and immunogenicity between the HPV genera examined in this study. Statistically, when using a linear model and regressing the number of immunogenic peptides on the phylogenetic distance, we obtain a p-value < 0.001, demonstrating that the number of immunogenic peptides is dependent on the phylogenetic distance of the respective virus from a common ancestor. Our findings also show that HPV genera that are largely cutaneotropic (β, γ, µ, ν) have a larger phylogenetic distance from a common ancestor than α-HPVs, including the high-risk, carcinogenic HPVs. In addition to phylogenetic distance, these viruses cluster by the number of immunogenic peptides that they generate through in silico predictions of peptide-HLA binding affinities (Figure 3). Our findings suggest that commensal HPVs have evolved to generate more immunogenic peptides compared with α-HPVs (p-value of 1.31 × 10⁻⁷) (Table S5). It can also be observed that high-risk α-HPVs, including ones included in the current HPV vaccines, have a similar number of immunogenic peptides. However, the HPV vaccines use L1 capsid peptides for the purpose of generating neutralizing antibodies against infection with high-risk α-HPVs [27]. Interestingly, our findings reveal that novel T cell-directed immunotherapies may be effective in the treatment of cancers associated with certain high-risk α-HPVs (e.g., HPV 51) with a higher number of immunogenic peptides.
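The counting step described above can be sketched in Python. NetMHCpan itself is an external tool, so the affinity function below is a hypothetical stand-in; only the 9mer enumeration and the 500 nM filtering logic mirror the text, and the sequence is a toy example, not a real HPV protein.

```python
# Sketch of counting immunogenic 9mers from a protein sequence. The affinity
# predictor below is a stand-in for NetMHCpan, which is run externally.

def nine_mers(protein_seq):
    """Enumerate all overlapping 9mer peptides of a protein sequence."""
    return [protein_seq[i:i + 9] for i in range(len(protein_seq) - 8)]

def predicted_affinity_nm(peptide, hla_allele):
    # Hypothetical stand-in for a NetMHCpan prediction (in nM); NOT a real model.
    return 100.0 if peptide[1] in "LM" else 5000.0

def count_immunogenic(protein_seq, hla_alleles, threshold_nm=500.0):
    """Count 9mers binding at least one HLA allele below the 500 nM cutoff."""
    count = 0
    for pep in nine_mers(protein_seq):
        if any(predicted_affinity_nm(pep, hla) < threshold_nm for hla in hla_alleles):
            count += 1
    return count

toy_seq = "MHGDTPTLHEYMLDLQPETTDLYCYEQL"  # illustrative sequence only
print(count_immunogenic(toy_seq, ["HLA-A*02:01"]))
```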
Discussion
Using an in silico approach, we report that early HPV protein homology translates to the inter-species conservation of immunogenic peptides that bind several HLA class I alleles with high affinity. We also show an evolution of cutaneotropic HPVs toward a more immunogenic phenotype. An evolutionary and epidemiological framework has been used by many in the field of HPV and has been argued to be crucial in the study of HPV functional differences and carcinogenic properties [28]. We propose that investigating immunogenicity at the peptide level across HPV genera using an evolutionary framework is essential to support functional studies in elucidating mechanisms of HPV immune evasion, carcinogenesis, and commensalism.
Cutaneous warts in OTRs represent a unique scenario that sheds light on the relationship between human host immunity and papillomaviruses in adults. In fact, OTRs develop immunosuppression later in life on an immunocompetent background for years, during which they have developed strong T cell responses to the HPV population colonizing their skin. T cells generate an immunodominance hierarchy wherein T cell immunity to some epitopes is stronger compared to others [29]. It can, therefore, be hypothesized that the specific HPV types found in the warts of OTRs overrepresent the epitopes for which this immunodominance has developed. Future research looking into the skin virome, especially the HPV diversity in the skin of OTRs, is crucial. Our in silico data suggest that regardless of the variability in HPV types colonizing different individuals' skin, the diversity of T cell clones recognizing HPV antigens is limited. This is an important concept when considering vaccine development against cutaneotropic HPVs to boost HPV-specific T cell responses, which may protect patients from wart and cSCC development [7]. Our findings suggest that although the diversity of HPVs colonizing the skin is high, we may be able to achieve a broad anti-HPV protective immunity by immunizing individuals to a few representative peptides of the major HPV species found in the population. Our in silico-based approach using NetMHCpan for peptide-HLA binding affinity determination has identified immunogenic peptides that have been verified to produce a cytotoxic CD8+ T cell response experimentally, increasing confidence in the validity and applicability of our findings [30].
In conclusion, our findings reveal that commensal HPVs may be more than quiet riders in our epithelial cells and may serve as immunogenic targets within the normal human epithelia that can be harnessed as therapeutic targets. Investigating host immunity to these viruses in human skin and mucosa is essential to understand their relationship with cancer development, especially in OTRs, as well as understanding the susceptibility of epidermodysplasia verruciformis and other immunodeficient patients to wart and cancer development.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/vaccines12070749/s1. Table S1: HLA class I alleles that are included in the study. Table S2: HPV types that are included in the study. Table S3: Summary statistics for the homology of in-species and out-of-species pairs of HPV viruses. Table S4: Summary statistics for the immunogenic peptide matches of in-species and out-of-species pairs of HPV viruses. Table S5: Summary statistics for the immunogenic peptide counts for alpha versus commensal (β, γ, µ, ν) HPV viruses. Table S6: HPV sequences.
Figure 1. Heatmap of percent amino acid sequence homology between early proteins of selected HPV types. Using the Biopython implementation of the Needleman-Wunsch dynamic programming algorithm, we obtained global pairwise alignments between two sequences to depict protein homology between the HPV types we have selected. Each heatmap represents one early protein (E1, E2, E6, and E7). Each small square on the four panels depicts the percent homology between two HPV types. Darker shades of blue represent a higher percentage of homology. For each protein, there were statistically significantly higher homologies when comparing in-species HPV pairs to out-of-species HPV pairs (see text for p-values).
Figure 3. Relationship between HPV immunogenicity and phylogenetic distance. The x-axis represents the phylogenetic distance. We first used multiple sequence comparison by log-expectation (MUSCLE) as the multiple sequence alignment tool. We then used the standard procedure of first computing the distance using the percent identity homology and then using the neighbor joining tree construction algorithm, both available as a part of Biopython's Phylo module. The y-axis represents the number of 9mer peptides that have a high binding affinity to the human leukocyte antigens (HLAs) selected in our study. This analysis was performed using the NetMHCpan software (version 4.1) and 500 nM as our binding affinity threshold. The high-risk carcinogenic α-HPVs, as defined by the NIH National Cancer Institute, include HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59. Two ellipsoids with centroid cluster centers were obtained by unsupervised Gaussian mixture model clustering with two components and full covariance. There is a linear relationship of y = 231.76x + 132.77, with a statistically significant coefficient (t-statistic: 5.653, p-value < 0.001). Comparing α-HPVs versus other HPV genera yields a Mann-Whitney U test p-value of 1.31 × 10⁻⁷.
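For readers who wish to reproduce the caption's statistics (a least-squares line of immunogenic-peptide count on phylogenetic distance, and a rank-based comparison between genera) on their own data, a minimal NumPy sketch follows. The arrays here are synthetic illustration data, not the paper's values, and the U statistic is computed by hand rather than with a dedicated stats package.

```python
import numpy as np

rng = np.random.default_rng(0)
distance = rng.uniform(0.2, 1.2, size=40)                       # phylogenetic distance
counts = 231.76 * distance + 132.77 + rng.normal(0.0, 20.0, 40) # peptide counts (synthetic)

# Least-squares line y = a*x + b, as in the caption's y = 231.76x + 132.77.
a, b = np.polyfit(distance, counts, 1)
print(f"slope={a:.1f}, intercept={b:.1f}")

# Rank-based comparison of two groups (the paper uses a Mann-Whitney U test;
# here the U statistic is computed directly for illustration).
low = counts[distance < 0.7]    # stand-in for the α-HPV group
high = counts[distance >= 0.7]  # stand-in for the β/γ/µ/ν group
u = sum((x > y) + 0.5 * (x == y) for x in high for y in low)
print(f"U={u:.1f} of max {len(low) * len(high)}")
```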
|
2024-07-10T15:19:48.868Z
|
2024-07-01T00:00:00.000
|
{
"year": 2024,
"sha1": "e5a42020da10a9dae51b29b3ea8c8247ee02f8eb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/vaccines12070749",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09507e7fe2d4d01d78b99888ae5ac5420a574119",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
}
|
245320797
|
pes2o/s2orc
|
v3-fos-license
|
Machine learning in image analysis in ophthalmology
Macular degeneration is the leading cause of irreversible blindness in developed countries in individuals aged over 50 years.(1) Aiming to better diagnose and monitor the disease, algorithms have been developed to detect lesions in optical coherence tomography (OCT). The ability of machine learning algorithms to detect OCT lesions may already be comparable to that of retina specialists.(2) The theory of machine learning consists of simulating tiny synapses of a human brain. Neural networks were inspired by human synapses and are mathematical models applied in pattern classification and recognition.
As in human learning, computers must be exposed to data to learn through examples. Neural networks enable this learning and the application of knowledge in the classification of unknown images. The main features taught to the computer for image analysis are colors, shapes, location, and contrast. The neural network is trained to activate different outputs for various images presented during training. After each image presented, an internal weight is provided, which strengthens certain "synaptic connections".
The training process includes presenting images that are randomly separated into three groups: training, validation, and verification. The training group is used to adjust the weight of connected networks. The validation group is used to determine the best moment to finish training, and the verification group is subsequently applied, defining the performance of the algorithm.(3) Studies on this new interpretation of computational patterns can improve the understanding of diseases, besides increasing the confidence of physicians in the diagnosis aided by machine learning techniques.
A study developed by Xu et al. required 654 spectral domain OCT images of patients with macular degeneration to have 96% accuracy in the identification of intraretinal fluids.(4) Another study, developed by Chakravarthy et al., required 155 spectral domain OCT images to achieve 93% accuracy in the identification of intraretinal fluids.(5) The study by Kermany et al. required 207,130 OCT images to have 96.6% accuracy in the identification of drusen.(6) The study by Khalid et al. required 6,800 OCT images to achieve 98% accuracy in the detection of drusen(7) (Table 1).
How to cite this article: Martins TG, Schor P. Machine learning in image analysis in ophthalmology. einstein (São Paulo). 2021;19:eED6860.
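The random three-way split into training, validation, and verification groups described above can be sketched as a partition of image indices. The 70/15/15 proportions are illustrative assumptions, not taken from the cited studies.

```python
import random

def three_way_split(n_images, frac_train=0.7, frac_val=0.15, seed=42):
    """Randomly partition image indices into training/validation/verification."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)           # reproducible random shuffle
    n_train = int(n_images * frac_train)
    n_val = int(n_images * frac_val)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    verify = idx[n_train + n_val:]             # remainder defines performance
    return train, val, verify

# e.g., the 654 OCT images of the Xu et al. study
train, val, verify = three_way_split(654)
print(len(train), len(val), len(verify))
```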
It is therefore understood that the algorithms required more images of drusen than of intraretinal fluid to learn the pattern. This can possibly be explained by the differences in size, location, and contrast between these two groups of lesions studied in OCT images. Machine pattern recognition involves attribution techniques with as little human intervention as possible.(3) Computer vision uses pattern recognition. The classification model is usually based on the availability of a set of patterns that were used in the group of training images. The algorithm learning methodology begins with the assignment of random weights for learning, using the characteristics of the objects employed in the training set. The model adjusts the weights to obtain a correct image classification. The weight interaction is adapted according to the principle of "punishment/reward".(8) This method is used in humans from birth to recognize the objects that surround us. This learning capacity has been developed over thousands of years of evolution, and has allowed humans to recognize food and predators appropriately. In the process of image recognition by the computer, an initial image segmentation occurs and, later, the extraction of the characteristics to be analyzed. In image segmentation, the object to be recognized is isolated from the rest of the image, and during the extraction of the characteristics, attribute vectors are assigned, decreasing the amount of information needed to classify it.(9) It is interesting to be able to learn from algorithms how to identify patterns that are not naturally valued by humans.
As an example of the image segmentation methodology, there is the segmentation of grayscale thresholds, used to establish the limits of the image.(9) After the image segmentation process, algorithms begin to extract relevant features to decrease the computational power required during the classification process. This information, embedded into the process of developing attribute vectors, allows the development of algorithms that need less computational power to learn how to classify images.
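A minimal sketch of the grayscale-threshold segmentation mentioned above: pixels above a threshold are marked as object, the rest as background. The image here is a synthetic array standing in for an OCT scan, and the threshold value is arbitrary.

```python
import numpy as np

def threshold_segment(image, threshold):
    """Return a binary mask: True where the pixel exceeds the threshold."""
    return image > threshold

image = np.zeros((6, 6), dtype=np.uint8)
image[2:4, 2:5] = 200                 # a bright "lesion" on a dark background
mask = threshold_segment(image, 128)  # isolate the object from the rest
print(int(mask.sum()))                # number of pixels assigned to the object
```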
Algorithms require a large number of images from which to learn, requiring data from different populations, which creates a current problem in training algorithms for detecting rare diseases.(10) In the unsupervised learning process, in which there is no teacher to determine whether the output response is satisfactory, it is possible to learn from algorithms, improving understanding of how the weights they assign to certain decisions differ from those of humans. This may be assessed by heat maps that indicate how important each image location is for the algorithm's classification. This technique enables visualizing the parts of the image that are most important for the classification by the deep neural network. This provides further confirmation that the algorithm is, in fact, identifying the area of the photo that is important for diagnosis (Figure 1).
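The heat-map idea can be illustrated with a simple occlusion sweep: mask each image patch in turn and record how much a classifier score drops, so large drops mark regions the classifier depends on. This is a sketch of the general technique, not the specific method of the cited work; the "model" below is a toy stand-in that scores brightness in the lesion area.

```python
import numpy as np

def toy_score(image):
    # Stand-in for a network's output: mean brightness of the lesion region.
    return float(image[2:4, 2:5].mean())

def occlusion_heatmap(image, score_fn, patch=2):
    """Score drop when each patch is masked; higher means more important."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0   # mask out a patch
            heat[i, j] = base - score_fn(occluded)   # score drop = importance
    return heat

image = np.zeros((6, 6))
image[2:4, 2:5] = 1.0                                # synthetic "lesion"
heat = occlusion_heatmap(image, toy_score)
print(np.unravel_index(heat.argmax(), heat.shape))   # most important patch
```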
Advances in the development of algorithms for image analyses have therefore proven promising in many areas of medicine, such as ophthalmology.
|
2021-12-19T16:09:08.674Z
|
2021-12-14T00:00:00.000
|
{
"year": 2021,
"sha1": "991eeef99abbdcaa3d2a3620fd053610c331b208",
"oa_license": "CCBY",
"oa_url": "https://journal.einstein.br/wp-content/uploads/articles_xml/2317-6385-eins-19-eED6860/2317-6385-eins-19-eED6860.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab3ea13d7b322ae6993aa593c13d7e268e7b8e75",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
9151487
|
pes2o/s2orc
|
v3-fos-license
|
DNA-Controlled Excitonic Switches
Fluorescence resonance energy transfer (FRET) is a promising means of enabling information processing in nanoscale devices, but dynamic control over exciton pathways is required. Here, we demonstrate the operation of two complementary switches consisting of diffusive FRET transmission lines in which exciton flow is controlled by DNA. Repeatable switching is accomplished by the removal or addition of fluorophores through toehold-mediated strand invasion. In principle, these switches can be networked to implement any Boolean function.
One of the driving forces in the field of nanotechnology is the development of highly compact information processing devices. A potential method for nanoscale circuit construction is the use of individual molecules as circuit elements with an emphasis on bottom-up fabrication techniques and self-assembly. 1 Molecular photonic devices show promise as a means for information processing at the nanoscale. 2,3 Diffusive energy transfer in molecular photonic devices may be achieved between neighboring molecules through fluorescence resonance energy transfer (FRET), which involves the direct transfer of excitonic energy between fluorophores via dipole−dipole coupling. 4 One challenge associated with the implementation of FRET in devices is the precise nanometer-scale positioning of fluorophores into arrangements that promote efficient energy transfer. DNA nanotechnology provides a well-defined, programmable framework for manipulating fluorophores at the molecular level. 1,5−15 Multiple studies have reported spectroscopic techniques for obtaining information concerning the structure and photonic properties of fluorophores bound to DNA molecules. 16−19 For instance, FRET has been used as a means for measuring distances in DNA and RNA helices by binding donor and acceptor fluorophores to specific nucleotides and extrapolating their separation distance from the measured FRET efficiency. 16,19 DNA origami techniques have been used to arrange fluorophores as well, introducing greater structural rigidity and design flexibility to DNA-based FRET devices. 14,15 The ability to dynamically control FRET is essential if it is to be used effectively in circuit design. Hannestad et al. recently reported a FRET-based photonic network in which the excitation energy can be directed to either of two outputs based on the presence of an intercalating dye. 20
Here, we report two DNA-controlled FRET-based switches that were devised to enable programmable dynamic control of excitonic energy flow. The strand invasion process 21 that turns one switch off through the removal of a fluorophore turns the other switch on through the removal of a quencher. A second strand invasion process restores the chromophores, allowing the switches to be repeatedly cycled through their on and off states. The two switches are complementary in that one accomplishes the logical negation of the function carried out by the other switch. Logical AND functionality can be implemented by cascading such switches in series, and logical OR functionality can be implemented by combining such switches in parallel. In principle, such switches can be networked to implement any Boolean function, where the absence or presence of excitonic energy transfer through a switch corresponds to a logical zero or one, respectively, and the output is the absence or presence of a fluorescence signal on the output fluorophore.
To explore the viability of switching in molecular scale photonic circuits, two distinct approaches were employed to enable dynamic control over the switch emission state. The designs, labeled Switch 1 and Switch 2, are illustrated in Figure 1 panels a and b, respectively, and the strand sequences and dye details are provided in the Supporting Information S1. Both switches consist of a serpentine DNA scaffold strand (black) hybridized with three staple strands using eight independent sequence domains, each 14 nucleotides (nt) long, that are separated by crossovers. A fourth "control" strand regulates excitonic energy flow within the switch, as discussed below. One of the staple strands (blue) contains the input dye FAM. Another staple strand (red) contains the output dye Cy5. In Switch 1 (Figure 1a), the control strand is internally functionalized with the intermediate dye TAMRA, which mediates FRET between FAM and Cy5.
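The series/parallel Boolean composition described above can be modeled abstractly in a few lines, treating each switch only as passing (ON) or blocking (OFF) excitonic flow. This sketch captures the logic of the claim, not the photophysics.

```python
def series(*switch_states):
    """Excitons traverse a chain only if every switch is ON (logical AND)."""
    return all(switch_states)

def parallel(*switch_states):
    """Excitons reach the output if any branch is ON (logical OR)."""
    return any(switch_states)

def switch2_from_switch1(switch1_on):
    """Switch 2 realizes the logical complement of Switch 1 (NOT)."""
    return not switch1_on

# With AND, OR, and NOT available, any Boolean function can be composed,
# e.g. XOR from two series branches combined in parallel:
def xor(a, b):
    return parallel(series(a, switch2_from_switch1(b)),
                    series(switch2_from_switch1(a), b))

print(xor(True, False), xor(True, True))
```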
To explore the viability of switching in molecular scale photonic circuits, two distinct approaches were employed to enable dynamic control over the switch emission state. The designs, labeled Switch 1 and Switch 2, are illustrated in Figure 1 panels a and b, respectively, and the strand sequences and dye details are provided in the Supporting Information S1. Both switches consist of a serpentine DNA scaffold strand (black) hybridized with three staple strands using eight independent sequence domains, each 14 nucleotides (nt) long, that are separated by crossovers. A fourth "control" strand regulates excitonic energy flow within the switch, as discussed below. One of the staple strands (blue) contains the input dye FAM. Another staple strand (red) contains the output dye Cy5. In Switch 1 (Figure 1a) To dynamically regulate energy flow, the control strand possesses a 14 nt long toehold sequence that allows it to be removed from the switch by toehold-mediated strand invasion. 21 For Switch 1, the removal strand (Removal 1 in Figure 1a) is complementary to the toehold of the control strand and to 10 of the 14 nucleotides binding the control strand to the switch. When the removal strand fully hybridizes with the control strand, only four nucleotides bind the control strand to the switch scaffold, and the control strand spontaneously dissociates from the scaffold. 22 As illustrated in Figure 1a, with TAMRA removed, FRET-based energy transmission is possible only by direct transfer between FAM and Cy5. On the basis of the ∼5 nm separation and low spectral overlap, the coupling efficiency for direct FAM to Cy5 transfer is low, and Switch 1 is in its OFF state. In order to restore the switch to its ON state, the removal strand contains a 10 nt long toehold allowing it to be separated from the control strand by a second strand invasion, producing an unreactive waste product. 
The return strand (Return 1 in Figure 1a) is complementary to all but five of the removal strand nucleotides. Although this design requires the control strand to spontaneously dissociate from the removal strand, it minimizes the sequence commonality of the control and return strands to only five nucleotides. Thus, direct interaction of the return strand with the scaffold of the switch should be minimal. Once the control strand is displaced from the removal strand, the control strand can rehybridize with the switch scaffold and restore Switch 1 to its ON state. Switch 2 ( Figure 1b) is slightly more complex than Switch 1 and was designed to exhibit the complementary (inverse) behavior. In Switch 2, the third staple strand (green) contains the intermediate TAMRA dye, while the control strand (brown) contains an Iowa Black Red Quencher (IBRQ) at its 3′ end. The IBRQ is positioned within two nucleotides of the output Cy5. Thus, when all five strands are hybridized, Switch 2 is in the OFF state; energy flow from FAM to TAMRA to Cy5 is allowed, but emission from Cy5 is suppressed by energy transfer to IBRQ. Similar to Switch 1, the control strand can be removed by strand invasion with a removal strand (Removal 2 in Figure 1b). With the IBRQ removed from the switch, Switch 2 is in its ON state: excitonic energy can flow from FAM through TAMRA to Cy5, and Cy5 emission is allowed. Restoration of Switch 2 to the OFF state is achieved by strand invasion with a return strand (Return 2 in Figure 1b), similar to Switch 1. On the basis of these complementary switch designs, logical high transitions in one switch correspond to logical low transitions in the other.
All oligonucleotides for the switches were purchased lyophilized from Integrated DNA Technologies, rehydrated in filtered ultrapure water (Milli-Q Water, Millipore), and used without further purification (sequences and manufacturer purification methods are listed in Supporting Information S1). The switches were synthesized through self-assembly by combining the scaffold strand with a 20% molar excess of the staple strands in a solution of 1×TAE, Mg²⁺ (40 mM tris, 20 mM acetic acid, 2 mM ethylenediaminetetraacetic acid (EDTA), and 12.5 mM magnesium acetate; pH 8.0). TAE, magnesium acetate tetrahydrate, and filtered ultrapure water were purchased from Sigma Aldrich. For both switches, synthesis was performed without the control strand, which resulted in better switch performance. Thus, Switch 1 was synthesized in the OFF state, and Switch 2 was synthesized in the ON state. Once combined, the DNA solution was annealed at 90°C for 5 min then cooled to room temperature at ∼0.3°C/min using a thermal cycler (Mastercycler, Eppendorf). The synthesized switches were purified using a 3% agarose gel at 100 V for 120 min. To identify the switch bands, the completed gels were imaged using a multiplexed fluorescence detection and gel documentation system (FluorChemQ, ProteinSimple). The excitation source was selected to excite the FAM dye, and the detection filter was chosen to pass only Cy5 emission, thus allowing clear identification of the band of well-formed FRET-based transmission lines for Switch 2, as shown in Supporting Information S2. By comparing gel bands and using Switch 2 in a control lane, Switch 1 could be located as well, even in the OFF state. Identified switch bands were excised from the gel, and the switches were extracted using Freeze 'N Squeeze columns (Bio-Rad Laboratories). Once extracted, the concentration of switches was quantified by measuring the absorption at 260 nm (BioPhotometer, Eppendorf). On the basis of the measured concentration, a stoichiometric amount of control strand was added to the scaffold solution and allowed to hybridize with the scaffold at room temperature for 30 min. With the control strand added, Switch 1 was in the ON state and Switch 2 was in the OFF state.
Figure 1. (a) The Removal 1 strand (dark green) hybridizes with the control strand, removing the TAMRA dye from the scaffold and interrupting FRET, which switches the device to its OFF state. To restore FRET and return the device to its ON state, the Return 1 strand (orange) hybridizes with the Removal 1 strand, releasing the control strand and allowing the TAMRA dye to rejoin the scaffold. (b) When Switch 2 is in its OFF state, the IBRQ (quencher)-functionalized control strand (brown) is attached to the scaffold, quenching Cy5 emission. When the control strand is displaced by the Removal 2 strand (pink), emission is no longer suppressed and the device enters its ON state. When the control strand is restored via the Return 2 strand (dark orange), emission is once again suppressed, returning the device to its OFF state. The lengths of all strands and toeholds are drawn approximately to scale.
Nano Letters | Letter | dx.doi.org/10.1021/nl3004336 | Nano Lett. 2012, 12, 2117−2122
Dynamic optical switching of the FRET-based transmission lines was characterized using a Cary Eclipse fluorescence spectrophotometer (Agilent Technologies). The transmission lines were excited at a wavelength of 450 nm (falling within the FAM excitation spectrum but outside of the TAMRA and Cy5 excitation spectra), and fluorescence intensity at the Cy5 emission wavelength of 667 nm was monitored over time. This measurement provided a direct probe of the state of the transmission lines. Cyclic transitions between states were achieved by adding removal and return strands in increasing excess concentrations according to m(1.5)^n, where m is the number of moles of the switch and n is the strand injection number. Thus, the first removal strand is injected with a molar excess of 50%. To determine FRET efficiencies for the switches, the FAM dye was excited at 450 nm, and emission spectra for each device state were recorded from 500 to 800 nm. Figures 2 and 3 summarize the results for the switching processes. In order to ensure switching of every available tile, the switching reactions shown in Figure 2 were performed with exponentially increasing concentrations of removal and return strands, as described above. Thus, each switch reaction was nonstoichiometric and involved competing reactions with the previous strands. However, to determine the control strand removal and restoration rate constants listed in Table 1, switching reactions were performed using stoichiometric amounts of all strands, and the data were fit to second-order reaction kinetics, as described in Supporting Information S3. In Figure 2, switching reaction kinetics experiments demonstrate cyclic switching of the transmission state for both switches. For Switch 1 (Figure 2a), the Cy5 fluorescence intensity decreased as the removal strand displaced the control strand and removed the TAMRA from the transmission line.
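The injection schedule m(1.5)^n can be tabulated to check the stated 50% molar excess of the first injection; m is set to 1 relative mole of switch for illustration.

```python
m = 1.0  # moles of switch (relative)
for n in range(1, 4):
    injected = m * 1.5 ** n                     # moles of strand in injection n
    excess_pct = (injected / m - 1.0) * 100.0   # excess over the switch amount
    print(f"injection {n}: {injected:.3f} mol, {excess_pct:.1f}% excess")
```

The first line confirms a 50% excess; subsequent injections grow geometrically so that each new strand can out-compete the previous one.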
When the TAMRA strand was restored, the fluorescent intensity increased to just below its original level. Conversely, for Switch 2 the Cy5 intensity increased when the control strand was displaced and the IBRQ was removed (Figure 2b). Restoring the control strand to Switch 2 caused Cy5 intensity to decrease to approximately its original level. Table 1 lists the average loss in the ON state signal (operational performance) for repeated ON-OFF-ON state transitions, calculated using the stepwise ratios of ON state Cy5 emission intensities as switching was repeated and adjusting for dilution, described in Supporting Information S3. Additionally, state transition rates were calculated using equations for second-order reaction kinetics, as described below and in Supporting Information S3.
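The second-order fit referred to above can be sketched via the integrated rate law for equal initial concentrations, 1/[A](t) = 1/[A]0 + kt, so a straight-line fit of 1/[A] versus t recovers k. The data and rate constants below are synthetic and illustrative, not the paper's values (the actual fits are described in Supporting Information S3).

```python
import numpy as np

a0_true, k_true = 1.0e-7, 2.0e4      # mol/L and L mol^-1 s^-1, illustrative
t = np.linspace(0.0, 5000.0, 50)     # seconds
conc = a0_true / (1.0 + a0_true * k_true * t)   # noise-free synthetic "data"

# Linearize: 1/[A] = 1/[A]0 + k*t, then fit a line to recover k and [A]0.
slope, intercept = np.polyfit(t, 1.0 / conc, 1)
k_fit, a0_fit = slope, 1.0 / intercept
print(f"k ~ {k_fit:.3g} L mol^-1 s^-1, [A]0 ~ {a0_fit:.3g} M")
```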
The emission spectra for each switch in both ON and OFF states are shown in Figure 3. To ensure proper stoichiometry for FRET efficiency calculations, the switches were prepared with all strands required for each state and then purified by agarose gel electrophoresis, as described above. The switch spectra demonstrate emission peaks for each dye in the transmission line and the peak intensities vary between ON and OFF states. Without TAMRA in Switch 1, the TAMRA peak vanishes and the Cy5 peak is diminished (Figure 3a). When IBRQ is absent from Switch 2, the Cy5 peak is more intense relative to Switch 2 with IBRQ ( Figure 3b). Least-squares fitting of the emission spectra with individual dye spectra was used to calculate the overall FRET efficiencies for each switch, as described in ref 10 and in Supporting Information S4 and summarized in Table 1.
Dynamic control of energy transfer was clearly observed for both switch designs (Figure 2). The ratios of ON state Cy5 fluorescence to OFF state fluorescence are listed in Table 1, where it can be seen that Switch 2's ON/OFF ratio is over twice that for Switch 1. This large difference in ON/OFF ratios between switches results from differences in switch designs. For Switch 1, the OFF state was achieved by removal of the intermediate TAMRA, leaving the FAM and Cy5 separated by 14 nt. Despite the small overlap of the FAM and Cy5 emission and excitation spectra, a separation of only 14 nt was insufficient to completely prevent FRET between FAM and Cy5. Evidence for FRET between FAM and Cy5 was also observed in the OFF state spectrum for Switch 1 (Figure 3a), where Cy5 emission was observed when TAMRA was absent from the tile. In contrast, very little Cy5 emission was detected in the OFF state of Switch 2 (Figure 3b). The presence of the IBRQ in the OFF state effectively quenched the Cy5 fluorescence, resulting in a much darker OFF state than that of Switch 1. The dark OFF state can be attributed to the proximity of IBRQ to Cy5 (2 nt), which leads to highly efficient FRET.
In addition to displaying a low ON/OFF state ratio, Figure 2a shows that the ON state intensity for Switch 1 decreased noticeably per cycle. The average ON state intensity decrease per cycle was 6% beyond the 7% decrease expected for dilution when removal and return strands were injected. Since the OFF state intensity decrease matched the expected dilution decrease, the overall ON/OFF ratio of Switch 1 decreased per cycle. This overall intensity decrease may reflect incomplete restoration of the control strand, which could result from incomplete hybridization during two steps: (1) if the return strand did not hybridize with 100% of the removal strands, some control strands may have remained bound to removal strands; (2) if some control strands did not fully rehybridize with the scaffold strands after being released from the removal strands. To ensure complete ON and OFF state transitions, removal and return strands were injected with a 50% molar excess over the strands of the previous state. This process should ensure removal or restoration of every possible control strand, yet unintended interactions or secondary structure formation may be inhibiting control strand restoration. A similar inhibition of control strand restoration was observed for Switch 2, where the ON and OFF state intensities slightly exceeded the values expected from dilution. Although the order of the sequence domains for the control strands of Switch 1 and Switch 2 is reversed, both switches use identical toeholds. Thus, Switch 2 can be expected to display a similar lack of control strand restoration. However, since the control strand of Switch 2 contains the IBRQ, the effect was reversed compared to Switch 1, and the overall fluorescence intensity increased per cycle as an increasing fraction of Switch 2 tiles remained in the ON state. This effect is seen in the cycle gain listed in Table 1, which shows a 2% increase in dilution-corrected fluorescence per cycle.
Differences in the switch designs were also reflected in the state transition rates produced by control strand removal or restoration. The removal rate for Switch 2 was almost six times greater than for Switch 1, while the restoration rates were within a factor of 2 ( Table 1). The higher removal rate for Switch 2 may reflect the fact that the control strand scaffold domain for Switch 2 is four nucleotides shorter than for Switch 1. However, in both cases removal of the control strand is a three strand branch migration process that can be described as a one-dimensional random walk with a mean completion time of n 2 τ, 23 where n is the number of base pairs and τ is the mean step time. Estimating τ to be 50 μs or less 23 yields a maximum walk time of ∼10 ms, which is significantly less than the halftime for state transitions for both switches. Thus, it is unlikely that differences in the control strand removal rates are fully accounted for by the four base pair difference for binding the control strand to the scaffold. An additional key difference in the control strands is that Switch 1's control strand is internally functionalized with TAMRA while Switch 2's control strand is functionalized with IBRQ at its 3′ end. The TAMRA functional group may interact more strongly with the switch scaffold, impeding the branch migration process and reducing the removal rate. Further studies with changes in ion species and concentration, 23 as well as modified control and scaffold domain sequences, are necessary to determine the mechanism for the differences in switch control strand removal.
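The n²τ estimate above is easy to reproduce; a minimal sketch, with τ taken as the 50 μs upper estimate quoted in the text and the function name illustrative:

```python
def branch_migration_time(n_bp, tau_s=50e-6):
    """Mean completion time n^2 * tau of a one-dimensional random walk
    over n_bp base pairs, with tau the mean step time (50 us is the
    upper estimate quoted in the text)."""
    return n_bp ** 2 * tau_s

t_scaffold_14bp = branch_migration_time(14)  # Switch 1 scaffold domain, ~9.8 ms
t_scaffold_10bp = branch_migration_time(10)  # Switch 2 scaffold domain, ~5.0 ms
t_removal_24bp = branch_migration_time(24)   # 24 bp removal duplex, ~28.8 ms
```

Both values are well below the observed state-transition halftimes, supporting the text's conclusion that the base-pair difference alone cannot account for the removal-rate gap.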
In both switches, control strand removal was observed to proceed at a slower rate than control strand restoration, by roughly an order of magnitude for Switch 1 and a factor of 2 for Switch 2. These results are surprising since control strand removal involves only a single strand displacement process, whereas control strand restoration requires both strand displacement and subsequent hybridization. Additionally, as the return strands share part of their sequences with the control strands, it is possible for the return strands to interfere with the restoration of the control strands. Furthermore, the control strand of Switch 1 (Switch 2) hybridizes to the scaffold by 14 (10) bp, while the removal strand binds to the control strand with 24 (22) bp, requiring a longer branch migration process (still estimated to be less than ∼30 ms). 23 In the case of Switch 1, the TAMRA functional unit can still be expected to interact with the removal strand in the same way it would with the scaffold. 24,25 For both switch reactions, the Gibbs free energy for toehold hybridization is sufficiently high that dissociation reactions can be neglected. 23 For both switches, the differences in removal and restoration rates may reflect sequence dependencies, as well as differences in the local reaction environments, since the removal process occurs on the three-helix tile while the restoration process occurs on a single double helix. Further studies beyond the scope of this report are required to elucidate the underlying mechanisms influencing the reaction rates.

The switch designs possess key differences that are evident in the emission spectra from both switches in the ON and OFF states shown in Figure 3. The primary difference between the switches is that for Switch 1, exciton transmission is controlled by the presence or absence of a mediating TAMRA dye, while for Switch 2, output dye emission is controlled by the presence or absence of a quencher.
Comparison of the ON and OFF spectra of Switch 1 illustrates the manipulation of the FRET processes between the dyes. In the ON state, emission peaks are observed from all three dyes. In the OFF state, the TAMRA peak is absent, the FAM peak is increased, and the Cy5 peak is decreased, as expected. Without TAMRA, the increase in FAM emission is expected since excitonic energy transfer from FAM to Cy5 is less efficient than transfer from FAM to TAMRA, based on both spectral overlap and relative proximities. Similarly, the decrease in Cy5 emission is expected without the mediating TAMRA. The behavior of Switch 2 is quite different from that of Switch 1. Since the FRET processes within the switch remain intact for both ON and OFF states, almost no change in the FAM and TAMRA emission peaks is observed between the ON and OFF spectra, and only the emission from Cy5 changes, based on the presence or absence of the IBRQ.
On the basis of a comparison of the ON state emission spectra from the two switches, Switch 1 displayed an overall higher transmission efficiency. The ON state transmission efficiencies were quantitatively determined using least-squares fits to the spectra obtained from linear combinations of switch tile spectra for the individual dyes, similar to the procedure described in ref 10. The fitting coefficients were used to calculate the overall efficiency of energy transfer for each switch in the ON state, as described in Supporting Information S4. The calculated ON state efficiencies are listed in Table 1 and confirm that Switch 1 exhibited higher transmission efficiency than Switch 2. The primary structural difference between the ON states of the two switches is that the transmission dyes in Switch 1 are located on a single double helix with approximately 2.38 nm between donor−acceptor pairs. In contrast, the dyes in Switch 2 are not on a single double helix, so the distance from the TAMRA to the input and output dyes is slightly longer, at approximately 3.11 nm. With the Förster radii of the FAM to TAMRA and TAMRA to Cy5 processes estimated to be 4.98 and 4.6 nm, respectively, 26,27 this increase in distance should produce only a ∼14% decrease in the overall transfer efficiency. However, changes in dye orientation and nonradiative losses may be sufficient to account for the differences in transmission efficiency.
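The distance argument above can be sanity-checked with the ideal point-dipole FRET formula E = 1/(1 + (r/R0)⁶), using the quoted radii and spacings. This naive two-step model ignores orientation effects and assumes equal spacing for both hops, so it lands in the same range as, rather than exactly reproducing, the ∼14% figure:

```python
def fret_efficiency(r_nm, r0_nm):
    """Ideal point-dipole FRET efficiency: E = 1 / (1 + (r/R0)**6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

R0_FAM_TAMRA = 4.98  # Forster radius (nm) quoted for FAM -> TAMRA
R0_TAMRA_CY5 = 4.6   # Forster radius (nm) quoted for TAMRA -> Cy5

def line_efficiency(spacing_nm):
    # Two sequential hops, FAM -> TAMRA -> Cy5, with equal spacing assumed
    return (fret_efficiency(spacing_nm, R0_FAM_TAMRA)
            * fret_efficiency(spacing_nm, R0_TAMRA_CY5))

e_on_switch1 = line_efficiency(2.38)  # donor-acceptor spacing, Switch 1
e_on_switch2 = line_efficiency(3.11)  # longer spacing, Switch 2
decrease = 1.0 - e_on_switch2 / e_on_switch1  # roughly 10-15% in this model
```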
Despite the lower overall transmission efficiency of Switch 2, the ON/OFF ratio (i.e., switching efficiency) is significantly higher (Table 1). This difference results from the differences in effectively suppressing Cy5 emission in the OFF state, as seen in Figure 2. Exciton transfer to the IBRQ quencher nearly eliminates emission from Cy5 in the OFF state of Switch 2. However, in the OFF state of Switch 1, there remains significant direct energy transfer between FAM and Cy5. Despite their low spectral overlap, Cy5 emission in the OFF state of Switch 1 is nearly equal to the Cy5 emission in the ON state of Switch 2, as seen in Figure 3. Thus, although the overall transmission efficiency is lower for Switch 2, the quenching mechanism of Switch 2, dictated by a large spectral overlap between fluorophore and quencher and the proximity of the pair, does provide significantly greater control over the emission state of the transmission line.
Molecular photonic circuits show promise for information processing in nanoscale devices, and FRET is one means for directing excitonic energy flow. In this study, two methods were reported for creating switchable FRET-based excitonic transmission lines using DNA self-assembly. The switches were assembled using DNA origami techniques with a functionalized control strand that was both removable and restorable through toehold-mediated strand invasion. In the complementary switch designs, the control strand either mediates the FRET process or quenches emission from the output dye, making it possible to switch between on and off emission states. It was found that the switch design with quenched output emission exhibited a lower overall transmission efficiency but a significantly greater contrast between the on and off states. Following the work of Vyawahare et al., 10 extension of these switch designs to longer multi-FRET transmission lines and networks should be possible. A switch design in which the FRET process is controlled by simultaneous removal or restoration of multiple intermediate dyes should yield a high efficiency transmission line with high contrast between states. Synthesis of two complementary dynamic transmission lines using DNA self-assembly indicates that it is possible to form nanoscale photonic circuits whose operation can be controlled through molecular programming. These programmable FRET-based switches could enable dynamic control over lasing in optofluidic FRET lasers 28 as well as reaction control in photochemical networks. 29 In principle, the switches reported here can be networked to implement arbitrary Boolean functions, facilitating nanoscale information processing with molecular circuitry.
Supporting Information
Strand sequences and schematic, dye information, gel purification, reaction rate calculations, switch spectra, and FRET efficiency calculations. This material is available free of charge via the Internet at http://pubs.acs.org.
Notes
The authors declare no competing financial interest.
|
2016-05-12T22:15:10.714Z
|
2012-03-08T00:00:00.000
|
{
"year": 2012,
"sha1": "decf89bf474f0acd55c5fb0e3571305df72113aa",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://doi.org/10.1021/nl3004336",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8b269942aa6e5b50a24a26809d1142711af5e97",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
258136000
|
pes2o/s2orc
|
v3-fos-license
|
Plant chemical variation mediates soil bacterial community composition
An important challenge in the study of ecosystem function is resolving how plant antiherbivore chemical defence expression may influence plant-associated microbes and nutrient release. We report on a factorial experiment that explores a mechanism underlying this interplay using individuals of the perennial plant Tansy that vary genotypically in the chemical content of their antiherbivore defenses (chemotypes). We assessed to what extent soil and its associated microbial community versus chemotype-specific litter determined the composition of the soil microbial community. Microbial diversity profiles revealed sporadic effects of chemotype litter and soil combinations. Soil source and litter type both explained the composition of the microbial communities decomposing the litter, with soil source having the more important effect. Some microbial taxa are related to particular chemotypes, and thus intraspecific chemical variation within a single plant species can shape the litter microbial community. Ultimately, however, the effect of fresh litter inputs from a chemotype appeared to act secondarily, as a filter on the composition of the microbial community, with the primary factor being the existing microbial community in the soil.
Understanding what controls the structure and function of terrestrial ecosystems has been greatly enhanced by considering aboveground (plant-based) and belowground (detritus-based) food chains as coupled systems 1 . This conception has given rise to the appreciation that variation in plant functional traits (e.g., nutrient content and anti-herbivore defense expression) can determine variation in the community composition of different trophic compartments (i.e., microbial decomposers, herbivores, carnivores) within ecosystems [2][3][4][5][6] . Compounding this complexity is the growing realization that intraspecific variation in plant functional traits can explain as much variation in food web structure and ecosystem functioning as interspecific plant trait variation [7][8][9][10][11][12] . But understanding the community-and ecosystem-wide consequences of intraspecific variation in plant trait expression remains rudimentary 7,13,14 ; especially how soil bacterial communities and their functioning might respond to variation in plant traits 15 .
We report here on an experiment aimed at understanding how intraspecific variation in the nature and concentration of plant volatile chemicals that ward off insect herbivory affects soil microbial communities and their decomposition of plant litter containing volatile chemicals. The study is motivated by previous evidence that intraspecific variation in plant chemical defense composition (aka plant chemotype) can influence the trophic structure of food-webs [16][17][18][19] . Our previous work, in particular, demonstrated that plant chemotype can determine both arthropod and soil microbial communities 20 . This study complements that work by resolving how plant chemotype can alter soil microbial community composition. We test whether soil microbial community composition is shaped most by the original plant chemotype with which the microbes are naturally associated or by differences in litter inputs from alternative chemotypes, using chemotypes of the perennial herb Tansy (Tanacetum vulgare) as our system of study.

Our research combined the use of next-generation DNA sequencing (16S rRNA gene amplicon sequencing) to assess soil microbial community composition with a litter decomposition experiment to address the following questions: (1) Does a soil microbial community associated with a particular plant chemotype have a different ability to decompose litter from its own chemotype vs litter from another chemotype? (2) Does soil microbial diversity change when subjected to its own chemotype's litter vs. another chemotype's litter?
Methods
Study system. Tansy (T. vulgare) is a perennial plant originating in Europe and Asia 21 . Large populations can be found in disturbed, well-drained, nutrient poor soils 22 , where it often forms isolated patches. It also frequently occurs alongside river valleys, railway tracks and on abandoned lands. Tansy genotypes can be classified according to their volatile chemical content (chemotypes): most frequent are β-thujon, camphor, and borneol 23 . Breeding experiments with these chemotypes using molecular markers have confirmed that the volatile chemical content of a particular Tansy plant is determined genetically 21,22 .
Tansy chemotypes determine their associated arthropod communities that include three specialised aphid species (Macrosiphoniella tanacetaria (Kaltenbach), Metopeurum fuscoviride Stroyan and Uroleucon tanaceti L.) and many predators specialised on Tansy aphids, the most important being the 7-spotted ladybird beetle (Coccinella septempunctata), the generalist nursery web spider (Pisaura mirabilis) and the minute pirate bug (Orius spp.) 24 . Together, these properties of Tansy make an ideal model system for studying effects of intraspecific plant variation on ecosystem functions.
The experiment reported here used individuals drawn from Tansy populations that belong to different genetic types with different chemical defense profiles (chemotypes) 20 . These chemotypes were identified in previous work which surveyed and evaluated the chemical composition and genotypes of 100 tansy plants from populations along a 120 km transect in Transylvania, Central Europe 20 . That previous survey revealed that chemotypes were composed of different combinations of four key volatile chemicals: (1) Camphor, (2) Borneol, (3) Carvone, (4) β-Thujon (see 20 for details). We used soil and litter associated with hybrid chemotypes that consisted of a mixture of 40% or more of a dominant volatile chemical and 20% or less of the other volatiles. For example, a hybrid with 40% or more Camphor comprised the Camphor treatment, a hybrid with 40% β-Thujon comprised the Thujon treatment, etc. (Fig. 1). When possible, we used litter and soils from multiple individual plants of each chemotype taken from points along the 120 km transect. We obtained soils and litter associated with Camphor, Borneol and Thujon hybrids (n = 3 plants for each hybrid chemotype), and Carvone hybrid (n = 1 plant).
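The 40%/20% labelling rule above can be written as a small classifier. This is a sketch only; the function name, input format, and threshold parameters are illustrative, not from the study:

```python
def classify_chemotype(profile, dominant_min=0.40, minor_max=0.20):
    """Assign a hybrid chemotype label from a volatile-composition profile.

    `profile` maps volatile name -> fraction of total volatiles. Following
    the rule described in the text, a hybrid is labelled by the volatile
    making up >= 40% of the blend when every other volatile is <= 20%.
    Returns the dominant volatile's name, or None if no volatile qualifies.
    """
    dominant = max(profile, key=profile.get)
    others_ok = all(v <= minor_max for k, v in profile.items() if k != dominant)
    if profile[dominant] >= dominant_min and others_ok:
        return dominant
    return None

# A plant with 45% Camphor and smaller shares of the rest -> Camphor treatment
sample = {"Camphor": 0.45, "Borneol": 0.18, "Carvone": 0.12, "Thujon": 0.10}
classify_chemotype(sample)  # -> "Camphor"
```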
We collected soils associated with the individual plants by extracting soil from a 50 cm diameter area around each plant to a 15 cm depth. This soil horizon contained 3.26% humus, a mobile potassium content of 408 ppm and nitrogen which varied between 0.143% and 0.101%. The base saturation of the upper layer was 77.85%, and the pH (H2O) 6.38 20 . We collected aboveground biomass of each individual plant by clipping them at the soil surface.
Litter decomposition experiment. The litter decomposition experiment evaluated how soil and litter from each chemotype shaped the soil microbial community. We further evaluated whether transplanting litter from a chemotype to soils associated with another chemotype influenced the microbial community. We deployed a factorial design, crossing soil and litter sourced from each of the four hybrid chemotypes plus the control (Fig. 1).
We created treatment soils ( Fig. 1) by bulking and homogenizing soil from the replicate hybrids plants for a chemotype treatment. We also collected and homogenized leaf material from each of the treatment chemotypes for the decomposition assay. We further created a control by collecting and homogenizing soil and plant material from field locations covered in monocots without tansy plants. Thirty kilograms of soil from each hybrid chemotype or control were filled into five individual 40 × 40 × 30 cm boxes per chemotype (Fig. 1).
We put a homogenized mixture of 33 g of litter and 66 g of soil from each chemotype or control into individual standard 0.2 mm mesh litterbags 25 . The soil added to each litter bag came from the same chemotype as the litter. We created 5 replicate litterbags for each chemotype or control for each soil treatment (n = 125 litter bags, with n = 25 litter bags for each of the 5 soil treatments or control). At the end of November 2020, we buried the five replicate litter bags for each litter-soil treatment combination 10 cm below the soil surface within each box (Fig. 1).
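The fully crossed layout (5 soil sources × 5 litter sources × 5 replicates = 125 litterbags) can be enumerated directly; a sketch with illustrative field names:

```python
from itertools import product

# Four hybrid chemotype treatments plus a control, crossed for both the
# soil source and the litter source, with 5 replicate litterbags per
# soil x litter combination (layout only, not analysis code).
treatments = ["Camphor", "Borneol", "Carvone", "Thujon", "Control"]
replicates = 5

litterbags = [
    {"soil": soil, "litter": litter, "rep": rep}
    for soil, litter in product(treatments, treatments)
    for rep in range(1, replicates + 1)
]

len(litterbags)                                        # 125 bags in total
sum(1 for b in litterbags if b["soil"] == "Camphor")   # 25 bags per soil box
```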
All boxes were kept outdoors under natural conditions until the end of May 2021. Litterbags were then collected from each chemotype box and samples were placed into sterile tubes and stored at − 70 °C until subject to DNA analyses.
Total genomic DNA was extracted with the DNeasy PowerSoil Pro Kit (Qiagen) from the mixture of litter and soil remaining in each buried litter bag in May 2021. Then, the V3-V4 region of the 16S rRNA gene was amplified with Bacteria-specific PCR using the following primers: B341F (5′-CCT ACG GGN GGC WGC AG-3′ 26
Data analyses.
Microbial community data were rarefied to 19,000 reads per sample before we created an average distance matrix for analysis using 100 random draws from each of our sequenced communities (n = 25). First, the 13 dominant bacterial phyla and genera were compared between plant chemotypes and the control; here, only proportional differences in bacterial distributions between samples are presented, based on the microbial sequence data. Then, we produced diversity profiles of the entire set of bacteria genera (i.e., all OTUs) to examine differences in community diversity under different soil and litter combinations. Next, we used non-metric multidimensional scaling (NMDS) to compare the composition of bacterial phyla and genera across tansy chemotypes. Groupings were based on relative proportions of different chemical volatiles in each Tansy plant. Finally, we tested for a significant effect of soil type and litter type on the bacteria community using multivariate analysis of variance (vegan::adonis2). Analyses were run in R Studio v0.97.314 using R v3.0.1 30 (R Core Team 2013).
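The rarefaction-with-averaging step can be sketched as follows. This is a toy illustration with a hypothetical function name; the study used dedicated community-ecology tooling (vegan in R), not hand-rolled code:

```python
import random

def rarefy(counts, depth=19000, draws=100, seed=0):
    """Average composition of `draws` random subsamples of `depth` reads.

    `counts` maps OTU id -> read count for one sample. Each draw subsamples
    `depth` reads without replacement; per-OTU counts are then averaged over
    the draws, mirroring the 19,000-read rarefaction averaged over 100
    random draws described in the text.
    """
    rng = random.Random(seed)
    pool = [otu for otu, n in counts.items() for _ in range(n)]
    totals = dict.fromkeys(counts, 0)
    for _ in range(draws):
        for otu in rng.sample(pool, depth):
            totals[otu] += 1
    return {otu: total / draws for otu, total in totals.items()}

# Toy sample with 100 reads, rarefied to 50 reads per draw
avg = rarefy({"otu_a": 60, "otu_b": 40}, depth=50, draws=30, seed=1)
```

Each draw sums to exactly `depth` reads, so the averaged profile is directly comparable across samples of different sequencing depth.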
Permit statement.
Experimental research and field studies on tansy, including the collection of plant material, complied with institutional, national, and international guidelines and legislation. Permissions were not required for Tanacetum vulgare collections because tansy is a wild weed with moderate expansion in Transylvania, included between plants that has to be controlled with plant protection methods. Voucher specimens were not deposited as only leaves and stems were collected for analyses, entire specimens were not collected.
Results
Bacterial communities in the litter bags from different chemotypes differed in the composition of phyla (Fig. 2A) and genera (Fig. 2B) across our treatments, with changes in the abundance of phyla less pronounced than those in genera.
Diversity profiles of bacteria genera revealed that soil and litter combinations had similar species richness (Fig. 3). Combinations involving Tansy plants, but not the controls, had fewer rare species, as indicated by lower diversity values as the scale parameter increased (Fig. 3). Borneol soil had the largest effect on community evenness (i.e., high scale parameter), with the most diversity retained by litter from Borneol plants when it was decomposed in the soil from beneath Borneol plants. In fact, Borneol litter decomposed in soil from beneath Borneol plants was the only combination where diversity was unambiguously different (higher, in this case) than in other treatments (Fig. 3). Both the source litter and the soil in which it was buried had a significant influence on the composition of the bacterial community. Overall, the soil in which the litter was decomposed had a stronger effect on the composition of the bacterial community than the type of litter being decomposed. This was true for both bacterial phyla and genera (Fig. 4A,B).

Figure 1. Tansy field (A), Tansy plant (B), U. tanaceti aphids on tansy leaves (C) (Photos by Adalbert Balog); litter decomposition and soil bacterial community assay design (D). Numbers represent sample order, used subsequently in labelling samples for genetic analyses. The first four treatments represent the main chemotype soils (I, II, III, IV); control soil with no tansy (V) was also used in boxes. Within each box, litter additions from the different chemotypes (I-IV) or non-Tansy plants (V) were placed. Marks S1…S25 represent a particular chemotype sample in a particular chemotype soil (e.g., III.S3 means Carvone litter in Camphor soil, versus III.S8, Carvone litter in Borneol soil).
Discussion
Our experiment revealed significant variation in the bacteria communities decomposing litter from different Tansy chemotypes in a common garden experiment. The variation was driven by both soil and litter sources, indicating that community assembly was significantly affected by both processes. Yet, the soil source played a dominant role, explaining twice the variation in community composition as did litter type.

Relative changes in the microbial community across chemotypes could indicate differences in function, but our data do not support this interpretation. For example, we have already demonstrated that plant and soil nitrogen increase from Thujone to Borneol to Camphor plots in the field 20 . Here, we found microorganisms, such as Pseudomonas, Massilia, and Sphingomonas, that have been described as important genera for litter degradation and mineralization. Their role in early litter decomposition has been demonstrated 31 . Yet, their relative abundances, individually or in total, ranked sporadically across soil and litter combinations, suggesting a limited link between relative abundance and functional outcomes in the field.

So, the microbial community decomposing litter varied by both the soil source and the litter type across different chemotypes of the same plant species. This result suggests an important role of plant chemical defense in microbial community composition. However, patterns of diversity and potential links to microbial function were inconsistent. This inconsistency occurred because the ranking of different microbial taxa across chemotypes did not correspond to our understanding of function, and neither soil nor litter source blocked together when we sorted treatments by the relative abundance of individual or functionally similar taxa.
The significance of the changes in microbial community composition will therefore likely require an analysis of functional outcomes (i.e., nutrient cycling, enzyme activity) to be understood.
|
2023-04-15T06:17:46.929Z
|
2023-04-13T00:00:00.000
|
{
"year": 2023,
"sha1": "6c44041f51559d97458e4847129a9195d9106254",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7892c9d67d0dd9c643dd2b5fe3e065a2cd40dc45",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
244536446
|
pes2o/s2orc
|
v3-fos-license
|
Modified Application of Cardiac Rehabilitation in Older Adults (MACRO) Trial: Protocol changes in a pragmatic multi-site randomized controlled trial in response to the COVID-19 pandemic
Background Older adults are at higher risk for cardiovascular disease and functional decline, often leading to deterioration and dependency. Cardiac rehabilitation (CR) provides opportunity to improve clinical and functional recovery, yet participation in CR decreases with age. Modified Application of CR in Older Adults (MACRO) is a National Institute on Aging (NIA)-funded pragmatic trial that responds to this gap by aiming to increase enrollment of older adults into CR and improving functional outcomes. This article describes the methodology and novel features of the MACRO trial. Methods Randomized, controlled trial of a coaching intervention (MACRO-I) vs. usual care for older adults (age ≥ 70 years) eligible for CR after an incident cardiac hospitalization. MACRO-I incorporates innovations including holistic risk assessments, flexible CR format (i.e., helping patients to select a CR design that aligns with their personal risks and preferences), motivational prompts, nutritional emphasis, facilitated deprescription, enhanced education, and home visits. Key modifications were necessitated by the COVID-19 pandemic, including switching from a performance-based primary endpoint (Short Physical Performance Battery) to a patient-reported measure (Activity Measure for Post-Acute Care Computerized Adaptive Testing). Changes prompted by COVID-19 maintain the original intent of the trial and provide key methodologic advantages. Conclusions MACRO is exploring a novel individualized coaching intervention to better enable older patients to participate in CR. Due to COVID-19 many aspects of the MACRO protocol required modification, but the primary objective of the trial is maintained and the updated protocol will more effectively achieve the original goals of the study.
Introduction
The US population aged 65 years and over will almost double between 2020 and 2050 [1]. As adults survive into old age, the biology and physiology associated with aging predisposes them to cardiovascular disease (CVD) in a context of high clinical complexity [2]. CVD prevalence increases from ~40% in men and women 40-59 years, to 70-75% in those 60-79, and to 79-86% in those aged ≥80 years [3]. Age-related CVD complexity stems from pervasiveness of comorbid diagnoses, frailty, cognitive decline, sensory deficits, incontinence, and associated sequelae of polypharmacy, falls, diminished adherence, poor quality of life, and suboptimal procedural outcomes [4]. While cardiac rehabilitation (CR) has been proven to benefit older adults who participate [5], only 24.4% of CR-eligible Medicare fee-for-services beneficiaries attend even a single session, and just 26.9% of those complete the full program [6]. Barriers to participation include the encumbering effects of geriatric conditions as well as logistical obstacles and lack of motivation amidst mounting health conditions and lifestyle limitations.
Functional capacity also declines with age and disease, with additional risks of rehospitalizations, disability, and mortality among CVD patients. Exercise training and structured physical activity through CR are especially beneficial for older adults, as many tend to become sedentary long before the incident CVD event occurs, with deconditioning that often then accelerates during incident hospitalizations. Overcoming exercise intolerance and fears associated with restarting activity are critical elements of successful recovery [7]. Contemporary CR also includes components of risk factor reduction, nutrition, lifestyle modifications, medication adherence, education, and stress relief to further advance clinical stability and well-being, which in turn can provide disproportionate benefit amidst the high mortality and morbidity risks of old age [8].
Given the compelling evidence for the value of CR as part of current-day CVD therapeutics, it has been elevated by the Centers for Disease Control and Prevention (CDC) as a healthcare priority. The CDC's Million Hearts initiative to improve cardiovascular health targets enrollment of 70% of all eligible patients into CR by 2022 [9]. Nonetheless, most older adults do not attend, and it remains unclear if CR implementation can be improved to enhance the incentive for and process of CR for the untreated majority.
Modified Application of Cardiac Rehabilitation in Older Adults (MACRO; NCT03922529) is a pragmatic [10], randomized controlled trial (RCT) funded by the National Institute on Aging (NIA) that investigates the effectiveness of an intervention, MACRO-I, designed to increase participation of older adults in CR as a key means to reduce disability by improving their physical function. The overarching intent of the MACRO-I is to enhance the health and wellbeing of older adults by overcoming barriers to CR participation and by broadening the scope of CR to address distinctive (and relevant) geriatric challenges. In this paper, we describe the original MACRO clinical trial designed pre-COVID, and modifications for the trial and MACRO-I delivery necessitated by the COVID-19 pandemic.
Design
Adults aged ≥70 years are randomized to a MACRO intervention (MACRO-I) versus usual care. MACRO-I is a person-centered coaching intervention to facilitate CR. Coaching incorporates innovative techniques to better align CR with the priorities and capacities of eligible older adults. These innovations (explained below) include holistic risk assessments, flexible conceptualization of the CR format (i.e., choosing a CR design that responds to each patient's risks and preferences), motivational prompts, nutritional emphasis, facilitated deprescription, enhanced education, and integrated home visits.
Specific aims
Aim 1 of MACRO is to establish effectiveness, safety and acceptability of MACRO-I versus usual care, with the hypothesis that patients randomized to MACRO-I would achieve greater improvements in function. The primary outcome is the change in the Short Physical Performance Battery (SPPB) [11] from baseline to 3 months, with lower scores indicating more severe functional impairment. Complementary performance and self-reported functional measures are also assessed to more fully characterize physical and cognitive function.
Aim 2 of MACRO is to demonstrate the sustainability of the functional benefits of the MACRO-I at 6 and 12 months. Aim 3 is exploratory and aims to delineate characteristics of patients who benefit the most from MACRO-I. It is hypothesized that patients who are relatively more burdened by frailty, multimorbidity and other vulnerabilities of age may benefit more than patients who are relatively robust and/or less clinically complex.
Recruitment procedures
Eligibility for MACRO includes hospitalization for coronary heart disease (CHD), acute myocardial infarction (AMI), coronary artery revascularization (PCI or CABG), heart failure with reduced or preserved ejection fraction (HFrEF or HFpEF), valve repair or replacement (surgical or transcatheter), or heart transplant. The enrollment window to MACRO is up to 10 days after the incident event, with the intent to initiate the MACRO-I promptly (for those in the intervention arm) to mitigate hospital-related functional decline and disability [12].
Potential participants are pre-screened daily through inpatient records to assess for eligible diagnoses and absence of contraindications. Candidate participants are approached while inpatient, at a follow-up visit, by phone, and/or by letter after discharge. Study personnel describe the study to the patient. The Short Blessed test [13] is used to screen for dementia. Patients with severe cognitive impairment (i.e., diagnosis of dementia in the medical record or Short Blessed ≥13), unstable medical conditions, life expectancy less than 12 months, residing in a long-term care living situation prior to the time of hospitalization (with no plans to return to independent living after the hospitalization), or inability or unwillingness to consent are excluded. The consent process varies according to each hospital's governance. At the VA Pittsburgh Healthcare System, consent is obtained either in person on hard copy forms or over the phone. At Barnes-Jewish, Missouri Baptist, and Shadyside Hospitals, consent is obtained either in person or by mail on hard copy forms, or by electronic consent.
MACRO is approved by the Institutional Review Boards at the University of Pittsburgh, Veterans Affairs Pittsburgh Medical Center, and the Washington University School of Medicine, and all participants provide written informed consent.
Participants, recruitment, randomization
Based on the SPPB, the original MACRO study enrollment goal is 480 participants, which yields 80% statistical power to detect a between-arm difference of 0.77 points in SPPB with a two-tailed α = 0.05, assuming a retention rate of 80% and a standard deviation of 2.7 for SPPB change [14-18]. This corresponds to a moderate effect size. Randomization is stratified by site and baseline SPPB score (i.e., 0-6, 7-9 or 10-12) to ensure a balance between the treatment groups with respect to the baseline value of the primary outcome.
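The enrollment target can be checked against a standard normal-approximation power formula; the sketch below reproduces the arithmetic (treating the 80% retention rate as a simple inflation factor is our assumption about how the figure of 480 was derived):

```python
from math import ceil
from statistics import NormalDist

def total_enrollment(delta, sd, power=0.80, alpha=0.05, retention=0.80):
    """Two-arm sample size (normal approximation for a test on mean change),
    inflated for the expected retention rate."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed alpha
    z_beta = NormalDist().inv_cdf(power)
    n_per_arm = 2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2
    return ceil(2 * n_per_arm / retention)

# SPPB change: delta = 0.77 points, SD = 2.7 -> approximately the stated 480
print(total_enrollment(0.77, 2.7))
```

The same formula with the AM-PAC-CAT inputs adopted after the COVID revision (delta = 2.60, SD = 7.97) lands near the revised target of 374.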
Enrollees are randomized to MACRO-I or usual care in a 1:1 ratio using a blocked scheme with random block size. All patients are eligible for CR as determined by their physicians, but CR is facilitated by coaching only in the MACRO-I arm. MACRO-I coaches supplement care to increase the accessibility, sustainability and patient-centeredness of CR such that it may become more available and pragmatic for eligible older patients.
MACRO intervention
MACRO-I coaches first meet study patients during the incident hospitalization when deemed stable by their medical teams. MACRO-I coaches follow these participants daily while they are still hospitalized. Coaches ensure that inpatient CR (phase I) is ordered and also provide additional supportive coaching.
MACRO-I innovations include novel holistic risk assessments that link functional and psychosocial risks to risks from CVD. This perspective is used to guide recommendations for different formats of outpatient CR, i.e., site-based, remote-based (aka home-based), or hybrid CR (site-based that transitions to remote-based). For holistic risk assessment, details of the patient's current illness, past medical history, physical functional assessment at baseline (e.g., gait speed [19], SPPB score, grip strength) [20], and psychosocial factors assessed at baseline (e.g., Readiness to Change [21], Patient Health Questionnaire [PHQ]-9 [22] score) are integrated with one another. Table 1 shows the elements incorporated into the holistic risk assessment. This risk assessment is used to enrich the rationale for CR, and to inform the coach's patient-specific recommendations for site-based, remote-based or hybrid-based approaches to CR. High risk in any category increases consideration for site-based or hybrid CR, as it may provide greater supervision and support. However, if high-risk patients still prefer remote-based CR, risk assessment provides opportunity to develop pragmatic approaches that specifically mitigate CVD and non-CVD dangers.
MACRO-I innovations also include flexible conceptualization of CR. Whereas most hospital systems promote their own CR programs, there are significant differences between programs (e.g., site-based versus remote-based, but also intensive CR and other variations), and some programs are better suited to some patients than others. MACRO-I coaches conceptualize CR more broadly and suggest formats that best match each patient's risk profile and preferences. Site-, remote-, and hybrid-based CR formats are all described and potentially facilitated. The MACRO-I coach explains how different formats of CR may better align with each patient's health risks and preferences, including issues of logistics, costs, and the home environment. A patient prone to falls may, for example, benefit most by starting with site-based CR, with the plan to prioritize learning chair-based exercises such that (s)he can then transition safely and effectively to remote-based CR. Regardless of which type of CR program the patient selects, the MACRO-I coach serves as a common denominator to supplement each patient's experience with feedback and reinforcement to best ensure that (s)he derives a person-centered experience from CR that helps him/her recover.

MACRO-I enhances motivation for CR by applying each patient's goals of care as an important motivational stimulus. To clarify these goals of care, the MACRO-I coach uses a standardized goals assessment process wherein (s)he displays a series of images that convey a broad range of life-goal choices. Once each patient's goals are identified, the MACRO-I coach applies them with the premise that CR is the principal means for goal attainment. This goals assessment technique was adapted from Enhanced Medical Rehabilitation (E-MR) developed by Lenze et al. [23]. Coaches also employ novel approaches to nutrition, education, and deprescription of sedating medications.
MACRO-I coaches ensure that referral to dietary assessment and education is achieved irrespective of the CR format and whether or not a nutritionist is part of the patient's CR program. In contrast to standard precepts of dietary restrictions in most cardiac programs, MACRO-I coaches encourage sufficient caloric intake for patients prone to sarcopenia, frailty and malnutrition.
To enhance education, MACRO-I transition resources were developed and are utilized by the coaches both during the inpatient and the immediate post-hospitalization phases of care. MACRO-I transition educational booklets are disease-specific (e.g., MI, PCI, CABG, HF) and concise to provide basic information about the event that occurred, essentials of therapeutics, and the utility of CR as part of recovery. The transition documents highlight the centrality of CR in recovery, with language, font, and pictures designed for an older population.
MACRO-I includes deprescription as a means to augment functional recovery [24,25]. MACRO-I coaches identify benzodiazepines and anticholinergic/antihistamine medications, as these drug classes predispose to fatigue, falls, poor functional recovery, and other risks in an older population and are considered potentially inappropriate [26]. Medical regimens in these patients are then reviewed by a MACRO-I geriatric psychiatrist and/or a pharmacist with deprescribing expertise. If the participant and the PCP both agree, deprescription guidance is provided by the MACRO experts (i.e., benzodiazepines and anticholinergics are usually tapered slowly, decreased ~25% every two weeks). MACRO-I coaches also provide follow-up with the patient to identify potential adverse effects of deprescribing. Whereas co-I Dr. Lenze brings distinctive expertise to refine deprescription in this research protocol [27] based on his prior research efforts, ultimately MACRO aims to refine the role of a pharmacist as a more practical and generalizable deprescription staffing model.
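The taper arithmetic described above ("decreased ~25% every two weeks") can be sketched as a simple schedule generator; the starting dose, minimum dose, and stopping rule below are illustrative assumptions, not clinical guidance from the protocol:

```python
def taper_schedule(start_dose_mg, step_fraction=0.25, interval_weeks=2, floor_mg=0.5):
    """Illustrative taper: cut the current dose by ~25% every two weeks,
    then discontinue once it falls below a practical minimum (assumed)."""
    schedule, week, dose = [], 0, start_dose_mg
    while dose >= floor_mg:
        schedule.append((week, round(dose, 2)))
        dose *= 1 - step_fraction
        week += interval_weeks
    schedule.append((week, 0.0))  # discontinuation point
    return schedule

# e.g., a hypothetical 2 mg starting dose
for week, dose in taper_schedule(2.0):
    print(f"week {week:2d}: {dose} mg")
```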
In MACRO-I, the safety and organization of the home environment are regarded as important aspects of recovery. The MACRO-I coach provides two home visits, at one and four weeks after enrollment, with a primary objective to ensure that exercise training can be completed safely and effectively. At home visit 1, the MACRO-I coach emphasizes safety of the home environment for function and exercise and goals of care. The criteria of assessment and recommendations were adapted for the MACRO-I from Chiu et al.'s Safer-Home [28] checklist. At home visit 2, the MACRO-I coach confirms the success of changes made to improve the environment or takes steps to mitigate residual barriers. Home assessments occur irrespective of enrollment into or type of CR.
MACRO-I coaches follow patients by telephone after discharge with weekly calls for 3 months and monthly calls for the subsequent 9 months. Phone calls last 30-60 min and, in addition to continuing with the innovative elements (physical activity, motivation, nutrition, education, and deprescription), the coaches also specifically review the patient's overall health course and symptoms since the prior call and their adherence to medications, activity, dietary recommendations, and CR participation, and address lapses, with the goal to achieve solutions. Coaches follow the MACRO-I patients whether or not they enroll in CR.
Usual care
Patients randomized to usual care receive standard care for the hospital in which they are treated. Patients are eligible to receive all treatments, including CR, as prescribed by their medical team, but they do not receive the MACRO-I. While CR is an indicated therapy for CVD, implementation is inconsistent.
Assessments
Assessments for MACRO-I and usual care study patients are completed at baseline, 3, 6 and 12 months (Fig. 1). The SPPB is a composite of balance, gait speed, and strength that correlates directly with capacity for independent living in older adults. Poor function measured with the SPPB is a harbinger of eroding independence and institutionalization, as well as poor quality of life, increased hospitalizations, and increased mortality in older populations [11,29]. Using the SPPB as the primary outcome of interest contrasts with most prior CR trials, which tend to focus principally on cardiorespiratory fitness measured as peak oxygen uptake (VO2) and/or 6-min walk distance [30]. Notably, the SPPB was used in the recently published REHAB-HF trial [31].
Assessments complementary to SPPB include performance metrics (grip strength [32] and accelerometry [33]) and patient-reported evaluations (Duke Activity Status Index [DASI] [34] and Activity Measure for Post-Acute Care with Computerized Adaptive Testing [AM-PAC-CAT]) [35]. The DASI questionnaire is used to quantify each participant's self-report of function immediately prior to their incident cardiac event or hospitalization and to also provide an estimate of cardiorespiratory fitness. The AM-PAC-CAT is used to quantify basic patient-reported daily activities using a scoring system. The initial assessment of the AM-PAC-CAT is collected one week after discharge to quantify post-event daily activity.
Other MACRO assessments include frailty (Survey of Health, Ageing and Retirement in Europe-Frailty Index [SHARE-FI]) [36]; cognition (Trails A&B [37] and Brief Test of Adult Cognition by Telephone [BTACT]) [38]; mood (Patient Health Questionnaire [PHQ-9]) [22]; quality of life (Veterans RAND 12 Item Health Survey [VR-12]) [39]; cardiac self-efficacy [40]; literacy (Rapid Estimate of Adult Literacy in Medicine [REALM]) [41]; and Readiness for change [21]. Other evaluations include diet (Rate Your Plate [42] and 3-day food diary), as well as comprehensive assessments of medications, comorbidities, hospitalizations and CR participation. Stringent methods for fidelity of assessments and quality control are maintained. We plan a multivariable analysis that takes into account pre-randomization covariates, multiple post-randomization time points, potential for intervention effect to vary at different follow-up time points, multiple follow-up assessments over time from each participant and the resulting stochastic nonindependence of observations. In addition, we plan to employ multiple imputation to mitigate any biases potentially arising from missing data.
Data from all enrolling sites are stored in a common REDCap database provided through University of Pittsburgh. An independent Data and Safety Monitoring Board approved the MACRO study protocol and meets regularly with the study investigators to ensure safety and appropriateness of the study procedures, and to monitor the progress of the study.
MACRO during COVID
COVID profoundly disrupted the original MACRO protocol because face-to-face engagements were no longer feasible. Thus, methods had to change to be practical and safe in a pandemic environment. Per Data and Safety Monitoring Board (DSMB) decision, all 43 study patients who enrolled in the trial before March 2020 were released. The MACRO investigators were challenged to modify process without undercutting the essence of the original aims and the embedded innovation. A related challenge was the need to develop remote or virtual alternative approaches that were safe, feasible and effective for an older adult population that is inherently prone to geriatric syndromes (e.g., mild cognitive impairments, sensory limitations, movement disorders and multimorbidity) and related technology limitations that may preclude computer- and app-based strategies. Therefore, the decision was made to revise the MACRO protocol and re-start the trial using assessments that could be achieved entirely by telephone, since all MACRO candidates had access and capacity to use a telephone. Furthermore, a related decision was made that all telephone-based evaluations be limited to one hour in total, as brevity was deemed essential for participants who were also prone to fatigue and inattention. Given these major constraints in format and time, considerable revision and paring of the pre-COVID protocol was essential. All modifications to the MACRO protocol were reviewed and approved by the DSMB.
Overall, the updated MACRO protocol responds to the COVID pandemic, but still remains an RCT of older adults with CVD, aiming to increase physical function by facilitating the use of CR. While the inclusion and exclusion criteria are the same, the window of eligibility expanded to 24 days initially, and then to 90 days to increase flexibility and time, given the more limited access to patients on the hospital wards during COVID. Nonetheless, it is a priority to enroll patients into MACRO as close as possible to their incident events to activate MACRO-I expeditiously for those randomized to the intervention arm.
While the aims of the updated MACRO protocol did not change, the use of the SPPB performance measure as a primary endpoint was no longer feasible, as it was neither safe nor reliable to administer remotely. In contrast, the AM-PAC-CAT Basic Mobility Scale and Daily Activity domains in the original assessment battery could still be assessed at baseline, 3, 6, and 12 months. Change in AM-PAC-CAT Basic Mobility Scale from baseline to 3 months was selected to replace SPPB as the primary outcome.
AM-PAC-CAT has been used mostly by physical and occupational therapists when assessing patients with medical, orthopedic, and neurologic impairments, and it has not previously been applied to CR interventions. The Basic Mobility and the Daily Activity Domains use specific subsets of AM-PAC-CAT criteria to characterize categories of function. The Basic Mobility Scale quantifies basic movement and physical functioning activities, such as bending, walking, carrying, and climbing stairs. The Daily Activity domain quantifies difficulty of daily activities (reaching, dressing, turning locks, opening jars). The computer adaptive technology (CAT) selects items that correspond to the participant's previous responses, thereby reducing the number of total questions while increasing the test's sensitivity and validity [43]. Whereas the AM-PAC Basic Mobility Domain draws on 101 potential criteria to assess capacity, with CAT, the selection narrows to about a dozen, and the assessment can usually be completed in less than 2 min. Although the CAT relies on a computer interface, the computer is used only by the investigator administering the test, with no technological demands on the participant.
AM-PAC-CAT is reliable for a wide range of patient capacities and provides high capacity to discriminate change [35]. Assuming a standard deviation of 7.97 for the baseline to follow-up change in AM-PAC-CAT Basic Mobility Scale [35] and a minimally clinically important difference based on minimum detectable change of 2.60 points [35], a sample size of 374 was calculated as adequate to retain the same level of statistical power as the original protocol. Moreover, to improve flexibility when the would-be trajectory of the pandemic was unknown, randomization in the updated protocol is now stratified only by the enrolling site.
Despite the novel attributes of AM-PAC-CAT, a patient-reported index still provides less reliability than a performance measure [44]. Therefore, in the updated protocol, accelerometry is also prioritized as a complementary performance measure (secondary endpoint). Accelerometers are watch-like devices that record the frequency and intensity of movements throughout the day as an objective measure of physical activity patterns. Prior studies have shown that participants with cognitive and physical challenges can use them reliably. Accelerometers are mailed to participants and returned via mail after 7 full days of wear time. A novel accelerometry index of gait acceleration has been shown to correlate with the SPPB [45] and is being computed from the raw accelerometer data collected at 80 Hz using an ActiGraph Link device (GT9X, ActiGraph, LLC, Pensacola, FL) on the non-dominant wrist. Cadence (steps-per-second) will also be extracted from the free-living raw accelerometry signals and evaluated as another objective indicator of physical performance [46].
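As a toy illustration of how cadence can be extracted from a raw acceleration signal (the threshold-crossing detector and the synthetic 80 Hz trace below are assumptions for illustration, not the study's actual processing pipeline):

```python
import math

FS = 80  # sampling rate in Hz, matching the ActiGraph configuration

def cadence(signal, fs=FS, threshold=1.1, refractory_s=0.25):
    """Estimate cadence (steps per second) by counting upward threshold
    crossings of the acceleration magnitude, with a refractory period."""
    steps, last = 0, -refractory_s * fs
    for i in range(1, len(signal)):
        if signal[i - 1] < threshold <= signal[i] and i - last >= refractory_s * fs:
            steps += 1
            last = i
    return steps / (len(signal) / fs)

# Synthetic 10 s walk at 1.8 steps/s: 1 g baseline plus a gait oscillation
walk = [1.0 + 0.3 * math.sin(2 * math.pi * 1.8 * i / FS) for i in range(10 * FS)]
print(round(cadence(walk), 2))  # → 1.8
```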
In addition to AM-PAC-CAT, the updated protocol includes as many components of the original protocol as possible within a one-hour constraint for composite evaluations at baseline, 3, 6 and 12 months (Table 2). The DASI, VR-12, PHQ-9, Readiness for change, and cardiac self-efficacy assessment questionnaires are all amenable to remote administration and are also part of the updated protocol. Similarly, the Short Blessed cognition assessment (score ≥13) is still used remotely to screen patients for severe cognitive impairment, but the additional assessments of cognitive function used in the pre-COVID MACRO protocol are not feasible with respect to remote administration (Trails A&B) and length (BTACT), and are no longer being used. Likewise, the pre-COVID assessment of frailty utilizing the SHARE-FI tool depended on grip strength assessments that are now only optional (i.e., deferred until deemed practical with respect to COVID safety concerns); therefore, the Morley Frail scale has replaced the SHARE-FI. REALM literacy assessments also require face-to-face interaction and are now optional.
Nutrition assessments using the Rate Your Plate questionnaire and 3-day food diaries proved to be too cumbersome and impractical to administer remotely. The Rapid Eating Assessment for Participants, Shortened Version (REAP-S) [47] has been added as a shorter and more practical tool in the updated protocol.
Preserving the MACRO-I coaching innovations during COVID
While face-to-face interactions with inpatients became less certain amidst fluctuations in prevailing COVID prevalence and risk, the MACRO-I coaches now meet with patients in the hospital when feasible and by telephone when in-person contact is not feasible. The key principles of innovation that enrich MACRO-I coaching were adapted for virtual administration.
1. Holistic risk assessment. Risk assessments are still completed by MACRO-I coaches, but functional risk stratification based on the SPPB, gait speed, and grip strength is no longer feasible. Therefore, the revised MACRO protocol integrates AM-PAC-CAT as the primary index for functional risk determination (i.e., AM-PAC-CAT <34 for high risk, 34-52 for moderate risk, and >52 for low risk). Most other elements used to categorize medical and psychosocial risks remain accessible from the medical record and phone-based assessments (i.e., DASI, Readiness for Change, PHQ, Short Blessed). Only literacy had to be removed as a risk criterion, as the REALM assessment requires face-to-face engagement.

2. Flexible conceptualization of CR. MACRO-I coaches still align CR formats with each patient's circumstances and preferences. Yet amidst COVID, concerns regarding infectivity often dominate preference for CR care format. As a pragmatic trial, MACRO incorporates the shift in CR availability dictated by the pandemic: site-based options have decreased, while remote-based options have escalated. Whichever CR program the patients utilize, the MACRO-I coaches continue to follow them regularly.

3. Goals of care as a motivational prompt. While CR is still applied as a means for goal attainment, MACRO-I coaches now use specific phrases instead of images to identify life-goal choices. Using methodology that the coaching team standardized and rehearsed, this virtual approach to goal clarification was practiced and refined.
MACRO-I coaching also includes most other innovation elements as previously described. Emphasis on transitions, nutrition, and education are all especially topical during the prevailing circumstances of a pandemic. Implementation of deprescription of benzodiazepine and anticholinergic medications continues as previously described.
A key change in the updated protocol pertains to the home visits in the MACRO-I. Home visits remain curtailed until COVID risks have been sufficiently mitigated. In lieu of home visits, a Centers for Disease Control and Prevention checklist [48] is offered to MACRO-I participants, and they are encouraged to review it for home safety. MACRO-I patients are also asked if they have ever had a home visit from clinical services and made any changes to best attain successful aging in place. These options are encouraged if they can be achieved safely amidst COVID risks.
Net effect of COVID on MACRO
The COVID pandemic placed enormous pressure on the MACRO study team to pivot midway through a trial to preserve the essence of the protocol while avoiding the hazards of the virus.
MACRO started as a 2-site study. Original recruitment extended over 30 months (11/2019 to 4/2022) and aimed to recruit 16 subjects per month and achieve 12 months of follow-up for 480 participants. Post-COVID MACRO expanded to include 2 additional sites: Shadyside Hospital in Pittsburgh, and Missouri Baptist Hospital in St Louis. The post-COVID MACRO has a 27-month recruitment window (Oct 2020-Dec 2022) and aims to recruit 14 patients per month and achieve 12 months of follow-up for 374 participants. Assessments for the MACRO intervention and Usual Care study participants in the original and post-COVID protocols are completed at baseline, 3, 6 and 12 months. A current study status diagram (Fig. 2) highlights the progress of the post-COVID MACRO.
In some respects, the restrictions imposed by COVID served to exacerbate logistical challenges already inherent to many older adults with CVD. Limited access to providers and hospitals, limited communication, and the propensity to social isolation are common barriers of older CVD patients struggling with frailty, cognitive decline, and other debilitating factors, and are only compounded by the pandemic. Therefore, the challenge to overcome barriers attributable to COVID also helped to overcome barriers related to geriatric conditions. Furthermore, COVID has catalyzed increased reliance on virtual approaches to care that are likely to continue even after current infectious risks have diminished [49]. It is anticipated that MACRO-I will serve as a model for assessment and management for future therapeutics and trials. Thus, not only does the updated protocol enable safe resumption of the trial despite COVID, but it enables the study team to better recruit candidates who were previously unable to carry out the travel and logistic demands required for the endpoint assessments. MACRO recruitment now extends to a more diverse pool of candidates who are more willing and able to participate using the abbreviated telephone-based format, including many who are relatively more frail as well as many who live in more remote locations. Likewise, recruitment has been expanded to multiple new sites.
Summary
In summary, MACRO is an innovative, multi-center NIA-funded trial that seeks to transform the concept of CR by integrating an enriched coaching model into the current paradigm of care using greater flexibility in CR models to better enable each patient to participate in a CR program that is responsive to their personal needs and preferences. After the successful launch of MACRO, COVID had an overwhelming impact that necessitated modifying the protocol and starting anew. Whereas many aspects of the trial methods had to be modified, the investigative team believes that the updated protocol is in many respects superior and will more effectively achieve the original goals of the trial.
Analysis of the Possibility of Measuring the Electron Plasma Density of the T-15MD Tokamak by Probing with Electromagnetic Waves of the Submillimeter Range
The passage of probing beams of the submillimeter range in the T-15MD tokamak chamber along various chords is considered in this paper. Microwave generators with submillimeter radiation and HCN lasers with a radiation wavelength of λ = 337 μm are widely used in interferometers in many plasma systems as sources of probing beams. The possibility of using a solid-state microwave generator and an HCN laser as probing radiation sources with wavelengths λ = 915 and 337 μm, respectively, for measuring the electron concentration in the T-15MD tokamak plasma is considered. The paper does not present full-fledged designs of T-15MD tokamak interferometers but examines the passage of the probing beams in the vacuum chamber in order to determine the influence of refraction. Schematic diagrams of interferometers with the indicated wavelengths are also presented and discussed, and the results of calculations of the relative error of the phase shift due to deviation of the probing beam are presented. A design of emitting and receiving antennas is proposed. The probe beam is introduced and received through the equatorial pipe and through the upper and lower pipes of the T-15MD tokamak. In the probing channels passing through the equatorial pipe, the beams are reflected from a mirror fixed on the inner wall of the vacuum chamber. The possibility of measuring the main plasma parameter—the average electron density—when probing with electromagnetic radiation with a wavelength of λ = 915 μm, as well as the possibility of multichannel phase measurement when probing a plasma cord with electromagnetic radiation with a wavelength of λ = 337 μm, in the T-15MD tokamak is shown.
INTRODUCTION
In studies of high-temperature plasma carried out on tokamaks, the average electron concentration across the cross section of the plasma cord is a key characteristic. Interferometric sensing of plasma by millimeter and submillimeter electromagnetic waves is a traditional method of its measurement [1,2]. Another important plasma characteristic is the spatial distribution of electron density in the cord. Multichord interferometry is used to solve this problem [3][4][5].
The interferometry method is based on probing the plasma with ordinary electromagnetic waves along different chords of the cross section. As the probing beam passes through the plasma, it acquires an additional phase shift whose magnitude depends on the density of the probed plasma [1,2,6]:

Δφ = (e²/(2ε₀m_e cω)) ∫ n_e(x) dx,

where e is the charge of the electron, m_e is the mass of the electron, ε₀ is the vacuum dielectric constant, c is the speed of light in a vacuum, ω is the angular frequency of the probing radiation, and n_e(x) is the electron density along the probing chord.
It is necessary that the probing radiation frequency ω greatly exceed the plasma frequency ω_pl(x) along its entire path:

ω ≫ ω_pl(x) = (e²n_e(x)/(ε₀m_e))^{1/2}.

By measuring the magnitude of the phase shift caused by the presence of the plasma, we can determine the chord-averaged density

⟨n_e⟩ = (1/L) ∫ n_e(x) dx = (2ε₀m_e cω/(e²L)) Δφ,

where L is the length of the probing chord. It is worth noting that the phase shift is determined at an intermediate frequency for convenience and unambiguity of measurements [1,4,6].

Presently, the T-15MD tokamak is under construction at the National Research Center Kurchatov Institute [7] (Fig. 1). This is the first large Russian facility with an elongated cross section and a divertor configuration similar to ITER, which will make it possible to carry out research aimed at supporting the ITER and DEMO projects [8,9]. The key parameters of the prospective T-15MD tokamak are as follows:
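For illustration, the relations above can be evaluated numerically; the chord length and phase shift in the sketch below are example values, not T-15MD measurements:

```python
import math

E = 1.602176634e-19      # electron charge, C
ME = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 2.99792458e8         # speed of light, m/s

def avg_density(phase_shift, wavelength, chord_length):
    """Chord-averaged n_e (m^-3) from the O-wave interferometric phase shift."""
    omega = 2 * math.pi * C / wavelength
    return 2 * EPS0 * ME * C * omega * phase_shift / (E**2 * chord_length)

def plasma_freq(n_e):
    """Angular plasma frequency omega_pl (rad/s) for density n_e (m^-3)."""
    return math.sqrt(n_e * E**2 / (EPS0 * ME))

# Example: lambda = 337 um, 1 m chord, phase shift of one fringe (2*pi)
n_avg = avg_density(2 * math.pi, 337e-6, 1.0)
omega = 2 * math.pi * C / 337e-6
print(f"<n_e> = {n_avg:.2e} m^-3, omega/omega_pl = {omega / plasma_freq(n_avg):.0f}")
```

One fringe at λ = 337 μm on a 1 m chord thus corresponds to a line-averaged density of several 10¹⁸ m⁻³, and the probing condition ω ≫ ω_pl is comfortably satisfied there.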
ARRANGEMENT OF PROBING CHORDS
Emitting and receiving conical antennas are usually used for plasma probing in the tokamak chamber. It is convenient to use identical receiving and radiating antennas. The plan is to probe the plasma along vertical as well as horizontal and inclined chords. Radiation input and its reception after passing through the plasma along the vertical channel are carried out through the upper and lower sockets, respectively; for the horizontal and inclined channels, this is done via the equatorial socket, in which case the probing rays are reflected from flat mirrors mounted on the inner wall of the vacuum chamber of the T-15MD tokamak (Fig. 2).

The design of the proposed mirrors makes it possible to make adjustments within a 7° apex cone by rotating the supporting discs shaped as solid cylinders. The design proposed includes the possibility of adjustment of the horn antennas by the bellows block (Fig. 3). Alignment is possible within a 1.5° apex cone. The angle of the conical part of the antenna has an upper limit [10] to preserve the shape of the radiation phase front. For probing at a wavelength of λ = 915 μm, the apertures of the horn antennas were chosen with a diameter of 50 mm; for λ = 337 μm, this size was 40 mm. At the same time, the diameter of the mirror reflector is about 100 mm, which significantly exceeds the diameters of the antenna apertures in both cases. It is worth noting that the antennas are located in the vacuum chamber of the tokamak, and soldered glass vacuum windows of resonant thickness installed on the bellows units of the antennas are used to separate the vacuum part from the atmospheric one. In the equatorial nozzle, the antennas are separated along the toroidal path.
It is possible to increase the number of probing channels through the considered flanges by using nozzles of smaller nominal diameter and reducing the apertures of the antennas. However, because of refraction, this approach will increase the number of channels for which the center of the probing beam is misaligned with the receiving antenna because of its decreased aperture.
When propagating in plasma with a transverse density gradient, the probing beam acquires an axial deflection, owing to which only a part of the probing radiation reaches the receiving antenna.
The following procedure is used to calculate the refraction. First, the ray trajectory in a medium with a refractive-index gradient was determined using the equation [11]

N/R = ∇⊥μ/μ,

where N is the normal vector to the radiation trajectory; R is the radius of curvature of the radiation trajectory; and μ is the plasma refractive index for an ordinary wave, which is determined by the following formula:

μ = (1 − ω²_pl/ω²)^(1/2) = (1 − n_e/n_c)^(1/2),

where n_c is the critical density for the probing frequency. In the calculations, the plasma electron concentration distributions were given in the parabolic-profile approximation with maximum concentrations of 2 × 10 19, 5 × 10 19, and 1 × 10 20 m -3 for the cases of circular and elongated plasma cross sections. Then, the refraction equation was solved numerically by the Euler method. As a result of the calculations, the values of the axial deflection of radiation for each chord were obtained for the wavelengths used.
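The Euler-method ray tracing described above can be sketched as follows. The minor radius (0.67 m), launch geometry, and step size are assumptions for illustration, not values taken from the paper:

```python
import math

def critical_density(wavelength_m):
    """Critical (cutoff) electron density for a given probing wavelength."""
    eps0, m_e, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8
    omega = 2 * math.pi * c / wavelength_m
    return eps0 * m_e * omega**2 / e**2

def mu(x, y, n0, a, n_c):
    """Ordinary-wave refractive index for a parabolic density profile."""
    ne = n0 * max(0.0, 1.0 - (x * x + y * y) / (a * a))
    return math.sqrt(1.0 - ne / n_c)

def trace_vertical_ray(x0, n0, a, n_c, ds=2e-4):
    """Euler integration of d/ds(mu * t) = grad(mu); returns the lateral
    deflection at exit of a ray launched vertically at offset x0."""
    x, y, tx, ty, h = x0, -a, 0.0, 1.0, 1e-5
    while y < a:
        # Numerical gradient of mu at the current point
        gx = (mu(x + h, y, n0, a, n_c) - mu(x - h, y, n0, a, n_c)) / (2 * h)
        gy = (mu(x, y + h, n0, a, n_c) - mu(x, y - h, n0, a, n_c)) / (2 * h)
        m = mu(x, y, n0, a, n_c)
        vx, vy = m * tx + gx * ds, m * ty + gy * ds  # Euler step
        norm = math.hypot(vx, vy)
        tx, ty = vx / norm, vy / norm
        x, y = x + tx * ds, y + ty * ds
    return x - x0

n_c = critical_density(915e-6)
delta = trace_vertical_ray(0.3, 1e20, 0.67, n_c)  # a = 0.67 m assumed
print(f"lateral deflection: {delta * 1e3:.1f} mm")
```

Since the plasma has μ < 1 with μ increasing outward, an off-axis ray is deflected away from the axis; centimeter-scale deflections at these densities are comparable to the antenna apertures, which is why refraction limits the usable chords.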
In addition, as the beam deviates from the rectilinear trajectory, its optical path changes, which leads to a change in the phase of the radiation arriving at the receiving antenna. The magnitude of the relative change in the phase of the probing radiation caused by the change in the trajectory due to refraction was also calculated.
T-15MD TOKAMAK INTERFEROMETER WITH THE PROBING RADIATION WAVELENGTH λ = 915 μm

The source for the probing radiation with wavelength λ = 915 μm in the T-15MD tokamak interferometer is a synthesizer which generates signals with frequencies of 7.45898 GHz, 7.45713 GHz, and 77 MHz using one quartz resonator REF (Fig. 5a).
The synthesizer also multiplies the frequency of 7.45898 GHz by 44, and the resulting 328.195 GHz signal is radiated to probe the plasma in the tokamak along one of the chords. The signal coming from the vacuum chamber arrives at the mixing unit, which also receives the 7.45713 GHz signal after a preliminary frequency multiplication by 22 inside the mixer. At the output of the mixing block, a signal with an intermediate frequency of 81.4 MHz is produced. In order to transfer the measurements to the baseband, this signal is down-converted by another mixer through mixing with the 77 MHz signal coming from the synthesizer. As a result, a signal with a frequency of 4.4 MHz is obtained, the phase of which carries information about the average electron density of the probed plasma. It is worth noting that, when probing along the central chords through the equatorial nozzle in the case of the maximum plasma density of 1 × 10 14 cm -3 in the center, the phase shift reaches 70 × 2π radians when taking into account the double passage through the plasma. This signal is fed into the data acquisition and processing system (DAPS), where it is digitized and subsequently processed with FPGA boards. The method is based on comparing the current phase of the signal from the detector with the initial phase at t = 0.
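The frequency plan can be verified arithmetically. Reading the description as a ×22 multiplication followed by second-harmonic mixing (an assumption about how the harmonic mixer operates) reproduces the quoted intermediate frequencies:

```python
f1 = 7.45898e9  # Hz, probe-branch reference from the synthesizer
f2 = 7.45713e9  # Hz, LO-branch reference
f3 = 77e6       # Hz, second-stage LO

f_probe = 44 * f1                # probing frequency, ~328.195 GHz
f_if1 = f_probe - 2 * (22 * f2)  # first IF after harmonic mixing, ~81.4 MHz
f_if2 = f_if1 - f3               # baseband IF, ~4.4 MHz
print(f_probe / 1e9, f_if1 / 1e6, f_if2 / 1e6)
```

The 1.85 MHz offset between the two references, multiplied by the effective ×44 chain, yields exactly the 81.4 MHz first intermediate frequency.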
The results of calculations of the magnitude of axial deviation Δ and the magnitude of the relative change in phase of the probing radiation with a wavelength of λ = 915 μm due to the change in the trajectory caused by the refraction deviation of the beam dϕ/ϕ are given in Table 1 for the circular plasma configuration and in Table 2 for the elongated configuration.
It can be seen that, because of refraction, in the case of a circular plasma configuration starting from a maximum density of n 0 = 5 × 10 19 m -3 the center of the probing radiation does not reach the receiving antenna for part of the chords. Probing with λ = 915 μm therefore allows measurement of the chord-averaged electron concentration through the equatorial nozzle of the installation only at moderate densities, while in the case of a circular configuration of the plasma cord, or with an elongated configuration, one needs the vertical channel, which works only in regimes with low plasma electron density. In other cases, it is necessary to use shorter-wavelength radiation to reduce the negative effect of refraction.
T-15MD TOKAMAK INTERFEROMETER WITH THE PROBING RADIATION WAVELENGTH λ = 337 µm
As the source of radiation with a shorter wavelength, it is proposed to use a gas-discharge HCN laser with high-frequency pumping. Previous measurements with HCN laser interferometers were performed on tokamaks at the Kurchatov Institute [5,12]. Figure 6 shows the proposed block diagram of the interferometer for the T-15MD tokamak [13].
The HCN laser generates radiation with a wavelength of 337 μm and a power of several tens of milliwatts. It is then separated into a measuring channel and a reference channel. In the measuring channel, the plasma is probed along the chords, while in the reference channel the laser beam passes through a frequency shifter, which shifts its frequency by 106 kHz. The probing beam of each chord is then mixed with the reference beam and arrives at a detector, from which the received intermediate-frequency signal of 106 kHz is fed to the data acquisition system of the DAPS. The plasma-independent intermediate-frequency signal formed in the reference interferometer is also fed to the DAPS, where the signals from the detectors are digitized and further processed using FPGA boards.
By comparing the signals of the measuring interferometer and the reference interferometer, the phase shift induced by the plasma in the measuring interferometer is determined at each point in time. In the case of the maximum plasma density of 1 × 10 14 cm -3 at the center, the expected value of the phase shift reaches 30 × 2π radians when probing along the central chords through the equatorial nozzle with allowance for double passage through the plasma. It is worth noting that there is a significant influence of vibrations when probing plasma with waves of this range, which is why most modern HCN laser interferometers are two-color. Since the 1980s, tokamaks at the Kurchatov Institute have used hollow dielectric oversized beam guides and other quasi-optical elements, rigidly attached to the flange of the installation, such as those described in [14], to transmit the probing radiation in the interferometers. Such a solution significantly reduces the influence of vibrations on the phase measurements [12] and will make it possible to conduct them without resorting to probing at two wavelengths. The use of hollow dielectric oversized beamlines is also assumed for the transmission of probing radiation in the interferometers of the T-15MD tokamak.
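The digital comparison of the measuring and reference channels amounts to extracting a phase difference at the 106 kHz intermediate frequency. A minimal IQ-demodulation sketch (illustrative only, not the actual FPGA firmware; the 2 MHz sampling rate is an assumption):

```python
import math

def iq_phase(x, f_if, fs):
    """Phase of the f_if component of sample list x via IQ demodulation."""
    i = sum(v * math.cos(2 * math.pi * f_if * k / fs) for k, v in enumerate(x))
    q = sum(v * math.sin(2 * math.pi * f_if * k / fs) for k, v in enumerate(x))
    return math.atan2(q, i)

def plasma_phase(sig, ref, f_if, fs):
    """Plasma-induced phase: measuring channel minus reference channel."""
    return (iq_phase(sig, f_if, fs) - iq_phase(ref, f_if, fs)) % (2 * math.pi)

# Synthetic check: 106 kHz IF sampled at 2 MHz; the "plasma" adds 1.0 rad.
fs, f_if, n = 2e6, 106e3, 2000
ref = [math.cos(2 * math.pi * f_if * k / fs) for k in range(n)]
sig = [math.cos(2 * math.pi * f_if * k / fs - 1.0) for k in range(n)]
dphi = plasma_phase(sig, ref, f_if, fs)
print(f"recovered phase: {dphi:.3f} rad")
```

In practice the phase must also be unwrapped across 2π jumps, since the total plasma-induced shift reaches tens of fringes.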
The results of calculations of the magnitude of the axial deviation Δ and the magnitude of the relative phase change of the probing radiation with a wavelength λ = 337 μm due to the change in the trajectory caused by the refractive deflection of the beam dϕ/ϕ are given in Table 3 for the circular plasma configuration and in Table 4 for the elongated configuration.
As can be seen from the results of the calculation, in the case of both circular and elongated configurations of the plasma cord at densities at the center up to n 0 = 1 × 10 20 m -3 , the signal arrives from most of the chords without significant deflection. Thus, it is possible to measure the electron plasma concentration in the T-15MD tokamak practically along all proposed chords and at substantially higher densities than when probing by radiation with a wavelength of λ = 915 μm.
CONCLUSIONS
The axial deflections of the probing beams and the relative changes in the phase incursion of the probing radiation were estimated for the T-15MD tokamak plasma for the cases of circular and elongated configurations with allowance for refraction. Calculations were performed for probing beams with wavelengths λ = 915 μm and λ = 337 μm for a parabolic plasma electron concentration distribution with maximum values of 2 × 10 19, 5 × 10 19, and 1 × 10 20 m -3.
The design of emitting and receiving horn-type antennas has been proposed for radiating and receiving the signal reflected from the internal wall of the T-15MD tokamak via its equatorial nozzle and for vertical probing via the top and bottom nozzles of the T-15MD tokamak.
The possibility of measuring the main plasma parameter, the average electron concentration along the central chords, when probing by radiation with a wavelength of λ = 915 μm is shown.
Phase measurements when the plasma is probed by radiation with a wavelength of λ = 337 μm in the T-15MD tokamak can be carried out to determine the chord values of the electron plasma concentration over practically all of the considered chords.

FUNDING

This work was financially supported by the Rosatom State Corporation.
Haemonchus contortus Acetylcholine Receptors of the DEG-3 Subfamily and Their Role in Sensitivity to Monepantel
Gastro-intestinal nematodes in ruminants, especially Haemonchus contortus, are a global threat to sheep and cattle farming. The emergence of drug resistance, and even multi-drug resistance to the currently available classes of broad spectrum anthelmintics, further stresses the need for new drugs active against gastro-intestinal nematodes. A novel chemical class of synthetic anthelmintics, the Amino-Acetonitrile Derivatives (AADs), was recently discovered and the drug candidate AAD-1566 (monepantel) was chosen for further development. Studies with Caenorhabditis elegans suggested that the AADs act via nicotinic acetylcholine receptors (nAChR) of the nematode-specific DEG-3 subfamily. Here we identify nAChR genes of the DEG-3 subfamily from H. contortus and investigate their role in AAD sensitivity. Using a novel in vitro selection procedure, mutant H. contortus populations of reduced sensitivity to AAD-1566 were obtained. Sequencing of full-length nAChR coding sequences from AAD-susceptible H. contortus and their AAD-1566-mutant progeny revealed 2 genes to be affected. In the gene monepantel-1 (Hco-mptl-1, formerly named Hc-acr-23H), a panel of mutations was observed exclusively in the AAD-mutant nematodes, including deletions at intron-exon boundaries that result in mis-spliced transcripts and premature stop codons. In the gene Hco-des-2H, the same 135 bp insertion in the 5′ UTR created additional, out of frame start codons in 2 independent H. contortus AAD-mutants. Furthermore, the AAD mutants exhibited altered expression levels of the DEG-3 subfamily nAChR genes Hco-mptl-1, Hco-des-2H and Hco-deg-3H as quantified by real-time PCR. These results indicate that Hco-MPTL-1 and other nAChR subunits of the DEG-3 subfamily constitute a target for AAD action against H. contortus and that loss-of-function mutations in the corresponding genes may reduce the sensitivity to AADs.
Introduction
Throughout the world, successful livestock production of ruminants is hampered by gastro-intestinal nematodes. Haemonchus contortus in particular is responsible for substantial losses to the global sheep industry [1]. Haemonchus contortus is a blood-feeding nematode that inhabits the abomasum of sheep, producing in acute infections, severe anemia that can lead to the death of infected animals.
Broad spectrum chemotherapy against gastro-intestinal nematodes is restricted to 3 anthelmintic classes: the benzimidazoles, such as albendazole and oxfendazole; the imidazothiazoles, including levamisole and tetramisole; and the macrocyclic lactones (e.g. ivermectin, moxidectin, abamectin and doramectin). The increased usage of anthelmintics has contributed to the spread of resistant nematodes, with increasing reports of nematodes insensitive to most if not all of the available classes of anthelmintics [2][3][4][5][6][7][8][9][10]. In some countries in the southern hemisphere, sheep farming is severely endangered by such populations [4], further increasing the need for a new class of anthelmintic [11].
Recently, a new class of compounds, the Amino-Acetonitrile Derivatives (AADs) was discovered [12] with good tolerability in mammals and promising activity against drug-resistant nematodes. The AADs are low molecular mass compounds bearing different aryloxy and aroyl moieties on an amino-acetonitrile core [13]. Further studies [14] have allowed the selection of a drug candidate, AAD-1566 (monepantel). In order to investigate the mode of action of this new class of compounds, AAD-resistant Caenorhabditis elegans mutants were generated by EMS mutagenesis. Classical forward genetics revealed that the majority of recuperated AAD-resistant mutants carried mutations in the gene acr-23, a member of the nematode-specific DEG-3 subfamily of nicotinic acetylcholine receptor (nAChR) alpha subunits [12]. Preliminary data had already indicated an involvement of similar acetylcholine receptors in AAD action against H. contortus [12]. Here we report the identification of the gene monepantel-1 (Hco-mptl-1, formerly named Hc-acr-23H) and other members of the DEG-3 subfamily of ACR genes from H. contortus. A panel of different mutations, mis-splicing in particular, in Hco-mptl-1 transcripts from AAD-resistant worms indicates that Hco-MPTL-1 is a target for monepantel action against H. contortus.
Haemonchus contortus isolates
The drug-susceptible H. contortus CRA (Hc-CRA) was received in 1984 from the Veterinary Institute of Onderstepoort, Republic of South Africa and has since been passaged in sheep 75 times. The H. contortus Howick isolate (Hc-Howick) was received from the same institute in 2001. This is a multidrug-resistant isolate that is completely resistant to albendazole, rafoxanide, morantel, ivermectin and trichlorfon [6,15]. The isolate has been passaged in sheep 9 times since being received. The mutant lines Hc-CRA AAD M and Hc-Howick AAD M were selected from Hc-CRA and Hc-Howick, respectively, by in vitro exposure to increasing doses of AAD-1566, alternating with propagation in sheep [12].
Collection of nematode eggs
Haemonchus contortus isolates were propagated in 3-6 month old sheep ('Blanc des Alpes'), which had been experimentally infected with the nematode. The sheep were kept in groups of 4 and housed indoors off pasture to prevent natural infection. After 14 days, they were transferred to individual cages. Starting on day 21 after infection, eggs were collected from homogenized feces and filtered several times through a 32 μm sieve. Eggs were further purified by flotation on 50% sucrose solution, rinsed with water and counted microscopically.
In vivo determination of drug sensitivity
Sheep studies were performed with the approval of a Cantonal animal welfare committee (permit number FR 25A/05). Anthelmintic efficacy tests in sheep were performed according to the guidelines of the World Association for the Advancement of Veterinary Parasitology [16]. Each animal was infected intraruminally on study day −21 with 3000 L3 larvae of H. contortus (cultivated in coprocultures). On study day 0, the sheep were treated with single anthelmintics or combinations thereof as an oral drench at the recommended dose. A sheep was classified as 'cured' when no more eggs were counted in the feces and no adults were found in the abomasum at necropsy.
Recovery of adult Haemonchus contortus and isolation of nucleic acids
Adult worms were recovered from the abomasum of freshly euthanized sheep, washed in Hank's Buffered Salt Solution (HBSS; Invitrogen) and immediately shock-frozen in liquid nitrogen. While frozen, the worms were crushed with a Kontes pellet pestle (Fisher Scientific). The powder was resuspended in 600 μl of lysis buffer (10 mM Tris pH 7.5, 1 mM EDTA, 100 mM NaCl, 0.5% SDS, 100 μg/ml RNase A) and incubated at 37°C for 1 hour. Pronase (100 μg/ml) was added to the mixture and the tubes were incubated at 37°C until the solution became clear. The samples were extracted with equal volumes of phenol:chloroform (1:1) and chloroform. The DNA was ethanol-precipitated, washed and resuspended in 50 μl of Tris-Cl (pH 7.5). For RNA extraction, worms were homogenized in TRIzol and processed according to the instructions of the supplier (Invitrogen). To remove DNA contamination, the RNA samples were treated with a TURBO DNA-free kit (Ambion). To generate cDNA, 1 μg of total RNA was reverse transcribed using a d(T)30 primer and Moloney Murine Leukemia Virus Reverse Transcriptase (MMLV RT; SMART cDNA library construction kit from Clontech).
Construction and screening of a Haemonchus contortus cDNA library
A total of 4 μg of mRNA was isolated from a mixture of male and female Hc-CRA using an Oligotex kit from Qiagen. A cDNA library was constructed with the ZAP-cDNA Cloning kit and Gigapack III Gold packaging kit. The library was screened at high stringency (hybridization at 65°C in 5× SSC, 5× Denhardt's solution, 0.1% SDS, 0.1% sodium pyrophosphate, 100 μg/ml salmon sperm DNA; final wash at 60°C in 0.2× SSC, 0.1% SDS) with a 32P-labeled 456 bp fragment of Hco-mptl-1. This fragment had been amplified from cDNA with the primers Hco-mptl-1_frw3 and Hco-mptl-1_rev1 and cloned into pCR2.1-TOPO (Invitrogen). Positive phages were taken through 3 rounds of plaque purification with this probe and the phagemid (pBluescript SK+) was excised using the ExAssist helper phage in the E. coli SOLR strain. Inserts were sequenced in both directions with standard M13 forward and reverse primers and the internal primers Hco-mptl-1_frw4 and Hco-mptl-1_rev3. The sequences were read and assembled using 4Peaks (by A. Griekspoor and T. Groothuis; http://mekentosj.com).
PCR
The primers used for PCR amplification, real-time PCR or for cDNA first-strand synthesis of H. contortus nAChR genes are summarized in Table S1. For nested PCR on cDNA with spliced leader (SL) primers, the primary products were diluted 50-fold and 2 μl were used for the second PCR with nested primers. The annealing temperature was fixed at 55°C for cDNA and 58°C for genomic DNA template. PCR products were gel-purified using the NucleoSpin Extract II kit (Macherey-Nagel) and cloned into either pGEM-T Easy (Promega) or pCR2.1-TOPO (Invitrogen). Plasmid DNA was purified using the QIAprep Spin Miniprep Kit (Qiagen) and sequenced using the standard primers M13 forward and reverse and, if necessary, an additional internal primer to cover long products. For rapid amplification of cDNA ends by PCR (RACE-PCR), an internal reverse primer (Table S1) was combined with a spliced leader sequence (1 or 2) to obtain the 5′ UTR, or an internal forward primer was combined with a poly-dT primer for the 3′ UTR of the transcript.

Author Summary

Worldwide, sheep and cattle farming are threatened by anthelmintic-resistant gastro-intestinal nematodes. A novel chemical class of synthetic anthelmintics was recently discovered, the Amino-Acetonitrile Derivatives (AADs), which exhibit excellent efficacy against various species of livestock-pathogenic nematodes and, more importantly, overcome existing resistances to the currently available anthelmintics. Haemonchus contortus, the largest nematode found in the abomasum of sheep and cattle, is a blood-feeding parasite that causes severe anemia that can lead to the sudden death of the infected animal; H. contortus is highly susceptible to AADs. In order to elucidate the mode of action of the AADs, we have developed 2 independent H. contortus mutants with reduced sensitivity to monepantel (AAD-1566). Both mutants were affected in their acetylcholine receptor (ACR) genes of the DEG-3 subfamily. In particular, we discovered a panel of mutations in the gene monepantel-1 (Hco-mptl-1) including deletions leading to mis-splicing, insertions and point mutations leading to premature termination of translation of the protein. These findings support the notion that Hco-MPTL-1 and other nAChR subunits of the DEG-3 subfamily are targets of the AADs. The fact that the DEG-3 subfamily of acetylcholine receptors is nematode-specific may explain the good therapeutic index of AADs in mammals.
For real-time PCR, 1 μg of total RNA from adult H. contortus was used to synthesize first-strand cDNA by random priming using Superscript II reverse transcriptase (Invitrogen) in a final volume of 20 μl following the manufacturer's instructions. Reverse-transcribed material corresponding to 40 ng RNA was amplified in 25 μl MESA GREEN qPCR MasterMix Plus for SYBR Assay (Eurogentec) using the ABI SDS7000 Sequence Detection System under the following conditions: 1 cycle of 95°C for 15 minutes followed by 40 cycles of 95°C for 15 seconds and 60°C for 1 minute. The primer pairs used for the amplification are listed in Table S1 and target the following genes: β-tubulin, Hco-mptl-1, Hco-des-2H and Hco-deg-3H. Three independent total RNA extractions were performed and each was tested in duplicate.
Relative expression values were calculated according to Livak and Schmittgen [17]; a 136 bp region within the phosphoglucose isomerase gene was used for normalization, a 122 bp region within the β-tubulin gene was used as a (presumably) unaffected control, and no-reverse-transcriptase and no-template reactions served as negative controls. The specificity and identity of individual amplicons were verified by melt-curve analysis and visualized on a 2% agarose gel.
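The 2^-ΔΔCt method of Livak and Schmittgen used here can be written compactly; the Ct values below are hypothetical and purely illustrative:

```python
def relative_expression(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    """Fold change of a target gene in mutant vs. wild type, normalized
    to a reference gene, by the 2^-ddCt method."""
    d_ct_mut = ct_target_mut - ct_ref_mut  # dCt in the mutant sample
    d_ct_wt = ct_target_wt - ct_ref_wt     # dCt in the wild-type sample
    return 2 ** -(d_ct_mut - d_ct_wt)

# Hypothetical Ct values: target 2 cycles later relative to reference in
# the mutant than in the wild type -> 4-fold lower expression.
fc = relative_expression(26.0, 20.0, 24.0, 20.0)
print(fc)
```

A fold change below 1 indicates reduced transcript abundance in the mutant, as reported here for some of the DEG-3 subfamily genes.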
In vivo sensitivity of Haemonchus contortus AAD mutants
In order to study the mode of action of the AADs, we used 2 mutant isolates, Hc-CRA AAD M and Hc-Howick AAD M, selected from the parent Hc-CRA and Hc-Howick isolates, respectively. Both mutant isolates showed reduced sensitivity to AAD-1566 (monepantel) in vitro [12]. To test whether the observed loss of susceptibility to AAD-1566 in vitro was relevant for the situation in vivo, Hc-CRA, Hc-Howick and their AAD M derivatives were challenged in vivo with single compounds or combinations thereof; AAD-1566 and the commercial compounds were applied to sheep at their recommended doses. Sheep were infected intraruminally with Hc-CRA AAD M. Following treatment with AAD-1566 at the proposed minimum dose rate of 2.5 mg/kg body weight [18], eggs were found in the feces and adults were seen at necropsy (Table 1). Likewise, nematode eggs and adults were also found in sheep infected with Hc-Howick AAD M larvae when treated either with AAD-1566 or albendazole or with a combination of AAD-1566 and ivermectin (Table 1). The offspring from the Hc-Howick AAD M isolate that survived the AAD-1566 and ivermectin treatment were cultured and challenged with albendazole and levamisole over the following generations (data not shown). Finally, Hc-Howick AAD M was able to survive a full simultaneous in vivo treatment with albendazole, levamisole, ivermectin and AAD-1566, administered at their recommended doses (Table 1). Thus the reduction of sensitivity to AAD-1566 induced in vitro was also relevant in vivo for the mutant lines. The AAD-mutant H. contortus apparently did not show any alterations in motility, infectivity to sheep (determined by the numbers of adult H. contortus recovered at necropsy) or egg production, and did not exhibit any phenotype with respect to the ultrastructure (by electron microscopy) of the cuticle, head or tail.
Cloning of Haemonchus contortus Hco-mptl-1
To obtain the full-length coding sequence of the Hco-mptl-1 gene, a lambda phage cDNA library from mRNA of adult H. contortus was constructed and screened at high stringency with a radioactive probe from a partial Hco-mptl-1 sequence. After 3 rounds of selection, a clone with the full-length coding sequence, Hco-mptl-1, was isolated and sequenced. The Hco-mptl-1 mRNA is composed of at least 17 exons and 16 introns (1992 bp) with a short 5′ UTR and 3′ UTR (21 bases and 42 bases, respectively). The transcript is trans-spliced, as the splice leader 1 (SL1) is present at its 5′ end. Interestingly, a start codon (AUG) is present in exon 1 but is followed after 8 amino acids by an in-frame stop codon (UGA). This is a feature found in many other organisms [20][21][22] and is assumed to play a role in the regulation of translation efficiency. In most cases, upstream AUGs decrease mRNA translation efficiency and have a strong, negative regulatory effect [23]. The longest open reading frame (ORF) in the Hco-mptl-1 gene is obtained when translation is initiated at the second AUG codon in exon 3 and extends over 1695 bases. Overlapping long-range PCR was performed in order to estimate the total size of Hco-mptl-1. The gene was found to be approximately 18.5 kb long with a large intron (about 7 kb) between exons 3 and 4 (see below). The predicted Hco-MPTL-1 protein consists of 564 amino acids and possesses motifs typical of Cys-loop ligand-gated ion channels, including an N-terminal signal peptide of 18 amino acids [24], 4 transmembrane domains and the Cys-loop (2 cysteines separated by 13 amino acids). Loops A to F, which are involved in ligand binding [25], are also present in the protein (Figure S1). In loop C, there are 2 adjacent cysteines, defining Hco-MPTL-1 as a nAChR alpha subunit.
The predicted Hco-MPTL-1 protein shares 48.5% identity and 66.8% similarity with C. elegans ACR-23 and 60.2% identity and 70.7% similarity with C. elegans ACR-20. The novel H. contortus nAChR was originally named Hc-ACR-23H based on a partial sequence that was most closely related to C. elegans ACR-23 [12].
In the light of the full-length sequence, this nomenclature seems to have been premature since the Haemonchus nAChR turned out to be more closely related to C. elegans ACR-20 ( Figure 1). In the absence of a complete record of ACR paralogues from H. contortus, and in analogy to levamisole-insensitive (lev-) mutants in C. elegans [26], we propose to name the gene monepantel-1 (Hco-mptl-1) due to its apparent involvement in monepantel sensitivity.
Hco-mptl-1 mutations associated with the AAD-mutant phenotype
In order to compare the Hco-mptl-1 sequences from the AAD-susceptible isolates and their AAD-mutant progeny, primers were designed at each extremity of the ORF (Hco-mptl-1_5′_frw3 and Hco-mptl-1_3′end_rev1) and the full-length Hco-mptl-1 coding sequences were amplified from cDNA from adults. A product of about 1800 bp was obtained for all isolates apart from Hc-CRA AAD M, which produced a shorter product of 1650 bp (Figure 3B). Sequencing clones of the latter revealed that they lacked either exon 4 or exon 15 (Figure 4, Hco-MPTL-1-m2 and m3). This was confirmed with primers flanking either exon 4 (Hco-mptl-1_5′_frw2 and Hco-mptl-1_rev8; Figure 3C) or exon 15 (Hco-mptl-1_frw6 and Hco-mptl-1_rev6; Figure 3D). PCR with a SL1 forward primer and a reverse primer in the Hco-mptl-1 coding sequence (Hco-mptl-1_rev1, product of about 1200 bp) also produced shorter products (1000 bp and 850 bp; Figure 3A).

Mutations cause mis-splicing of the Hco-mptl-1 transcript in Hc-CRA AAD M mutants

To understand the molecular basis of exon loss in the Hc-CRA AAD M isolate, PCR primers Hco-mptl-1_frw8 and Hco-mptl-1_rev6 (Table S1) were designed to flank the mis-spliced exon 15. PCR was performed using genomic DNA as a template. Sequencing of cloned PCR products revealed a 10 bp deletion upstream of exon 15 in the Hc-CRA AAD M mutant that encompasses the predicted splice acceptor site (UUUCAG; Figure 5). Presumably, the splicing machinery is not able to identify the end of intron 14 and uses the next splice acceptor site (intron 15). This would explain why exon 15 is skipped (Figure 4, Hco-MPTL-1-m3). Joining of exon 14 to exon 16 causes a frame-shift leading to a premature stop codon. With primers flanking exon 4 (Hco-mptl-1_frw10/gDNA and Hco-mptl-1_rev8; Table S1), a 323 bp deletion was detected consisting of the end of intron 3 (206 bp) and most of exon 4 (117 bp).
Again, loss of the predicted splice acceptor site at the end of intron 3 may explain the observed loss of exon 4 (Figure 4, Hco-MPTL-1-m2), since the splicing machinery will use the next available splice acceptor site (intron 4), joining exon 3 and exon 5. The resulting frame-shift causes a premature stop at codon 19 (TGA), terminating translation after the signal peptide (Figure 4, Hco-MPTL-1-m2).

Table 1 legend: Sheep were treated orally with commercial compounds at the recommended doses. An animal was considered to have been effectively treated when no more eggs were counted in the feces and no adults were found in the abomasum at necropsy. doi:10.1371/journal.ppat.1000380.t001

Detection of the Hco-mptl-1 E93* point mutation in the Hc-Howick AAD M nematodes

No obvious mutations such as mis-spliced exons were detected in the Hc-Howick AAD M isolates. When sequencing the Hco-mptl-1 coding regions (SL1 and Hco-mptl-1_rev6) from both the susceptible and the AAD-1566-mutant Howick isolates, a transversion from G277 to T in exon 6 of the Hco-mptl-1 gene was observed that leads to a premature stop codon (E93*; Figure 6). Direct sequencing of RT-PCR products (using the Hco-mptl-1_frw4 and Hco-mptl-1_rev1 primers) revealed that about 80% of the Hc-Howick AAD M cDNAs, as estimated from the electropherogram [27], carried a T at position 277 (Figure 6A). The point mutation underlying E93* creates a restriction site for the endonuclease BfrI (recognition site: CTTAAG) that lent itself to RFLP analysis. Only the PCR product amplified from cDNA of Hc-Howick AAD M was digested by BfrI (Figure 6B). As expected from the sequencing, a small proportion (about 20%) of the product was not cut, indicating that not all of the Hco-mptl-1 genes from the Hc-Howick AAD M population carried the G277T mutation. When this BfrI-unrestricted product from Hc-Howick AAD M was excised from an agarose gel, cloned and sequenced, a further polymorphism was detected that led to skipping of exon 8 (Figure 4, Hco-MPTL-1-m6).
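The RFLP logic can be illustrated in a few lines. The 10-nt context sequence below is hypothetical (the real Hco-mptl-1 sequence is not reproduced here), but it mirrors how a G→T transversion at the first base of a GAA (Glu) codon simultaneously creates a TAA stop codon and a BfrI site:

```python
BFRI_SITE = "CTTAAG"  # BfrI recognition sequence (AflII isoschizomer)

def cuts(seq):
    """True if BfrI would cut the given sequence."""
    return BFRI_SITE in seq.upper()

# Hypothetical context around nucleotide 277; the GAA codon (Glu93)
# starts at index 4 of this toy string.
wt = "AACTGAAGTT"
mut = wt[:4] + "T" + wt[5:]  # G -> T transversion; GAA -> TAA (stop)

print(cuts(wt), cuts(mut))
```

The wild-type context lacks CTTAAG while the mutant contains it, so BfrI digestion distinguishes the two alleles exactly as exploited in the RFLP assay.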
As this exon is very short (22 bases), it was impossible to discriminate between mutant and parental wild-type PCR products (Figure 3). Loss of exon 8 causes a frame-shift leading to a premature stop codon and a predicted Hco-MPTL-1 protein truncated at amino acid 166 (Figure 4). A minority of the Hco-mptl-1 PCR products obtained from Hc-Howick AAD M did not contain any major mutations. These sequences could come from AAD-susceptible individuals within the Hc-Howick AAD M populations or from AAD-mutant individuals that carry other, yet-to-be-identified mutations.
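Skipping a 22-nt exon shifts the downstream reading frame by one base. A toy example (the exon sequences are invented, not taken from Hco-mptl-1) shows how such a frame-shift exposes a premature in-frame stop codon:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def first_stop_codon(cds):
    """1-based codon number of the first in-frame stop, or None."""
    for i in range(0, len(cds) - 2, 3):
        if cds[i:i + 3] in STOP_CODONS:
            return i // 3 + 1
    return None

# Invented exons: exon_b is 22 nt (22 mod 3 = 1), so its loss shifts
# the frame of exon_c and exposes a TGA stop.
exon_a = "ATGGCTGCTGCA"            # 12 nt
exon_b = "GTTGTTGTTGTTGTTGTTGTTG"  # 22 nt
exon_c = "CTCACCTGCTGA"            # 12 nt

normal = exon_a + exon_b + exon_c
skipped = exon_a + exon_c
print(first_stop_codon(normal), first_stop_codon(skipped))
```

In the normally spliced transcript no in-frame stop occurs, whereas the exon-skipped transcript terminates at codon 8, analogous to the truncated Hco-MPTL-1 proteins described above.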
An insertion in the 5′ UTR of the des-2 homologue of Haemonchus contortus AAD mutants
As the DEG-3 subfamily gene Hco-des-2H has also been implicated in AAD action in H. contortus [12], we cloned and sequenced the full-length Hco-des-2H coding sequence from H. contortus cDNA by RACE-PCR. Using primers NheI_des2_frw1 and XhoI_des2_rev1 (Table S1), 2 products were obtained from the four H. contortus isolates. Cloning and sequencing revealed the smaller transcript to lack 168 bases coding for part of the internal loop between TM3 and TM4, possibly indicating alternative splicing of the Hco-des-2H gene. The predicted protein (full version) consists of 534 amino acids and shows 69% identity and 80% similarity with C. elegans DES-2. Hco-DES-2H possesses motifs typical of Cys-loop ligand-gated ion channels (4 transmembrane domains, a Cys-loop and loops A to F) and the 2 adjacent cysteines in the C-loop, defining Hco-DES-2H as a nAChR alpha subunit (Figure S2). When comparing the Hco-des-2H coding sequences (Table 2) obtained from Hc-CRA and Hc-CRA AAD M, and from Hc-Howick and Hc-Howick AAD M, respectively, no mutation was found to correlate perfectly with AAD susceptibility. Nevertheless, using the SL1 primer and 2 internal reverse primers (Hco-AcRa_rev3 and Hco-AcRa_rev2) in a nested PCR experiment, an insertion of 135 bp was detected in the 5′ UTR of the Hco-des-2H gene from the Hc-CRA AAD M and Hc-Howick AAD M isolates, creating 2 additional start codons. Both start codons are followed by an early in-frame stop codon.
In the C. elegans genome, DES-2 and DEG-3 are encoded on the same operon and both subunits are co-expressed to form a functional channel [28,29]. Performing RACE-PCR on H. contortus (adults) cDNA we identified Hco-deg-3H encoding a protein of 569 amino acids that shows 68.4% identity and 78% similarity to C. elegans DEG-3. Again, Hco-DEG-3H carried all the hallmarks of nAChR alpha subunits (Figure S3). No mutations were detected for Hco-deg-3H in the AAD-mutant H. contortus isolates compared to the parental isolates. The Hco-deg-3H mRNA carries a spliced leader type 2 (SL2) sequence at its 5′ end. To test whether Hco-des-2H and Hco-deg-3H are also on an operon in H. contortus, a long range PCR was performed using a forward primer designed at the end of Hco-des-2H (Hco-des2_frw11) and a reverse primer at the beginning of Hco-deg-3H (Hco-deg3_2r). A band of approximately 6 kb was obtained for the 4 isolates, confirming that Hco-des-2H and Hco-deg-3H are encoded on a single operon. However, the distance between the 2 genes is 10 times larger in H. contortus than in C. elegans.
Discussion
A new chemical class of synthetic anthelmintics, the AADs, was recently discovered [12]. The AADs exhibit excellent efficacy against various species of livestock-pathogenic nematodes and, more importantly, can control nematodes resistant to the currently available anthelmintics [30,31]. To gain insight into the mode of action of the new AADs, a classical 'forward genetic' screen for AAD-resistant C. elegans mutants was performed previously [12]. As a result, AADs were proposed to act through the nAChR ACR-23, a member of the nematode-specific DEG-3 subfamily [32]. By screening the currently available (but incomplete) H. contortus genome sequence for DEG-3 nAChR homologues, it was found that this subfamily is conserved between C. elegans and H. contortus. Six paralogous proteins out of 8 in C. elegans or C. briggsae were identified (Figure 1), in contrast to only 2 in the genome of B. malayi [33]. The AADs possess a unique mode of action: the nAChR subunits involved in AAD action are different from those targeted by imidazothiazoles [34,35] and there is no cross-resistance between the 2 chemical classes [12].
Two independent AAD-mutant H. contortus lines were used to screen for mutations in ACR genes of the DEG-3 subfamily. Two genes were found to be affected: the H. contortus des-2 homologue Hco-des-2H, where all AAD-mutant H. contortus carried an insertion in the 5′ UTR introducing 2 additional, out-of-frame start codons, and the gene monepantel-1 (Hco-mptl-1), for which a panel of different mutations was detected in AAD-mutant (AAD M ) H. contortus. Apart from 1 nonsense mutation discovered in Hc-Howick AAD M nematodes (Hco-MPTL-1-m5; Figure 4), the detected mutations all involved mis-splicing resulting in loss of exon(s) from the mRNA, as indicated by shortened reverse transcriptase-PCR products (Figure 3). This unusual mechanism has not been described before in H. contortus. In the genetic screen performed on AAD-resistant C. elegans [12], 2 mutants bearing a G-to-A transition of the conserved G nucleotide in the 3′ splice acceptor sites of either the second or third introns were described; these mutations are predicted to cause an increase in the size of the mRNA due to the lack of splicing of the affected intron. In another study [36], a single base pair change in the first intron of the lev-8 subunit gene was identified in a partially levamisole-resistant C. elegans mutant. This mutation leads to alternative splicing and introduction of a premature stop codon. In the case of mutations Hco-MPTL-1-m2 (loss of exon 4), Hco-MPTL-1-m3 (loss of exon 15) or Hco-MPTL-1-m6 (loss of exon 8), exon skipping creates a frame-shift that leads to a premature stop codon (Figure 4). These mutations, including Hco-MPTL-1-m5 (stop codon), are predicted to result in a truncated, non-functional Hco-MPTL-1 protein and/or, if the mutant mRNA is recognized by the nonsense-mediated mRNA decay (NMD) machinery [37], degradation of the mRNA (some known genes of the NMD machinery in C. elegans have orthologues in the H. contortus genome; Rufener and Mäser, unpublished).
Measuring the expression levels of the 3 DEG-3 subfamily genes Hco-mptl-1, Hco-des-2H and Hco-deg-3H in adult H. contortus, we found statistically significant differences in the steady state level of mRNA in AAD-mutant worms. In the Hc-CRA AAD M isolate, a significant increase of the Hco-deg-3H transcript was observed. A possible explanation may be that this compensates for the loss of the Hco-MPTL-1 subunit, since no more full-length Hco-mptl-1 transcript was detectable in Hc-CRA AAD M. In the case of Hc-Howick AAD M, all 3 nAChR genes were down-regulated compared to Hc-Howick. Although we cannot give a result-based explanation, we interpret the finding that the expression of DEG-3 subfamily nAChR genes is affected in H. contortus as further evidence for the involvement of these genes in AAD susceptibility.
The mutations Hco-MPTL-1-m1 (loss of exons 2 and 3) and Hco-MPTL-1-m4 (partial loss of exons 4 and 15) did not cause a frameshift, but resulted in the loss of the signal peptide and the first 39 amino acids of the extracellular loop for the first mutation, and a truncated protein for the second mutation. Interestingly, 1 of the previously identified AAD-resistant C. elegans mutants also carried a mutation in the signal peptide of the Cel-ACR-23 protein [12]. Receptors are assembled in the endoplasmic reticulum (ER) [38] and interference with the signal peptide could result in mis-localization of the protein or in improper interactions with ER-resident, ACR-specific chaperones [25,[39][40][41]. Furthermore, it is known that the expression, assembly and transport to the surface of ACR subunits is subject to stringent quality controls that guarantee the functional competence of the final product [42][43][44]. Truncated nAChR proteins are likely to be targeted to the lysosome and degraded.
In summary, we have detected a large number of different mutations affecting the Hco-mptl-1 gene and transcript, respectively, from AAD-mutant H. contortus (Table 2). For the benzimidazoles, a variety of different mutations in the target protein β-tubulin are associated with drug resistance, 3 so far from H. contortus [15,45,46] and many more from phytopathogenic fungi [47]. These are point mutations that are thought to interfere with benzimidazole binding while preserving microtubular function. The mutations have less drastic effects on the predicted protein than those described here for Hco-mptl-1 of H. contortus. At present, we do not know whether Hco-mptl-1 is an essential gene in H. contortus, but our findings indicate that it may not be. There were no mutations in common between H. contortus CRA-AAD M and Howick-AAD M, indicating that the genetic screen was not saturated. However, for Hco-des-2H, an insertion of 135 bp creating 2 additional start codons was present in the 5′ UTR from both AAD M isolates. While Hco-des-2H mRNA levels were significantly lower in Hc-Howick AAD M (compared to Hc-Howick), no effect was observed on Hco-des-2H expression in Hc-CRA AAD M. It is interesting to note that in C. elegans, mutant worms lacking a functional DES-2 did not exhibit any AAD resistance [12]. The in vitro protocol used to breed AAD-mutant H. contortus is very focused, using a large number of individuals and a borderline subcurative exposure concentration over an extended time period. This protocol differs from the situation in the field; for example, after multiple generations exposed to subcurative treatment in sheep, we have so far not been able to obtain AAD-resistant H. contortus (Pradervand and Kaminsky, unpublished data).
In conclusion, several independent mutations were found in the Hco-mptl-1 gene from H. contortus mutants with reduced sensitivity to monepantel, implicating Hco-MPTL-1 as a likely target for AAD action against H. contortus. However, this hypothesis is difficult to test since H. contortus is not readily amenable to genetic manipulation [48]. The AADs are very well tolerated by sheep or cattle [14]. The absence of DEG-3 subfamily acetylcholine receptors in mammals might explain the selective toxicity of AADs to nematodes.

Text S1. All the sequences as submitted to GenBank.
A Digital Mental Health Intervention for Children and Parents Using a User-Centred Design
The number of children with mental health problems is ever-growing; as a result, nearly 850,000 children in the UK are believed to have clinically significant problems, and only a quarter show evidence of mental illness. Family members often have a hard time dealing with children with mental health problems. As a result, digital mental health interventions are becoming popular for people seeking professional mental health services. Previous studies in this area have also shown that parents who are divorced or working away from home struggle to maintain contact with their children. This lack of communication between the parents and their children can worsen the children's mental health conditions and prevent early diagnosis. Human-centred design thinking is applied step by step in this paper to provide an intuitive understanding of the design process. Five stages of the design thinking process were examined to follow a correct path. The results were promising, and the feedback received assured that the product helps parents to better monitor their children's mental health and provides support when needed. The design thinking process was followed in concordance with the user needs identified from previous studies in this area, which led to a working solution that benefits both parents and children in tackling these problems.
Introduction
Child and adolescent mental health problems are associated with a wide range of functional disorders, and their prevalence has increased significantly [1]. The "1998 Australian Child and Adolescent Mental Health Survey and Welfare" is one of the few national studies in the world, which provides the first national picture of child and adolescent mental health to date [2]. The survey noted that mental health problems are relatively common in about 14% of children and adolescents. In addition, it was observed that only one out of every four children with mental health problems and one out of every four adolescents attended professional services [3].
Depression and other mental health problems can affect people's social and interactive lifestyle. Although few people have sought professional help in recent years, there has been an increase in health-related applications delivered through mobile and desktop devices to support the management of chronic health conditions [4]. Thus, digital mental health interventions are becoming popular among people with low help-seeking behaviour as a route to professional mental health services. Technological advances in treatment are not limited to Web-based programs: socially assistive robotics (SAR) is among the new and emerging technology-based treatment options. One out of every 5 children has a major mental health problem, yet only 20% of them receive assistance [5]. Approximately 41% of parents report that they believe their children have recently experienced a behavioural or emotional adjustment problem, but less than 2% took action against it [6]. Previous studies have shown that divorced parents or parents working away from home have difficulty maintaining contact with their children. According to [7], working parents have a certain disadvantage arising from trying to fulfil their work and family responsibilities at the same time. Difficulties in balancing work and family occur in 75% of parents [8]. There are many interventions currently available that help parents overcome this; however, these interventions are not enough. It has been reported that parents have difficulties in keeping track of their children's mental health while working, and therefore, they are more likely to quit their job [9]. To close this gap, this research tries to find a working solution that will benefit both parents and children in overcoming these issues. The solution needs to tackle this issue from two different sides: the children's side and the parents' side.
For the former, a friendly physical product is proposed that uses an AI system able to apply speech and emotion recognition to detect and analyse different feelings. The product should be a safe haven for children and a friend that is always present when no one else is around. A mobile application complementary to this product will have to be designed for the parents' use. To obtain a solution to this problem, a wide range of research has been done using existing data from previous studies, rather than collecting preliminary qualitative data, in order to comply with COVID-19 regulations and avoid face-to-face interactions.
Background
Digital interventions often come to mind as a way to increase access to evidence-based treatment without assistance. Various online programs have been produced for the scope of mental health issues and tried with children and adolescents. These days, digital mental health interventions (DMHIs) are attracting increasing attention as a solution to the low rates of seeking and selecting professional mental health interventions. Studies addressing the effectiveness of computerised or other self-help mental well-being programs suggest that these can be as effective as face-to-face programs [10].
Lately, the expanding prevalence and usefulness of electronic devices hold great promise for improving the delivery of mental health services. As [11] states, there has consequently been extraordinary interest in engagement with digital interventions, and numerous definitions of engagement have developed. The literature tends to focus on "engagement as usage," and, in computer science and human-computer interaction (HCI) writing, on "engagement as flow." "Young people report feeling more comfortable discussing sensitive and personal issues in the relative anonymity of an online context and use the Internet as a major source of mental health information" [12]. Therefore, today, given the development of the Internet, young people with health problems feel more comfortable getting help for their mental health online than face-to-face, and they use the Internet as a major source of mental health information [13].
"Mental health is a state of successful realisation of mental function, and the ability to engage in productive activities, to fulfl relationships with other people, to adapt to change, and to cope with the individual's cultural challenges" [14], which means that mental well-being enormously infuences people's activities, choices, and ways of life in daily life. Mental illness can infuence an individual's thought process, recognition, feelings, and judgments and result in destitute concentration, destitute organisational abilities, and failure to complete ventures and make choices. According to [14], the most common mental health disorders are depression, anxiety and panic, bipolar disorder, behavioural disorders, obsessive-compulsive disorder, and eating disorders. Studies have shown that the treatment of mental health should be treated at an early age.
According to [15], defining terms that characterise the mental health of children and adolescents is essential, because a lack of clarity can lead to confusion and uncertainty about the persistence of problems, the curability of problems and disorders, and the need to allocate resources according to the Welfare Alert.
The number of children with mental health problems is increasing; 850,000 children in the UK are believed to have clinically significant problems, and only a quarter show evidence of mental illness. "One in 8 children have a diagnosable mental health disorder, that is, roughly 3 children in every classroom" [16]. In 2004, one in ten children aged 5-16 years were diagnosed with a mental disorder. Percentage rates are as follows: 3% anxiety disorder, 1% depression, 6% behavioural disorders, 2% hyperkinetic disorder, and 1% less common disorders such as autism and eating disorders. In some children, more than one disorder has been diagnosed as a result of research [17]. Many grown-ups are comfortable with a direct face-to-face dialogue in the treatment of these disorders, but this is not the case with children and adolescents. Numerous children find it troublesome to express themselves in words alone, so research has been done on ways to engage children using distinctive communication strategies and games [14]. Some examples of tools used are storybooks, artwork, puppets, and board games [18]. These materials provide a way to include children in indirect communication.
According to studies, involving parents or families in the treatment of a child can affect treatment participation and treatment outcomes [19]. Most working parents reported that it was personally important for them to balance work and family [6]. The work-and-family conflict that occurs because of this affects the relationship between the parents of young children. Children who experience social disadvantage are highly likely to experience emotional and behavioural problems [20].
AI and Emotion Detection.
Artificial intelligence (AI) is a simulation of human intelligence in machines, which are programmed to think and act like humans. Emotional AI refers to technologies that use affective computing and artificial intelligence techniques to detect, learn about, and interact with human emotional life. The high prevalence of mental illness, and consequently the need for effective mental health care, combined with recent progress in AI, have driven an increase in research into how the machine learning (ML) field can help in diagnosis, determination, and treatment [21].
Feelings play a central role in most forms of common human interaction, so we may anticipate that computational methods for the processing and expression of feelings will play a growing part in human-computer interaction [22]. Psychologist Paul Ekman [23] showed that there are six universal basic emotions: anger, disgust, fear, happiness, sadness, and surprise. These emotional classes, as translated from facial expressions, are important key variables influencing social interaction between individuals. If artificial intelligence robots are to manage complex social scenarios with people, it is important that they can perceive different emotional categories. It was found that children learn to express the concepts of happiness, sadness, and anger before the concepts of fear, confusion, and disgust at an early age [24].
Mental illness is one of the most treated health issues around the world. In the past, it was thought that there were extraordinarily strong clues to mental illness in the voice. Although these clues make early detection prominent in more extreme forms of the disease, "ordinary speech" still raises controversy on this issue [25]. The Emotional Facial Action Coding System (EmFACS), which maps facial muscle configurations to different emotional categories, is utilised in many emotion recognition and classification tasks [26]. For instance, "emotion recognition in speech" is used in call centres to detect anger in the voice of employees and to give them appropriate feedback. According to [27], this is useful for an application that needs to detect negative moods such as anger, hopelessness, fear, and anxiety to protect the user from depression or put users in a positive, happy, and calm mood. Such an AI-based system can be used to detect the voice of the person using the intervention, and it will become increasingly accurate as it learns the characteristics of the person's voice over time; it can use voice perception much like a therapist, give insight into the user's mood, and perhaps assign some tasks while the user is feeling depressed or sad. Recently, robots for communicating with users have also been introduced, and this area has advanced considerably.
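Once a classifier has labelled each utterance with an Ekman-style emotion, flagging a sustained negative mood reduces to a simple rolling check. The sketch below is purely illustrative (the labels, window size, and threshold are assumptions, not part of the system described here): it flags any point where most of the last few utterances were negative.

```python
# Illustrative mood flagger: raise a flag when the share of negative
# emotion labels in a rolling window reaches a threshold.
from collections import deque

NEGATIVE = {"anger", "fear", "sadness", "disgust"}

def mood_flagger(labels, window=5, threshold=0.6):
    """Yield True whenever the fraction of negative labels among the
    last `window` utterances reaches `threshold`."""
    recent = deque(maxlen=window)  # drops the oldest label automatically
    for label in labels:
        recent.append(label in NEGATIVE)
        yield len(recent) == window and sum(recent) / window >= threshold

stream = ["happiness", "sadness", "anger", "sadness", "fear",
          "anger", "happiness", "surprise"]
print(list(mood_flagger(stream)))
# → [False, False, False, False, True, True, True, True]
```

A downstream system could use each `True` flag to nudge the child with a calming task or notify the parent app, matching the feedback loop described in the text.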
Methods
As long as product design is a part of our lives, there will be efforts to formulate and validate patterns of design processes. These models differ in their content, structure, or graphical representation depending on the discipline or academic background [28]. They also show significant similarities. Most process models easily fit into an analyse, define, design, and finalise pattern. Despite the different visual representations, all models follow the basic design process framework, resulting in five stages. One of the important differences between the models is that some of them are developed to be general and comprehensive, while others are developed to provide an intuitive understanding of the design process. One of these models, human-centred design thinking, is applied step by step in this paper.
Design thinking is often described by researchers as an analytical and creative process in which they have the opportunity to experiment, create, prototype, collect feedback, and redesign [29]. The design process can be described as an iterative process in which the design is changed based on newly obtained information, or on the obtained requirements and specifications [30]. Thanks to this continuous change, inconsistencies in the problem area expressed by the needs and specifications are eliminated and the proposed design solution is improved.
Since the design thinking used in this paper is a flexible and iterative process, there is no fixed step to follow (Figure 1). The first design cycle was completed after the prototype of the designed solution was evaluated with feedback and suggestions from user tests. After that, the prototype was improved with this feedback and these suggestions, and another user test was performed. The prototype was finalised with the new feedback and suggestions, completing the second design cycle. The design section discusses the latest version of the prototype and how feedback and suggestions were used to improve the prototype features over the two-cycle process. The user tests performed are discussed in the evaluation section below.
Requirements and Specifications.
Parents who are away from their children due to work commitments, travelling, divorce, or any other reason tend to have difficulties in keeping track of their children's mental health while working, and therefore, they are more likely to quit their job [9]. The research aims to find a beneficial solution for both parents and children to overcome the mental health problems caused by the lack of physical time that parents can spend with their children and the resulting inability to diagnose problems early. It is of great importance to find the most critical designs and needed features of the final product, taking into account both parent and child preferences. It is most important to determine the necessary and optimal functions to realise this idea without exceeding the confidentiality limits of both parents and children, and to create products accordingly. For this reason, the 5 stages of the design thinking process were examined to follow a correct path.
Empathise and Define Stage.
In the first phase of the design process, an existing data analysis (secondary research) was carried out. This can increase the amount of data available and the likelihood of using these data for research by encouraging a comprehensive understanding of the problem while reviewing papers containing pre-existing data on the research problem. "It has the advantage of not collecting additional data from individuals who require special treatment with respect to safeguards for their well-being and privacy or are challenging to recruit or access." [31]. Due to these advantages, more effective, fast, and extensive research was conducted. Therefore, previous research papers were used to gain an empathetic understanding of how the target population feels about the existing mental health interventions being used nowadays.
Problem Statement.
Mental health problems, which have increased in past years, create difficulties for people in many ways. These problems are starting to appear more frequently in children. About 41% of parents believe their children had a behavioural or emotional adaptation problem, but less than 2% took action against it [6]. One reason for this is that most families have a job and do not spend enough time with their children, so early diagnosis is not possible. Parents are more likely to have difficulties in keeping track of their children's mental health while working and therefore quit their job [9]. Early diagnosis can help overcome these problems and achieve the desired solution by recording or monitoring the mental health of these children.
Requirements.
Considering the determined problem statement and the themes decided upon in accordance with it, the following user requirements were determined: (1) Parents need comprehensive education and assistance in terms of mental health. (2) Parents need an accessible and usable product that will not interfere with their busy daily schedule. It was decided that the solution should address this problem from two different angles, the child side and the parent side, because the current problem is caused by families' inability to spend time with their children due to the intensity of work. Therefore, two different solutions should be produced, and these solutions should work in a connected way. For children, a friendly physical product is recommended that uses an artificial intelligence system that can apply speech and emotion recognition to detect and analyse different emotions. The product should be a safe haven for children and a companion to have at all times, even when no one is around. A complementary mobile application for this product should be designed for the use of parents so that parents' involvement in the treatment can increase greatly. A mobile application is convenient because it can be downloaded to the phones that parents always carry with them in their daily lives.
Ideate Stage.
The ideate stage is an important stage because it is concerned with gathering the feedback of any design product. First, How Might We (HMW) questions were created to determine different designs. With the help of these questions, different solutions were drawn for the design elements and different perspectives were conveyed on the design ideas.
3.6. User Design Ideas. Considering the described requirements and ideas, the main features of the product and application were determined. Children experience different emotions during the day due to the situations they encounter. A friendly physical product is recommended that uses an artificial intelligence system that can apply speech and emotion recognition to detect and analyse these emotions. A mobile app displays the results of the emotion analysis performed by the AI system in the physical product owned by the children and can help parents monitor their children's mental health. It also helps parents to observe their children's different behavioural patterns and detect any red flags that may arise, and allows parents to communicate directly with children via the physical product from anywhere at any time. The main features are the analysis of emotions shown in various activities during the day by the child, help from professional staff (psychologists), and a way of communicating with the children through the product via the application while far away.
Assumptions.
The key assumption for the problem statement is that if an organised, useful, improved product is created with the help of recent interventions using artificial intelligence that has an impact on treatment and matches daily life activities, parents will be more willing to seek help by being more involved in their children's mental health, despite the stress in their lives.
This key assumption is the basis on which the prototype is tested, because it stems from the problem statement and the resulting user needs and requirements. Advancing to the design section, the following secondary assumptions have been made to inform the design of the product: (1) It is important for the children to feel safe and comfortable with the intervention. (4) Children will not feel lonely because their use of smart speakers is wide. (5) If parents have a way to keep track of their children's daily emotions in a structured and understandable way, they can more effectively monitor their children's mental health. (6) If parents have an accessible and usable product that does not interfere with their busy daily schedule and indicates when they should devote time to their children, they will be able to maintain work and family responsibilities at the same time.
Design
To solve the problem that parents and children are facing, the designed solution for parents was turned into a visual prototype using "Figma," an interface design tool. The feedback and suggestions obtained from user tests are shown below in the final prototype section.
Flower Pot.
Based on the research conducted and the information obtained in this paper, it was decided that the "smart speaker" feature of the product, which can be used by children when searching for solutions to mental health problems, is the key feature. The AI system filters the person's language by digitising their speech into a machine-readable format and uploading it to the artificial intelligence system before analysing the meaning of the words, making it both helpful and unobtrusive in their lives. The smart speaker is used as a "virtual assistant" to answer questions and perform various automated tasks, which can include telling the weather, playing music, chatting, and countless other tasks [32]. Children will thus own a product that positively affects their mental health (Figure 2). Due to the need for vision, hearing, and movement features, a screen attached to the product will be provided, and this screen will show avatars of the children's own choice (Figure 3). The aim of this product is to provide a "virtual friend" that children can enjoy talking to while at the same time having conversations that benefit their own mental health. The AI system will be programmed to make children feel better by giving feedback and opinions about the situations they are experiencing. While providing a connection with the application used by the families, it will not exceed the privacy rules and will not disclose the child's thoughts and related opinions to the parents. In studies on the benefits of plants, it is emphasised that the use of plants to address health problems is widespread [33]. Based on this, it was decided to use a flowerpot so that children can take responsibility for growing plants, positively changing the atmosphere around them. The mentioned screen will be placed on this flowerpot and will play a great role in communication with the children.
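The "virtual assistant" behaviour described above amounts to routing a transcribed utterance to a handler. A toy sketch follows; the handler names and keyword routing are invented for illustration and stand in for the real speech pipeline, with unmatched utterances falling through to open-ended chat, the "virtual friend" path.

```python
# Toy intent router for the smart-speaker idea (illustrative names only).
def tell_weather(_):
    return "It's sunny today!"

def play_music(_):
    return "Playing your favourite song."

def chat(text):
    # Fallback: keep the conversation going like a "virtual friend".
    return f"Tell me more about '{text}'."

# Keyword -> handler table, checked in order.
ROUTES = [("weather", tell_weather), ("song", play_music), ("music", play_music)]

def handle(utterance):
    lowered = utterance.lower()
    for keyword, handler in ROUTES:
        if keyword in lowered:
            return handler(lowered)
    return chat(utterance)

print(handle("What's the weather like?"))  # routed to tell_weather
print(handle("I had a bad day"))           # falls through to chat
```

A production assistant would replace the keyword table with an intent classifier, but the dispatch structure, recognised intents first, open conversation as the fallback, is the same.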
Mobile Application.
A mobile application has been designed that can help parents monitor the mental health of their children by displaying the results of the emotion analysis performed by the AI system in the product for children.
Wireframes.
For the first version of the prototype, low-fidelity wireframes, the two-dimensional framework drafts of the application, were designed by implementing screens of the primary interfaces, taking into account the sketches made, to provide a clear presentation of the layout and flow. Multiple reviews were carried out to ensure that there were no issues or gaps affecting the consistency of the design elements: page structure, layout, information architecture, user flow, functionality, and intended behaviours. The layout was planned for an iOS device (Figure 4).
4.4. Infrastructure of the Model. The following draft versions were obtained in the prototype stage, which is the fourth step of the design thinking process. The application was concretely developed with the feedback and suggestions obtained from user tests for the evaluation and improvement of the developed prototype. Later, as mentioned in the methods, the prototype was refined by going through the design cycle for a second (final) time, and another user test was carried out. The evaluations made are discussed in the next section.
Final Version of the Prototype
(1) Home Screen. The main page of the application has a simple design and contains only the name of the application. A prominent "start" button under the title lets users enter the application. A small piece of information in the window that opens then reminds users of the purpose of the application, and the user enters the application by pressing the button below it.
(2) Pot ID. On this screen, users are asked to log into the system by typing the distinctive identification number (ID) on the flowerpot, which is the product prepared for children. So that not even the slightest need for support is missed, help is provided with small question marks.
(3) Current Mode. A convenient feature has been added for parents who want to see their children's current mood during their daily work. The emotions children have at the moment are shown on a screen using emojis, together with the time and date when the application is opened. To be eye-catching, a colour different from white was chosen without going beyond the colour harmony of the application (Figure 5). With this feature, the aim is to see the emotions of children at any time without wasting time.
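The "current mode" display described above is essentially a mapping from the latest emotion label to an emoji plus a timestamp. A minimal sketch, with the emoji choices and function name as assumptions:

```python
# Illustrative sketch of the "current mode" screen: render the most recent
# emotion label as an emoji with the date and time the app was opened.
from datetime import datetime

EMOJI = {"happy": "😊", "sad": "😢", "angry": "😠", "neutral": "😐"}

def current_mode(latest_emotion: str, now: datetime) -> str:
    """Format the latest emotion for the current-mode screen."""
    emoji = EMOJI.get(latest_emotion, "❓")  # unknown labels get a placeholder
    return f"{now:%Y-%m-%d %H:%M} {emoji} ({latest_emotion})"
```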
(4) Calendar. The calendar, one of the first features, was added to make the app's main feature, emotion detection, more usable. The purpose of this calendar is to create an effective design by facilitating the selection of days. The selected day and the current day are clearly shown on the calendar.
(5) Analysis. The emotions acquired by the system through emotion recognition are shown in the section under the calendar. The hourly intervals of the emotions are shown under them, as this is thought to contribute greatly to solving the problem statement. One of the updates made after user testing is that parents may want to take notes about the feelings their children have, so they can record more detailed and careful information on their own.
Feedback: "If Tey Can Make Notes about Tese
Feelings". More detailed sentiment analysis is provided by pressing the buttons with a pie chart and a graph icon located at the bottom of each page. A pie chart displays daily emotions clearly, and the graph compares the level of daily emotions ( Figure 6). Under both of the demonstrations, reports of the AI system are shown. Tese reports allow parents to be more involved in the mental health monitoring of their children by contributing greatly to them.
Scheduled Message. To solve the problem of parents being physically away or working long hours and not being able to be with their children, a feature has been added so that they can reach their children with the help of the flowerpot. Although this may seem like a simple message interface, the set time and day facilitate communication with children in any situation. Participants also stated that, in addition to messages, there should be additional features such as voice messages and video and image sharing. Therefore, the message feature through which parents can reach their children has been updated and made multidirectional.
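The scheduled-message behaviour, a parent queues a message with a delivery time and the flowerpot releases it once that time arrives, can be sketched as below. The queue's data model is an assumption for illustration only:

```python
# Sketch of the scheduled-message feature: each queued item carries a
# delivery time; the flowerpot shows only the messages whose time has come.
from datetime import datetime

def due_messages(queue, now):
    """Return the texts of messages whose scheduled time has passed, in order."""
    return [m["text"] for m in sorted(queue, key=lambda m: m["when"])
            if m["when"] <= now]
```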
Feedback: "Adding More Interactive Elements between Kids and Parents except for Chat like a Video Call or Video
Sharing." Advances in Human-Computer Interaction (1) Professional Help. As stated in the requirements, it was understood that when children had mental problems, the parents want to feel not alone in this process and wait for a helping hand. To overcome this problem, a feature has been added to the application where they can get help from psychologists ( Figure 7). Te number of psychologists owned by the application was shown as 3+ psychologists, but according to the user testing, this feature was updated by mentioning that there should only be one psychologist and that special and single-focused help could be received. Feedback: "I'm not sure if you can choose the therapist and stick to the same person, or if you have to use the online one. Because I would choose the ability to stay connected to a therapist so you can maintain consistency." A screen was added to this professional help feature upon a perturbation found in one of the survey responses. On this screen, it is selected whether you want to talk to the psychiatrist on the phone or by texting. In this way, the satisfaction of the users has been ensured by communicating in the desired ways ( Figure 8).
Feedback: "I would change the chat part. Since it is about a children's health and I am a mother, I think I would prefer calling to messaging about my children. Because it is too emotional and hard to talk about my children's bad mood, in texting, I believe that the emotions are disappearing." (2) Reminders. Te prototype of the proposed solution includes features that encourage the user to interact with their children while experiencing mental health problems. Although it is not included in the initially suggested design solution for this prototype, it is suggested that a reminder can be used to inform parents of an emergency, based on user responses (Figure 9).
Feedback: "If it is possible, I would love to have an alert message on my phone when my child is crying or in deep sadness when I am not at home. Terefore, I can call him immediately and cool down him." Te most important point of the application is that it does not violate the private lives of children. In the analysis made with the help of AI, the words or sentences used by children are not conveyed with certainty. Emotions are transferred to practice with the analysis made only from the voices of the children. Tese feelings are then shown as a detailed analysis. It indicates whether there is a red fag depending on what may have pushed them to feel these emotions or how long they felt these emotions in that stationary.
It has been determined that the designed solution meets the user requirements and assumptions (Table 1).
Evaluation
Different usability tests were carried out to determine whether the application has suitable features for its target users. This was first done using an online survey asking 15 target users about the following: (1) their first thoughts about the application and (2) to what extent they think the product solves the specified problem. The survey then asked: (1) Is the application easy to use? (2) Is it a convenient solution for daily use? (3) Does the product match its purpose? (4) Can users interact with the application without any assistance? (5) Can the analysis of emotions be understood clearly? (6) Do they feel the features are enough to address the solution? (7) Do they think their children can interact with the flowerpot? (8) Are they concerned about their children's privacy?
Participants.
To effectively explore usability issues, 15 participants were reached through the "Prolific" online survey platform. Fifteen users were recruited because this number was thought to provide a sufficient amount of information from the survey. The participants have children aged between 9 and 15 and full-time jobs (Table 2). Most of the participants stated that they work rotating shifts, including night shifts. A more effective evaluation and conclusion was sought by dividing the participants into small groups of 5-5-5 people. The users in the first two test groups were asked about the design and features of the application, such as what they think about the application, whether they would use it, and whether it reflects its purpose. With the answers given, small changes were made in the application, and a short video of the prototype was shown to the last group. This time, questions were asked to develop the application further and to empathise with the target users, such as how they would use the application, in what situations they would use it in their lives, and what problems their children might have with the product. The solution was developed by looking at the results obtained at the end of each 5-person assessment and was evaluated once again.
Results.
Participants watched a short video showing how the prototype worked and had a smooth understanding of its purpose and design process. The information obtained through the questions asked of the first 10 participants is as follows: they indicated that the style and usability of the design were positive. Participants are indicated by the letter P. For example, participants P4, P7, and P8 mentioned that the design excited them and wondered what kind of experience the app would create in real life: "I think it's user friendly and has a good interface. I would like to see how children interact with it to get the ratings for the day." All respondents said the design is easy to use; 80% said they would use the application. Later, the design was updated with the answers to the questions asked about how to improve it. P3 and P4 suggested that the application should have a feature letting them take more detailed notes about the given analysis; P8 and P9 said small features needed to be added to increase interaction with the child; P1 and P4 expressed dissatisfaction with the design of the professionally assisted messaging feature and expressed concern about it.

Table 1: Visualization of the design solutions related to the assumptions made.

Main functionality | Key assumption and secondary assumptions
Showing daily and weekly emotion changes using emotion detection | Successfully tests the key assumption, as parents can monitor the emotion change of their children and detect any red flags that may arise.
Screen-based avatar and smart speaker | Secondary assumptions 2, 3, and 4 are tested by this feature. Its successful testing shows that including a screen displaying the avatar picked by the children creates a long-term, friendly relationship between children and the product.
Calendar | The calendar feature successfully tests secondary assumption 5, because the analysis of the children's emotions is displayed in an organised and structured way through the calendar for parents' control.
Reminders | Including a reminder functionality successfully tests secondary assumption 6, as it alerts parents of any problems with their children and, if necessary, lets them devote time to their children while maintaining work and family responsibilities.
Having a scheduled messaging platform | Secondary assumption 7 is tested by including a message feature, as the platform helps parents send any text or voicemail to their kids at the time and date they set.
Ability to get professional help | Secondary assumption 8 is tested by this feature. Its successful testing shows that parents do not feel alone when facing their child's mental health problems and helps them get support when needed.
Then, user testing was applied to 5 more participants with updated survey questions to get more detailed answers. The answers given about the design and the general purpose of the application are as follows: the thoughts on the design were very positive, and according to P12 and P14 the design is very eye-catching and attractive: "The application uses eye-catching colours and is very visual. Every screen is well designed and simple to understand. The functionality and the purpose of each button is clear from the text on the buttons as well as the icons provided." P15 stated that the design is suitable for daily use and that the colours used are in harmony. The rate of intended use of the application among these participants is 90%. They argued that the purpose of the design was clearly understandable and said that it was easy to use. However, P14, in her suggestion for developing the design, stated that messaging is not suitable for communicating with children about emotional issues, which should instead be handled by call. Based on the responses given, 85% of the participants stated that the solution was suitable for its purpose. Some parents were asked to give an example of a situation in which they could use the app. Looking at the examples given, they stated that they can benefit from regular follow-up on a weekly basis, even if there is no red flag, mostly when they are away on business trips: "As I sometimes need to travel for work, I would use this application to make sure my kids are fine, and it would provide me with ease of mind." "When I came home later than my child, I would control his mood regularly. Moreover, if there are some special cases when I need to be out for work or etc. I would also check it. I think checking weekly when there is no special case is also good, because maybe he can share his feelings easily with the pot, even we are not separate." However, they think there is a problem with the children's part of the design.
Because the flowerpot can only analyse speech spoken in its location, the parents stated that a product should be made that their children could carry around at all times: "My child cannot take this to his school; something like a toy could be better."
Heuristic Evaluation.
This section discusses the user interface design and the use of heuristic methods to evaluate the content of the mobile application. The purpose of the heuristic evaluation is thus to improve the user interface design of the mobile application and to evaluate its content to ensure it matches the needs of parents monitoring their children's mental health. The evaluation was carried out by a nonexpert relying on Molich and Nielsen's set of 10 heuristic usability principles. This helped to evaluate the usability of the developed prototype. It was found that the design solution does not provide any means of error prevention. The results were then used in conjunction with the user responses and the user scenarios to improve the application and fix the issues found (Table 3).
Results and Discussion
The target audience expressed their appreciation of the application in the surveys and during the testing phase, mainly due to its usability and ease of use. Although the application was only presented in a short video that explained its features and flow, the participants fully agreed that the design matched its purpose. The results showed that the users are highly likely to use the application if they are able to, and this supports the claim that the application enables parents to be more involved in their children's lives. Nevertheless, some users noted that there is uncertainty as to how their children would interact with the physical plant product, which made them unsure to what extent the application's features would be used. For this to be tested, a physical prototype would need to be made, and the product should undergo a field study to accurately measure the success of the whole solution. With that said, the parents involved in the study were still optimistic about the usability and effectiveness of the application and the flowerpot.
Furthermore, 67% of the participants stated that the flowerpot and the application do not invade the privacy of the children. The remaining 33% still had concerns about their children's privacy, as they were seeing it from a broader and less accurate perspective. They saw a flowerpot that listens to the children's speech and later informs the parents, which sounds troublesome to anyone who does not know how it is implemented. Firstly, the AI and the smart speaker within the flowerpot would be implemented in a way that does not store any of the dialogues between the child and the flowerpot. Instead, the words and sentences would be analysed and the speech discarded. Secondly, the smart speaker will not listen to anything unless the child gives permission. In addition, the application that the parents have downloaded on their phone would show neither the dialogues nor any of the sentences uttered by the child; it would display just the analysis and the overall emotion, which helps identify red flags. This is similar to when a parent takes their children to a psychiatrist because they are worried about their mental health. If asked, the psychiatrist would never disclose any of the private information of that session but would only give advice to the parents based on their analysis of what they have been told by the child. Just as a psychiatrist keeps the private information of their patients hidden, the flowerpot also does not share any of the data it has; instead, it shares only the overall analysis and advice with the parents.
The evaluation of the application was supported with user scenarios and heuristic evaluation to identify any problems and establish a better understanding of the proposed solution. Due to the lack of resources, however, experts were not hired to carry out the heuristic evaluation. It was instead carried out by a nonexpert strictly adhering to certain heuristic usability steps and common practices. A more successful evaluation would have been carried out if 3 to 4 experts in the field had been available to evaluate the prototype.
Since the test was presented online as a short video showing the flow of the solution design and its features, usage of and interaction with the application's features were limited. Not all design issues can be uncovered unless accompanied by additional assessments or additional testing. For this reason, more user tests should be done in different environments. This will allow users to interact with the prototype and test whether the solution design fully suits the users' needs and requirements. Preferably, on a larger project scale, after the tests described in the assessment section, more participants would be recruited and a few more rounds of usability tests conducted to find new problems. In this way, test completion times and the resulting error rates can be displayed clearly.
Limitations.
Although positive results were obtained in the virtual prototype evaluations made with the target users, some limitations have emerged due to the obstacles encountered in this study. One limitation is that the flowerpot and the application have to be used together. Parents with more than one child are required to buy a pot for each child, because the mobile application can only analyse emotions from one person's conversations. Lastly, the flowerpot was designed as a product that children cannot always use in their daily routines and cannot carry with them when they are out of the home; the prototype can only analyse children's speech where it is located. Some participants also expressed this problem in their answers.
Conclusions
The paper aimed to find a useful solution that helps both parents and children cope with the mental health problems they can face. The design thinking process was followed in concordance with the user needs identified from previous studies in this area. The requirements for the proposed solution were defined from the results of previous research papers and studies. Different ideas were explored, and an assumption list was created based on the defined requirements. The assumption list led to a unique prototype design. The prototype was tested using 3 usability methods: user testing, user stories, and heuristic evaluation. Participants reacted positively to the prototype's design, ease of use, and suitability for their daily lives. The results also showed that, with the developed prototype, parents can follow the mental health of their children in a better way and feel supported. The project requirements extracted from previous research papers to solve the problem statement were achieved as expected.
Future Work.
There are several aspects of the paper that can be enhanced by addressing the prototype's limitations and by conducting a more comprehensive evaluation. If positive results can be obtained in the additional evaluations, a functional mobile application can be implemented. Hence, the application can be developed by better observing the interaction of the parents with the application. The single-person analysis system of the application can be further developed, and the limitation can be removed by adding a dedicated voice recognition system for the active user and adding multiperson analysis to the application. Therefore, the problem that families with many children may have while using the product will be removed. Another feature that can be added is the video call feature, identified by the participants during the evaluation; it is listed as an improvement to be made in the future due to the limited time available.

Table 3: Visualization of the heuristic evaluation using 10 heuristic usability principles.

Visibility of system status | The system always gives appropriate feedback, such as by highlighting the selection of a date in the calendar or by highlighting the menu buttons.
Match between system and the real world | All icons used are familiar to a normal user, so no concept fails to follow real-world conventions.
User control and freedom | Every screen has an exit button at the corner of the screen to allow the user to go back and undo their actions.
Consistency and standards | The functionality throughout the app and the terminology used are very consistent with platform conventions.
Error prevention | Clicking on call on the professional help screen does not ask for confirmation before calling.
Recognition rather than recall | Every window has indicators for what it is displaying. The chart windows show, in the top left corner, the chosen date and day, which helps the user recognize what they are viewing.
Flexibility and efficiency of use | The pie and bar charts are on a separate window from the calendar, which means the user needs to go back to the calendar to select a day and then open the chart windows to see the analysis. It would be better if the date could be changed within the chart windows.
Aesthetic and minimalist design | All screens in the application contain only necessary functionality and no unnecessary elements.
Help users recognize, diagnose, and recover from errors | Not applicable.
Help and documentation | Most buttons throughout the app have a small question mark icon next to them that plainly describes the functionality.
Data Availability
The data and materials related to the surveys and the questions asked are available at https://github.com/bersanokkali/ResearchPaperData.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.
Association between athletic participation and the risk of eating disorder and body dissatisfaction in college students.
OBJECTIVE
Given that females exhibit a greater prevalence of eating disorders, there is as yet no conclusive evidence on whether participation in college athletics exacerbates eating disorders or body shape dissatisfaction. This study assessed how gender and participation in collegiate athletics are associated with increased risk for disordered eating attitudes and body shape concerns in college students.
METHODS
This study used a cross-sectional research design. A total of 302 students at a Southern US university fully completed the eating attitudes test and the body shape questionnaire during class time or team meetings. Logistic regression was conducted to determine risk differentials for each group.
RESULTS
Of 302 students, 65.6% were females and 63.2% were non-athletes. Athletic status was significant as well but became slightly less so with adjustment (unadjusted OR = 3.14, P < 0.001 vs. adjusted OR = 3.22, P < 0.001). Moreover, it was found that non-athletic female students are slightly more at risk for disordered eating and significantly more dissatisfied with their body shape (OR = 5.95, P < 0.001).
CONCLUSIONS
Although there seem to still be many unresolved issues regarding eating disorders, one thing is clear: females are at higher risk, and this remains a significant challenge to college health services. College health practitioners should be made aware of the significant effect stress has on freshmen in particular.
Introduction
The dramatic lifestyle change going from high school to college is a stressor that has been shown to influence the risk of weight gain and disrupt eating patterns in college students. 1,2 The stressors a college student faces include the pressure of greater personal responsibility, loss of social support, and increased academic demands, which, along with a college student's desire to maintain or achieve an ideal body, can lead to disordered eating. Greenleaf et al. 3 studied the prevalence of eating disorders and disordered eating among 204 female NCAA Division I college athletes from 3 universities located in the Midwest, Southwest, and Mountain regions of the United States and found that disordered eating can lead to pathogenic eating behaviors, like binge eating. Although there are inconsistent findings, the general position in eating disorder research is that college athletes are more at risk for developing eating disorders than college non-athletes, [4][5][6][7][8] and in other studies conducted with female NCAA Division I college athletes from universities located in Ohio, Missouri, and Texas, female athletes are 2-3 times more likely to meet the criteria for an eating disorder than non-athletes. 4,6,8 This is ironic in that female collegiate athletes show higher nutrition knowledge compared to non-athletes, but athletes are less likely to use this nutrition knowledge in their regular eating patterns compared to non-athletes. 9,10 One possible reason for this finding is that athletes sometimes think of themselves as an "exception" to normal dieting because of the sport they play and their level of fitness compared to the general population.
Although the eating behaviors of male athletes have not been scrutinized as much as those of female athletes, reviews of the literature done by researchers at Harvard University, USA, 11 and Telemark University College, Bø, and the Norwegian School of Sport Sciences, Oslo, Norway, 12 have shown that about 50% of male athletes would like in some way to change their body shape. Baum recognized that there has been an increase in male athletes who are preoccupied with their body image, especially in the sports of football, baseball, and track and field. 13 Male athletes sometimes suffer from reverse anorexia, a condition in which an individual becomes obsessive about increasing muscle mass, which often leads to the use of harmful anabolic steroids. [13][14][15] In a study by Franco and coworkers at the Medical College of Ohio, Toledo, USA, males made up only 5-10% of anorexia nervosa patients, 16 while Bratland-Sanda, from Telemark University College, Bø, Norway, and Sundgot-Borgen, from the Norwegian School of Sport Sciences, Oslo, Norway, reported that disordered eating rates range from 0% to 16% in males. 12 Furthermore, a study of female college athletes and non-athletes at Smith College, Northampton, MA, USA, about weight, desired weight, meal patterns, methods of gaining/losing weight, and past or current eating problem behaviors found that non-athletes were more likely to eat fewer meals per week, reported more often that they were too heavy, and reported using more weight loss methods. 17 Given that females exhibit a greater prevalence of eating disorders, there is as yet no conclusive evidence on whether participation in college athletics exacerbates eating disorders or body shape dissatisfaction.
The objective of this investigation was to determine whether participation in college athletics independently increased or modified the risk for disordered eating behaviors and body dissatisfaction relative to female and male college counterparts who did not participate in a collegiate sport.
Participants and sampling
For this study, college students at a Southern US university were recruited using convenience sampling. Participants were included if (a) they were currently enrolled at the university, (b) they could read English, (c) they were aged 18 years or older, and (d) they could provide informed consent. Participants were excluded if they did not meet the aforementioned inclusion criteria.
Study design and data collection procedure
This cross-sectional study was conducted in the fall semester of 2011. The cross-sectional design is beneficial in being low-cost and easy to administer and implement. Subject recruitment and survey administration occurred over 3 months during class time or team meetings. Coaches and instructors were enlisted by email with a copy of the protocol attached. One of the researchers described the purpose and design of the study to the group. Students were informed that their participation in the study was optional, and consent was given. Students willing to participate then completed a cover letter with the sport they play in college (if applicable), their college classification, and their gender. If their sport was not listed on the cover letter, an "other" option was available to check. Along with the cover letter, the students also completed a survey.
Instruments
This study compared self-reported eating behaviors and attitudes using the 26-question eating attitudes test (EAT-26) and the body shape questionnaire (BSQ). These instruments, used to measure characteristics associated with eating disorders, are surveys that have been validated in a variety of populations. [18][19][20] Although the EAT-26 does not have the capacity to diagnose clinical eating disorders, it has proven effective in identifying those who may be at risk for an eating disorder. 4,[18][19][20] The mean test-retest reliability estimate for the EAT-26 was 0.87. 21 To validate the BSQ, the 51-question instrument was given to four all-female groups: (1) eating disorder patients, (2) family planning clinic attendees, (3) occupational therapy students, and (4) undergraduate students. The BSQ was correlated with the body dissatisfaction subscale of the Eating Disorder Inventory (EDI) and with the total EAT score among the patients with bulimia nervosa, and with the EAT total score among the occupational therapy students. Among patients, the BSQ correlated moderately highly with the score on the EAT and very highly with the EDI body dissatisfaction score. Among group 3, the occupational therapy students, the BSQ correlated highly with the score on the EAT. 20 The BSQ has also been validated in Spanish 20 and Swedish 22 and continues to be a reliable measure for assessing body dissatisfaction and low self-esteem. 20 For the BSQ, high test-retest reliability, very high internal consistency (ranging from 0.94 to 0.97), and high split-half reliability (above 0.93) were reported in previous studies. 22,23 Both of these instruments have been used jointly to identify participants at risk for an eating disorder.
Statistical analysis
After the data were entered into a statistical software program, Chi-square (χ2) analysis was used to compare the included sample with the excluded participants. Test scores were then calculated individually for each participant according to the scoring rubrics for the EAT-26 18,24 and the BSQ. 20,24,25 The individual scores were dichotomized into high and low scores for the EAT-26 and the BSQ, following cutoffs similar to those used in prior studies. 17,20,24,26 Dichotomizing the test scores allowed the use of logistic regression to determine risk differentials for each group, and thus to test our hypothesis. A P-value of 0.05 or less was considered statistically significant. All analyses were performed on a personal computer using IBM® SPSS® Statistics 22 (IBM Corporation, Armonk, New York, United States).
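The dichotomize-then-regress step can be sketched in a few lines. This is a hedged illustration, not the authors' SPSS syntax: the EAT-26 cutoff of 20 used below is the conventional at-risk threshold, assumed here because the paper only says it followed "similar cutoffs used in the prior studies", and the unadjusted odds ratio is computed directly from the 2x2 table rather than via a fitted model.

```python
# Sketch (assumptions noted above): dichotomize EAT-26 totals at an
# assumed cutoff of 20, then compute an unadjusted odds ratio from
# the resulting 2x2 exposure-by-outcome table.

EAT26_CUTOFF = 20  # assumed conventional at-risk cutoff

def dichotomize(scores, cutoff=EAT26_CUTOFF):
    """Map raw totals to 1 (high/at-risk) or 0 (low)."""
    return [1 if s >= cutoff else 0 for s in scores]

def odds_ratio(exposure, outcome):
    """Unadjusted OR = (a*d)/(b*c) from two parallel 0/1 lists.
    Assumes all four cells of the 2x2 table are nonzero."""
    a = sum(1 for e, o in zip(exposure, outcome) if e and o)
    b = sum(1 for e, o in zip(exposure, outcome) if e and not o)
    c = sum(1 for e, o in zip(exposure, outcome) if not e and o)
    d = sum(1 for e, o in zip(exposure, outcome) if not e and not o)
    return (a * d) / (b * c)
```

The adjusted odds ratios reported in the Results come from multivariable logistic regression, which a table-based OR like this cannot reproduce; the sketch only shows the shape of the unadjusted comparison.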
Results
Overall, 331 students participated in this study, of whom 29 had missing data on one or more of the questionnaires administered. These participants were excluded from the analyses, leaving a sample of 302. Included and excluded samples were divided by gender, athletic participation, and class level. χ2 tests were conducted to assess the possibility of selection bias due to the exclusion of participants. Results of the χ2 tests were nonsignificant (Table 1), suggesting that any selection bias introduced by excluding those participants was minimal.
Demographic characteristics of the sample are shown in Table 1. Of the 302 included participants, 104 were male and 198 were female; 111 were athletes and 191 were non-athletes. The final sample included 43 freshmen, 86 sophomores, 93 juniors, and 80 seniors.
For the EAT-26, findings of the logistic regression are presented in Table 2. They show an association between gender and eating disorders when not adjusted, with females at more than 2 times greater risk for disordered eating than males. Athletic status was also significant when not adjusted, but the significance was reduced with adjustment (unadjusted OR = 2.54, P = 0.02 vs. adjusted OR = 1.90, P = 0.18). To assess whether athletic status modified or interacted with gender, an interaction term was created by multiplying gender and athletic participation and added to the model (unadjusted OR = 4.08, P = 0.06 vs. adjusted OR = 1.11, P = 0.09). The results showed that the significance of the interaction is greatly reduced when adjusting for other predictor variables. Finally, college freshmen had the highest observed risk (unadjusted OR = 5.87, P = 0.11). Although not quite significant, there is a clear trend placing freshmen at higher risk than other college classifications.
With regard to the BSQ, results of the logistic regression are presented in Table 3. Findings indicate that gender plays the greater role in body shape dissatisfaction, with females at more than 5 times greater risk of being dissatisfied with their body shape than males. Athletic status was significant as well and remained so with adjustment (unadjusted OR = 3.14, P < 0.001 vs. adjusted OR = 3.22, P < 0.001). To address whether athletic status modified or interacted with gender, an interaction term was created by multiplying gender and athletic participation (unadjusted OR = 2.77, P = 0.06 vs. adjusted OR = 0.20, P = 0.05). The results showed that the significance of the interaction is greatly reduced when adjusting for other predictor variables. Finally, college freshmen had the highest risk for body shape dissatisfaction, with an adjusted OR of 11.12 and P = 0.02, that is, 11 times greater than their comparison group.
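The interaction terms in both models were built by multiplying the two indicator variables. A minimal sketch of how such a term enters a regression design matrix (the 0/1 coding here, 1 = female and 1 = athlete, is an assumption for illustration, not stated in the paper):

```python
# Sketch: one design-matrix row for a logistic regression with a
# gender x athletic-participation interaction. Coding is assumed:
# female = 1, athlete = 1.

def design_row(female, athlete):
    """Return [intercept, female, athlete, female*athlete]."""
    return [1, female, athlete, female * athlete]
```

The interaction column is nonzero only for female athletes, which is why its coefficient captures whether athletic participation changes the gender effect rather than either main effect alone.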
Discussion
This study investigated the relationship between gender and athletic participation using two validated and widely used questionnaires, the EAT-26 and the BSQ. The results indicate that gender and athletic status are independent of each other in increasing the risk of disordered eating, but that there was a slight effect modification for body shape dissatisfaction. The findings demonstrate that, when controlling for other factors, females were at 5 times greater risk for body shape dissatisfaction and, although not significantly, trended toward higher risk for eating disorders than male college students. Moreover, students who did not participate in a collegiate sport were at greater risk for body shape dissatisfaction. These results are consistent with DiBartolo and Shaffer's finding that non-athletes are at greater risk for eating disorders and body shape dissatisfaction. 17 This paper's findings also highlight the persistence of the relationship between body shape dissatisfaction and female college students, especially when compared with the study of Klemchuk et al., 27 which also found high levels of body dissatisfaction among female college students. Another important finding is that body dissatisfaction among female students who did not participate in a collegiate sport was significantly higher than among female students who did, after adjusting for other factors. A possible explanation is that collegiate athletes are physically active and have a team to belong to; these physical and psycho-social benefits may contribute to a reduction in stress and anxiety and an increase in self-esteem, which may lead to healthier bodily satisfaction. 1,28,29 However, Hausenblas and McNally 30 reported that athletes with higher rates of activity had a higher prevalence of eating disorders than less active non-athletes. Our study's conclusion is not in agreement with the prevailing attitude that female athletes are at greater risk.
Unfortunately, these results need to be tempered with the fact that this study had a relatively small sample size.
The fact that college classification yielded such high risk levels for college freshmen for body shape dissatisfaction, and some risk for disordered eating, should be addressed in future studies. The findings on college classification could also point to a deeper underlying behavior or affect, such as stress, anxiety or depression, being more of a factor than gender or athletic status. This line of research has been thoroughly discussed and cogently explained in Fragkos and Frangos's paper assessing eating disorder risk: the pivotal role of achievement anxiety, depression and female gender. 31 Their comprehensive path model identifies anxiety, particularly academic anxiety, as a significant association with eating disorder risk. Although the results of the EAT-26 were not as significant as those from the BSQ, there was a high degree of concordance and directional similarity.
Limitations
This study employed a convenience sample, which does not guard against possible selection bias. Moreover, given the cross-sectional design, no causal attributions can be made as to why one group may be at higher risk than another. Another limitation is that all results were self-reported, which could lead participants to under-report or over-report symptoms of an eating disorder, affecting results. Finally, all participants were on the same college campus, which could limit the generalizability of the results.
Conclusion
Body dissatisfaction remains a strong and stable affect among female college students. This paper also found that the risk for body dissatisfaction is significantly greater in students who do not participate in collegiate athletics than in those who play college sports. When the interaction between gender and athletic participation was assessed, it was found that the interaction term significantly modified the risk for body shape dissatisfaction, but its significance was reduced when included as a predictor of eating disorders. Another important finding was that freshman students were at great risk for both eating disorders and body shape dissatisfaction. Although many issues regarding eating disorders remain unresolved, one thing is clear: females are at higher risk, and this remains a significant challenge for college health services. College health practitioners should be made aware of the significant effect stress has on freshmen in particular.
Editorials for ‘Advances in Cold Plasma in Biomedicines’
Research in the field of plasma medicine has provided many explanations for various phenomena, as well as the involvement of the chemical elements of plasma; however, it still lacks in biological mechanism analyses [...].
Introduction
Research in the field of plasma medicine has provided many explanations for various phenomena, as well as the involvement of the chemical elements of plasma; however, analyses of the underlying biological mechanisms are still lacking. In this Special Issue, we called for the identification of mechanisms for biological phenomena induced by cold atmospheric plasma (CAP) to compensate for these problems. Of particular note is that while various journals have previously covered Special Issues concerning plasma medicine, "Biomedicines" is believed to be the first medical journal to have promoted such a Special Issue.
A total of 10 papers were published in this Special Issue, of which 2 were review articles and the other 8 research articles; all are very high-quality papers, mostly analyzing tissues beyond the cellular level. The topics covered range from anticancer effects to the expression of regenerative factors, from wound healing to the inhibition of dermatosclerosis and the differentiation of osteoblast cells; the review articles also cover recent research trends in plasma medicine and the basic concepts allowing plasma scientists to better understand current biological knowledge.
This Editorial briefly introduces the articles comprising this Special Issue.
Two Review Articles Introducing the Role of the Direct Treatment of Cold Plasma and Plasma-Activated Liquid in the Medical Field
One of cold plasma's best-known biological properties is its capacity to exert a destructive effect on several biological targets. The review article by Yan et al. described how CAP can be used for treating cancer, destroying various types of cancer cells by inducing apoptosis, autophagy-associated cell death or necrotic cell death, depending on the plasma treatment properties [1]. Reviewing several studies testing CAP's anticancer effect, they also suggested possible mechanisms. In addition, they introduced studies verifying how CAP's destructive effect effectively removes viruses as well as various types of bacteria. The authors also noted that, among CAP's various working elements, not only chemical elements (ROS, RNS, ions, etc.) but also physical elements (electromagnetic fields, light and heat) play an important role in the destructive effect of CAP.
Meanwhile, in their review, Kim et al. introduced the beneficial role of plasma-activated liquid (PAL) in the medical field [2]. Although direct treatment with CAP can be very powerful, its use is limited by its restricted tissue-penetrating ability. In the medical field, PAL can extend access to body parts where a direct application of plasma would be difficult. This review described the chemical elements of PAL and how to generate it, and introduced its beneficial roles in sterilization, disinfection, tissue regeneration and cancer treatment. Furthermore, it highlighted PAL's role in the activation of various solutions that can then be used for medical purposes. Since the safety of PAL has been confirmed only in short-term tests on a mouse model, a long-term evaluation using animal models will be needed before the medical use of PAL can be approved.
Articles Introducing the Anticancer Effect of Direct Treatment with Cold Plasma
Several efforts have been devoted to directly treating cancer with CAP. In this Special Issue, Nitsch et al. tested the effect of an argon plasma jet device (kINPen med) on two chondrosarcoma cell lines, W 1353 and CAL 78 [3]. According to the study, the CAP treatment reduced cell viability, migration and metabolism. Furthermore, it was shown that CAP induced apoptotic cell death in both cell lines. Although this study was conducted in an in vitro system only, the results suggest that CAP could have considerable treatment benefits, considering that surgery is the only treatment option for chondrosarcoma. The research article by Choi et al. suggested novel approaches for using CAP in combination with gold nanoparticles to treat oral squamous cell carcinoma (OSCC) [4]. They applied a no-ozone cold plasma to SCC25 and HaCaT cell lines in combination with gold nanoparticles conjugated with an antibody targeting the p-FAK protein (p-FAK/GNP); this combinational treatment successfully triggered OSCC-specific immediate cell death. They suggested that CAP's charged particles promoted the surface plasmon resonance activity of the gold nanoparticles, thereby inducing the immediate cell death of OSCC. Furthermore, the strong anti-OSCC activity of the combined use of no-ozone cold plasma and p-FAK/GNP was confirmed in an OSCC xenograft mouse model; its effect was much more powerful than that of CAP on its own. Based on these results, the authors suggested that the combined treatment of CAP and p-FAK/GNP could be a novel treatment for OSCC.
Possible Role of Plasma-Activated Liquids for Treating Cancer
As the review by Kim et al. in this Special Issue describes, many studies have elucidated the beneficial role of PAL in treating cancer. The research article by Kong et al. described the anticancer effects of PAL in three tumor animal models [5]. In this study, saline was treated with a device generating plasma from ambient air. Since the plasma treatment decreased the pH of the saline, the effect of the plasma-activated saline (PAS) was compared with that of acidified saline. In a xenograft model using A375 melanoma cells, tumor size was significantly reduced by the PAS injection. On the basis of an ultra-high-performance liquid chromatography quadrupole time-of-flight tandem mass spectrometry analysis of tumor cell metabolism, the authors argued that the glycerophospholipid metabolic pathway was the metabolic pathway most susceptible to the PAS-mediated antimelanoma activity. The anticancer activity of PAS was further confirmed in xenograft mouse models of OSCC and non-small-cell lung cancer. The authors also confirmed that long-term use of PAS had few side effects in the three animal models, and suggested that PAL could serve as a potential therapeutic approach for cancer treatment in the near future.
Meanwhile, a research article by Brito et al. suggested a possible role for PAL in treating peritoneal carcinomatosis, using the Ehrlich ascites carcinoma (EAC) model [6]. Since EAC mainly grows as a suspension in the peritoneal cavity of mice, this model was well suited to testing PAL's anticancer activity. In this article, plasma-oxidized saline (POS) was produced by treating saline with an argon plasma jet, kINPen med. Five rounds of POS injections led to tumor reduction due to the modulation of EAC cell growth and metabolic activity. Furthermore, the POS injection promoted a decrease in the antioxidant capacity of tumor cells and an increase in lipid oxidation in the ascites, with no side effects observed. The authors suggested that POS is a promising candidate for targeting peritoneal carcinomatosis and that EAC is a convenient model for analyzing innovative POS approaches.
Articles Introducing the Medical Effects of Cold Plasma Other Than Anticancer Efficacy
In addition to CAP's anticancer effect, many studies have elucidated its beneficial role in several human diseases. In the article by Arndt et al., CAP's beneficial role in treating localized scleroderma was tested using in vitro and in vivo models [7]. The device used was an argon-based plasma-generating device, the MicroPlaSterβ ® . Although direct CAP treatment of human localized scleroderma-derived fibroblasts (hLSFs) failed to reduce fibrotic markers such as collagen type I and alpha smooth muscle actin, cell motility was significantly reduced through the induction of metalloproteinase 1. Furthermore, CAP treatment of hLSFs significantly reduced the expression of proinflammatory cytokines. The authors confirmed the anti-fibrotic effect of CAP in a bleomycin-induced dermal fibrosis model, and suggested that CAP could be an option for treating localized scleroderma.
Several studies have been conducted to validate the beneficial role of CAP in wound treatment. In this Special Issue, a research article by Choi et al. tested the effect of helium-based CAP on diabetic wounds infected with Candida albicans using a mouse model [8].
The results of this study showed that the CAP treatment not only reduced the fungal infection, but also accelerated the process of wound healing. Since diabetes mellitus leaves patients susceptible to chronic wounds and various infections, including fungal ones, this fascinating study provides clues to the usefulness of CAP for diabetic wounds.
Many reports have elucidated the beneficial roles of CAP in the oral cavity. Two research articles in this Special Issue introduced CAP's oral-tissue-regenerative activity. The study by Eggers et al. tested the effect of an argon-based plasma jet, kINPen med, on tissue-regenerative activity using human gingival fibroblasts, keratinocytes and human gingival biopsies [9]. In this study, a 30 s CAP treatment led to an increase in wound-healing-related genes and proteins, such as Ki-67 and MMP1, whereas treatment for more than 60 s induced apoptotic gene expression in cells and superficial damage to the epithelium. Based on these results, the authors suggested that a brief CAP treatment after oral surgery could help with wound healing. The study by Choi et al. reported on CAP's possible role in bone regeneration in the oral cavity [10]. In this study, they used argon-based no-ozone cold plasma to treat periodontal ligament cells and investigated its effect on osteoblastic differentiation and bone formation. Their results clearly showed that the CAP treatment induced osteoblastic differentiation by promoting the expression of osteoblast differentiation-promoting genes (alkaline phosphatase, osteocalcin, osteonectin and osteopontin) and the activation of alkaline phosphatase. Since cells capable of differentiating into osteoblasts are important for the recovery of periodontitis patients, CAP could be a good treatment option.
Conclusions and Future Perspectives
Studies are underway to validate the novel medical efficacy of CAP and to uncover its specific mechanisms of action. The papers in this Special Issue not only reported on CAP's strong bactericidal, wound-healing, tissue-regenerative and anticancer effects, but also on their mechanisms. In addition, various studies were conducted to develop ways to increase the medical efficacy of CAP against disease.
Techniques using CAP could be a new answer to diseases that have been difficult to treat in the past, providing a much better quality of life than is available now. Recently, several startups have produced medical devices using CAP. Once medical device approval is obtained in each country, this is expected to create a novel medical device market.
Needs must: living donor liver transplantation from an HIV-positive mother to her HIV-negative child in Johannesburg, South Africa
The world’s first living donor liver transplant from an HIV-positive mother to her HIV-negative child, performed by our team in Johannesburg, South Africa (SA) in 2017, was necessitated by disease profile and health system challenges. In our country, we have a major shortage of donor organs, which compels us to consider innovative solutions to save lives. Simultaneously, the transition of the HIV pandemic, from a death sentence to a chronic illness with excellent survival on treatment required us to rethink our policies regarding HIV infection and living donor liver transplantation. Although HIV infection in the donor is internationally considered an absolute contraindication for transplant to an HIV-negative recipient, there have been a very small number of unintentional transplants from HIV-positive deceased donors to HIV-negative recipients. These transplant recipients do well on antiretroviral medication and their graft survival is not compromised. We have had a number of HIV-positive parents in our setting express a desire to be living liver donors for their critically ill children. Declining these parents as living donors has become increasingly unjustifiable given the very small deceased donor pool in SA; and because many of these parents are virally suppressed and would otherwise fulfil our eligibility criteria as living donors. This paper discusses the evolution of HIV and transplantation in SA, highlights some of the primary ethical considerations for us when embarking on this case and considers the new ethical issues that have arisen since we undertook this transplant.
Introduction
The world's first living donor liver transplant from an HIV-positive mother to her HIV-negative child, performed by our team in Johannesburg, South Africa (SA) in 2017, was necessitated by disease profile and health system challenges in our country. 1 This paper details the context and the ethical reasoning behind our painstaking decision to proceed with this transplant.
The Case
A 13-month-old child, diagnosed with biliary atresia, was wait-listed for a deceased donor liver transplant at our centre in Johannesburg, SA. Prior to listing, the child had undergone a Kasai portoenterostomy procedure, but this failed to establish biliary drainage. 2 While on the deceased donor waiting list the child suffered numerous life-threatening complications secondary to established liver cirrhosis. These necessitated hospitalisation and intensive care unit admission. Throughout this process, the child's mother requested consideration as a living donor. We initially dismissed this request as the mother was known to be HIV-positive. The policy in our living donor liver transplant programme has always excluded HIV-positive living donors because of the risk of transmission.
As the child's health deteriorated, it became clear that the living donor option, with the child's HIV-positive mother as the donor, was our only hope of saving the child's life. Due to SA's solid organ shortage, it was highly likely that the child would die before a deceased donor liver could be procured. The HIV-positive living donor option was only pursued after all other willing family members had been found ineligible for living donation. The child remained on our deceased donor waiting list until transplant, and at the time of transplant had been listed for 181 days, almost four times the average for our programme.
At the time of writing, 21 months after the procedure, the donor mother is fully recovered and remains in good health. The recipient child is thriving, with indeterminate (possibly negative) HIV serostatus. 1
The Context: HIV and transplantation in SA
Over the past 15 years, SA has emerged from a troubling era of AIDS denialism which resulted in the deaths of approximately 330 000 people. In 2002, sustained civil society advocacy culminated in a court ruling that obligated the government to roll out antiretroviral therapy (ART). 3 Today, SA has the highest number of incident HIV infections of any country (national prevalence of 12.6%, approximately 7 million people) and approximately half are on treatment (3.4 million). Through successful implementation of our national policy for HIV, child and infant mortality has decreased by 25% with prevention of mother-to-child transmission. Average life expectancy of HIV-positive people increased from 56 years to 61 years in the period from 2009 to 2012 alone. 3 SA also has a long history of solid organ transplantation spanning 50 years. Our health system has the depth and capacity to offer this highly specialised service. Compared with HIV management, access to transplantation is, unfortunately, less equitable and poorly funded, particularly in the state sector. 4
Current Controversy
Numerous factors are thought to influence solid organ availability in SA, and these seem to operate at many different levels simultaneously. In some cases, public perceptions of transplantation and its portrayal in the media have damaged the image of organ donation, often creating uncertainty and mistrust, and this has discouraged people from donating. 5 There are also challenges at the level of health facilities, where staff may hold unfavourable attitudes to organ transplantation or where training and staffing in transplant specialities are not prioritised. 6 Published research has also found a policy vacuum in transplantation, where hospital staff do not understand what is required of them. This may influence transplant numbers and discourage staff from referring potential donors. 7 Wits Donald Gordon Medical Centre is a private academic teaching hospital in the Faculty of Health Sciences, University of the Witwatersrand, Johannesburg. In response to an unmet need for liver transplantation, we started a paediatric liver transplant programme in 2004. To increase access to transplantation for children with liver failure, we expanded the donor pool by introducing a living donor liver transplant programme in 2013. 4 6 As more children were referred to our programme for evaluation of end-stage liver disease, questions about the feasibility of transplantation from an HIV-positive living donor were frequently raised by our staff and by parents. HIV-positive status precluding parents from donating to their children was a cause of consternation.
HIV infection in the donor, when the intended recipient is HIV-negative, is an internationally accepted contraindication for both deceased and living donation and, in some countries, it is illegal. 8 9 However, there are documented case reports of inadvertent transmission of HIV to previously uninfected recipients through deceased donor solid organ transplantation, 10 and we have first-hand experience of this in our own programme (unpublished). Given the circumstances of these inadvertent transmissions, there was no option to administer prophylaxis to prevent HIV seroconversion prior to the transplant procedure. It appears that with ART, overall outcomes and survival in recipients who received HIV-positive deceased donor organs and seroconverted are as good as in those who received HIV-negative deceased donor organs, even without prophylactic measures. 10 What does this mean for transplant practice and HIV in SA? Now more than ever, we have increasing numbers of HIV-positive individuals, with good virological suppression and well-preserved CD4 counts, who would be suitable living donors. This raised a number of questions for our transplant team: Was it really appropriate for us to continue denying the option of living liver donation to HIV-positive adults who expressed willingness to donate and would otherwise be eligible? Was this a decision which adequately facilitated their autonomy and the best interests of their children? Were we doing our best to anticipate the growing transplant need for children with organ failure and their families in SA? As the nature of HIV had been reframed, we realised that our transplant programme had to be appropriately situated within this context.
Regulatory matters and Institutional Review Board (IRB) approval
Prior to the transplant, we undertook constructive engagement on the clinical and ethical issues with our Institutional Review Board (IRB). During this process, the IRB carefully considered the ethical issues highlighted later in the article, as well as the context-which suggested that this was the only option to save the life of the child in question. The IRB ultimately gave us authorisation to perform the case under the auspices of Section 37 of the 2013 Declaration of Helsinki. 9 This section states the following: In the treatment of an individual patient, where proven interventions do not exist or other known interventions have been ineffective, the physician, after seeking expert advice, with informed consent from the patient or a legally authorised representative, may use an unproven intervention if in the physician's judgement it offers hope of saving life, re-establishing health or alleviating suffering. This intervention should subsequently be made the object of research, designed to evaluate its safety and efficacy. In all cases, new information must be recorded and, where appropriate, made publicly available.
HIV-positive to HIV-negative living donor liver transplant has now been formalised as a research programme at Wits Donald Gordon Medical Centre (institutional ethics clearance # M170290 and #M171035) and this work has been published 1 4 as per the stipulations of Section 37.
Ethical issues: risk of HIV transmission versus the benefit of saving a life
Navigating the ethical quandaries presented by this case was not straightforward. Given the serodiscordance of our donor and recipient, we did not have any data on which to base our analysis of risk for mother and child because a liver transplant with this HIV profile (HIV-positive living adult donor, HIV-negative recipient child) had not been undertaken before. Some aspects were obvious. This was a therapeutic intervention with the prospect of direct benefit to the recipient child-saving their life. The primary risk was the possibility that the intervention to save the recipient's life would also infect them with HIV. Given that HIV is eminently manageable, 11 and transplant recipients appear to tolerate ART and immunosuppression well, it was unanimously agreed that the immediate benefit outweighed the immediate risk. A further consideration was that we would be able to carefully control possible HIV transmission by initiating ART prophylaxis in the recipient prior to the procedure, as well as selecting a long-term virally suppressed donor.
Arguments weighing the risk of HIV transmission with the benefit of saving a child's life have taken on new salience subsequent to our initial publication of this transplant, when the issue was debated in a meeting to establish guidelines for HIV-positive to HIV-negative transplantation. HIV clinicians have argued that, in the context of living with a chronic illness, HIV is perhaps preferable to others such as diabetes mellitus or cancer, given the good survival and simplified treatment regimens. However, the stigma around HIV still persists, and that adds complexity to the ethical issues we faced. Also noteworthy is HIV experts' contention that failure to offer HIV-positive parents the option of donation to their HIV-negative children is an infringement of their autonomy and contrary to the best interests of their children.
Based on available literature, the surgical risk to the mother as an HIV-positive adult with a well-preserved CD4 count and undetectable viral load undergoing living donor hepatectomy was deemed to be no greater than that for other living liver donors. 12 However, a potential donor with active, untreated HIV infection would be excluded from this programme. In the latter scenario, the HIV-infected donor faces a risk of postoperative complications and death that is higher than that of an HIV-negative living donor. This is because untreated HIV infection is associated with immune dysregulation and CD4 depletion, opportunistic infections-most commonly tuberculosis, and comorbid viral infections such as hepatitis B and C. 13 In this instance, a strong argument against donation can be made, mostly to protect the donor, and to protect recipients who could potentially be at higher risk of HIV infection due to the donor viraemia, and other opportunistic infections.
An uncertain future
Although the risk/benefit analysis of HIV transmission in the face of certain death for the child without a transplant was relatively straightforward, we could not anticipate the nature of indirect future risks related to drug regimens and interactions, or graft rejection. Although our recipient is doing well, the future in this regard remains uncertain, as it does for those who have inadvertently received HIV-positive deceased donor organs. For us, the tangible direct benefit outweighed the risk of an uncertain future. There are, however, questions about how the child may cope facing numerous uncertainties growing up. If our child is HIV-positive, this may have implications for social interactions and future relationships. It may also influence decisions that the child makes getting older. This differs from the small group of transplant recipients who inadvertently received HIV-positive donor organs. These individuals know they are HIV-positive and will require ART in the future. In this case, we do not know if our recipient child is HIV-infected. This creates a dual uncertainty: for the recipient and family, and for the medical team. Until we have a better sense of the child's HIV status, the family will have to negotiate this uncertainty with us, and this could be stressful for all parties.
Information giving, consent and best interests
One important ethical feature of this case was that the child did not have a say in the decision to proceed with this transplant, being too young to provide meaningful consent. This is not unique to our case, where the decision fell to the parents. Parents routinely make medical decisions about management of diseases in their children, and the over-riding ethical principle that should guide the actions of health professionals is the best interests of the child. However, a well-documented phenomenon in living donor transplant is that parents will often assume excessive risks to themselves to save the lives of their children. 14 The intangible quality of the bond between parent and child means that this instinctual response is usually intrinsic to human nature and it cannot be separated from the transplant process. However, it also casts some doubt on the extent to which the decision to become a living donor and potentially infect one's child with HIV is an autonomous one. This consideration influenced the way we communicated risks and benefits to the child's parents.
A core aspect of the information-giving and consent process was ensuring that the parents appreciated the risk to their child. We were acutely aware of entering unknown territory and went to great lengths to ensure appropriate and detailed communication. We emphasised that we were unsure whether the child would contract HIV. We also took care to ensure that both parents had the capacity and social support to care for an HIV-positive child in the future. Our independent donor advocate (IDA), who is a multilingual community social worker, played a vital role in this process. The parents had several preliminary meetings with the IDA, and post-transplant she continued to informally make representations to the transplant team on behalf of the parents if necessary. Although it will never be possible to remove the emotional ties that compel people to make decisions with an unquantifiable risk-like the decision our parents made in this case-the IDA assists parents in their deliberations and in considering their options from all angles.
Although engaging the services of an IDA in living donor transplantation is routine in many countries, this is not the case in SA. We feel that this mechanism was successful in providing an extra layer of comfort and protection for our parents as they went through the decision-making process-and that the involvement of the IDA assisted in promoting parental autonomy as far as possible. The IDA was also empowered to engage with the medical team, thus mitigating an often-large power-differential between medical team and patients to some extent. Although it may require additional resources, it seems that all living donor transplant programmes in SA should have access to an IDA to ensure high standards of ethical practice.
Fairness and equity
We endeavoured to position this transplant within an ethical framework that was responsive to our context, fair and equitable. We go some way towards achieving more equitable access to liver transplantation through our programme, which allocates and transplants livers based on need, regardless of payer status. 4 Our programme is the first in SA to offer this level of access. Ability to pay for healthcare coverage is an important determinant of who ultimately receives treatment in SA, and it is known to perpetuate socioeconomic inequalities.
New ethical issues
Disclosure and diagnosis
Subsequent to the publication of our case report on this transplant, a number of new ethical issues have come to light that warrant careful consideration.
At the outset of this case, we moved forward on the basis that the recipient was going to seroconvert and become HIV-positive due to the transplant. However, at this stage it is unclear whether seroconversion has taken place or not. For a child growing up, this casts a new shadow of uncertainty that we were unable to anticipate. It also raises questions of disclosure: how to disclose and when to disclose. These questions remain unanswered but are especially important given that we may not know the HIV status of the child for some time. This situation raises the question of how and when to disclose an uncertainty, and the implications of doing so for the future management of the child. Especially important is the autonomy of the child as they get older-and the obligation to include the child in decision-making as far as possible. With so much unknown, this may pose a particular challenge.
Another emerging ethical issue is the extent to which the team ought to seek a definitive HIV diagnosis in the child-and this requires a careful risk-benefit analysis. It has been suggested that a provocative discontinuation of ART may be the only way to know for certain. However, discontinuing ART in an HIV-positive individual comes with significant risk. Reactivation of viral replication might occur. This could be at any time point after cessation of ART, necessitating regular screening of the child for an unknown period of time. Reactivation of HIV replication as a consequence of interrupting ART, while on immunosuppressive therapy to prevent rejection, may have life-threatening complications. This risk needs to be weighed against the risk of keeping a person who may be HIV-negative on ART, which might be unnecessary. Although case studies have demonstrated that HIV-positive transplant recipients tolerate both immunosuppression and ART well, ART does have long-term side effects, and it would be important to ensure that a person was only taking ART because it was indicated by HIV infection.
The way forward
HIV-positive living donor liver transplant to HIV-negative paediatric recipients pivots around a number of central questions: Is it ethical to save a child's life through transplant while at the same time knowingly exposing them to HIV? Is it ethical to place the burden of this decision on a parent who may stop at nothing to save the life of their child? Is it ethical to deny HIV-positive people the option of living donation even when they are otherwise eligible to donate? To definitively determine whether the child does or does not have HIV infection, is a provocative treatment interruption of ART ethically justifiable, given the potential consequences? We have interrogated and agonised over these questions in our setting, and we will be faced with difficult decisions in the future. However, while we actively seek answers, the success of our first case is reason for optimism.

Open access: This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
|
2019-05-16T13:03:46.110Z
|
2019-05-01T00:00:00.000
|
{
"year": 2019,
"sha1": "fc3e5fd2551880bf383e5a3755fa5cee8baaa819",
"oa_license": "CCBYNC",
"oa_url": "https://jme.bmj.com/content/medethics/45/5/287.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "9334d7a19e79794a4a60954112fbd7358cfe67bc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
257636977
|
pes2o/s2orc
|
v3-fos-license
|
Structure, physical properties, and magnetically tunable topological phases in topological semimetal EuCuBi
A single material achieving multiple topological phases can provide potential applications for topological spintronics, yet candidate materials are very limited. Here, we report the structure, physical properties, and possible emergence of multiple topological phases in the newly discovered, air-stable EuCuBi single crystal. EuCuBi crystallizes in the hexagonal space group P63/mmc (No. 194) with the ZrBeSi-type structure and has an antiferromagnetic (AFM) ground state below TN = 11.2 K. A competition between AFM and ferromagnetic (FM) interactions below TN is revealed by electrical resistivity and magnetic susceptibility measurements. With increasing magnetic field, EuCuBi evolves from the AFM ground state with a small FM component, through two possible metamagnetic phases, and finally reaches the field-induced FM phase. Based on first-principles calculations, we demonstrate that Dirac and Weyl states and a possible mirror Chern insulator can be achieved in EuCuBi by tuning the temperature and applying a magnetic field, making EuCuBi a promising candidate for exploring multiple topological phases.
Ternary pnictides with ZrBeSi-type structure having a hexagonal space group P6 3 /mmc (No. 194) have attracted much attention owing to their rich topological properties. For instance, KHgSb with honeycomb HgSb layer hosts an hourglass surface state protected by a glide mirror [28,29].
KZnBi was experimentally confirmed to be a three-dimensional DSM with surface superconductivity [28,29]. Moreover, topological phases in a single ternary pnictide with ZrBeSi-type structure can be tuned. BaAgBi, originally reported as a DSM [30][31][32], can be tuned into a WSM by Eu doping on the Ba site breaking the time reversal symmetry or a TPSM by Cu doping on the Ag site breaking the inversion symmetry [32]. Theoretically, SrAgAs can go through multiple topological phase transitions between DSM, TPSM, and TI by controlling the content of doping Cu atoms on the Ag site [18]. However, it is very challenging to experimentally control the occupancy and content of doping atoms for breaking symmetries and inducing new topological phases. In contrast, tuning the symmetry by changing the spin configuration in magnetic topological materials is much easier [25,33,34]. For example, EuAgP can be switched between nearly TPSM and WSM by controlling the directions of magnetization [25]. Below the Néel temperature, EuAgAs in an antiferromagnetic (AFM) state is a topological mirror semimetal or TPSM. Above the Néel temperature, it is in a paramagnetic (PM) state with a pair of Dirac points [26]. In addition, EuAgAs was experimentally found to possess the topological Hall effect caused by nontrivial spin textures in real space [35][36][37][38][39]. Moreover, the introduction of magnetic atoms into topological materials and the enhanced Berry curvature for magnetism result in other novel properties [40], such as the anomalous Hall effect [41][42][43][44], anomalous Nernst effect [45][46][47][48], magneto-optical effect [49,50], and magnetic spin Hall effect [51][52][53].
Here we report the structure, physical properties, and possibly emerging topological phases of EuCuBi single crystal. As revealed by the electrical resistivity, magnetic susceptibility, and specific heat capacity measurements, EuCuBi undergoes an AFM transition at T N = 11.2 K. Below T N , the presence of weak ferromagnetic (FM) component below μ 0 H 1 is probably due to the competition of FM and AFM interactions. By using first-principles calculations, we demonstrate the multiple topological states in EuCuBi and the transition among them. Below T N and μ 0 H 2, EuCuBi is in the AFM ground state and the possible magnetic structure is AFM [
Single crystal growth, crystal structure, and chemical composition
The EuCuBi single crystals were grown by the high-temperature solution method using Bi as flux. The europium ingot (99.9%, Alfa Aesar), copper powder (99.9%, Alfa Aesar), and bismuth granules (99.995%, Alfa Aesar) were mixed in a fritted alumina crucible set (Canfield Crucible Set) [54] in a molar ratio of 2:1:4 and then sealed in a fused silica ampoule under vacuum. The sealed ampoule was heated to 1273 K, kept for 24 h, and then slowly cooled down to 923 K at a rate of 3 K/h. At this temperature, the crystals were separated from the remaining flux by centrifugation. Shiny and hexagonal-shaped crystals were obtained with a size up to 1 mm × 1 mm × 0.5 mm. The crystal structure was determined by single-crystal X-ray diffraction (SCXRD) on a four-circle diffractometer (Rigaku XtaLAB PRO 007HF(Mo)R-DW, HyPix) at 180 K with multilayer mirror graphite-monochromatized Mo K α radiation (λ = 0.71073 Å) operated at 50 kV and 40 mA. The X-ray diffraction (XRD) data of EuCuBi single crystal were measured on a PANalytical X'Pert PRO diffractometer (Cu K α radiation, λ = 1.54178 Å) operated at 40 kV and 40 mA with a graphite monochromator in a reflection mode (2θ = 10°-100°, step size = 0.017°). The chemical composition was analyzed by a scanning electron microscope (SEM, Hitachi S-4800) equipped with an electron microprobe analyzer for semi-quantitative elemental analysis in energy-dispersive X-ray spectroscopy (EDS) mode.
Physical property measurement
Resistivity and magnetoresistance measurements were performed on a physical property measurement system (PPMS) (Quantum Design, 7 T). Contacts for standard four-probe configuration were established by attaching platinum wires using silver paint, resulting in a contact resistance smaller than 5 with the applied current (about 2 mA) parallel to the crystallographic ab plane and the magnetic field perpendicular to the ab plane [55]. Magnetic susceptibility measurements were carried out with applied magnetic field parallel and perpendicular to the ab plane using the zero-field-cooling (ZFC) and field-cooling (FC) protocols. Isothermal magnetization data were collected at 2 K, 4 K, 6 K, 8 K, 12 K, 20 K, and 50 K under the applied magnetic field up to 16 T parallel and perpendicular to the ab plane on a high-field PPMS (Quantum Design, 16 T). Specific heat capacity data were collected on a PPMS below 220 K.
First-principles calculations
The first-principles calculations were carried out using a plane-wave basis set and projector augmented wave method [56] encoded in the Vienna Ab initio Simulation Package [57,58] for the calculation of electronic band structure. The generalized gradient approximation (GGA) parameterized by Perdew, Burke, and Ernzerhof was used for the exchange-correlation functional [59]. We adopted the GGA+Hubbard-U method with U = 6.0 eV to deal with the strong correlation effects of the Eu-f electrons in the magnetic phases [60]. The energy of the plane-wave cutoff was set to 500 eV. The convergence criterion for the total energy was set to be 10 −7 eV. We used a 9 × 9 × 7 k-mesh as a sample of the Brillouin zone in the Monkhorst-Pack scheme [60]. All results take into account the spin-orbit coupling (SOC). The crystallographic data for EuCuBi determined by SCXRD are shown in Table Ⅰ and Table SⅠ. EuCuBi crystallizes in the hexagonal space group P6 3 /mmc (No. 194) having the ZrBeSi-type structure with a = 4.6099(1) Å, b = 4.6099(1) Å, c = 8.5208(4) Å, α = β = 90°, and γ = 120°. The Cu and Bi atoms are alternately arranged to form a honeycomb layer in AB stacking along the c axis. The Eu atoms are located between the neighboring CuBi layers. The XRD pattern of EuCuBi single crystal is shown in Fig. 1(a), where only the (00l) (l = even) diffraction peaks are observed, indicating that the hexagonal-shaped plane is the crystallographic ab plane. The temperature-dependent electrical resistivity of EuCuBi single crystal measured in the temperature range of 2 K-300 K with the current being parallel to the ab plane (I // ab) is shown in Fig. 1(b). It has a residual resistivity ratio (RRR) (ρ 300 K /ρ Tmin ) of about 1.79 (see Fig. 1(b)), close to those of EuAgAs (RRR = 1.96) [39], EuAuAs (RRR = 2.07) [61], and EuCuAs (RRR = 1.35) [62]. 
The resistivity at 300 K is 122.97 μΩ cm, which is comparable to those of EuAgAs (100 μΩ cm) [39], EuCuAs (110 μΩ cm) [62], and SrAuAs (126 μΩ cm) [63] but lower than that of EuAuAs (430 μΩ cm) [61]. The resistivity gradually decreases with decreasing temperature above T min = 24 K, exhibiting a metallic-like behavior. Below T min , the resistivity first increases and then decreases, exhibiting a sharp anomaly at T N = 11.2 K. The decrease in resistivity below T N is due to the reduced carrier scattering for ordered Eu 2+ moments.
A. Crystal structure and magnetotransport
By applying magnetic field perpendicular to the ab plane (H ⊥ ab), this anomaly is gradually suppressed as the magnetic field increases from 1 T to 5 T, shifting to lower temperature and decreasing in intensity (Fig. 1(c)). Such behavior is typical for an AFM transition [64]. Notably, below the magnetic field of 1 T, the anomaly moves towards higher temperature as the magnetic field increases (the inset of Fig. 1(c)), a behavior characteristic of an FM transition [65,66]. Thus, there probably is a competition between FM and AFM interactions [67,68].
The magnetic-field-dependent transverse magnetoresistance (TMR), defined as [ρ(H) − ρ(0)]/ρ(0) × 100%, at different temperatures is shown in Fig. 1(d) (I // ab, H ⊥ ab). Below 50 K (well above T N = 11.2 K), TMR is negative with a cusp-like feature due to the suppression of magnetic fluctuations by the applied magnetic field, and it becomes positive above 50 K. This suggests that magnetic fluctuations persist well above T N . Under a magnetic field of 5 T, TMR reaches −5.73% at 4 K and −6.63% at 9 K, and is smaller than 1% at 100 K. TMR is more negative at 9 K than at 4 K, similar to the behavior of EuAgAs [39], which may be due to the larger magnetic fluctuations close to T N being more strongly suppressed by the magnetic field. Fig. 2(a) and 2(b) show the ZFC magnetic susceptibility and its inverse for EuCuBi single crystal with a magnetic field of 0.1 T parallel (χ // ) and perpendicular (χ ⊥ ) to the ab plane in the temperature range of 2 K to 300 K, respectively. χ // shows a sharper peak at 11.2 K than χ ⊥ at 11 K, suggesting that the magnetic moments align in the ab plane [62,69,70]. The extended Curie-Weiss law χ(T) = C/(T − Θ P ) + χ 0 , where χ 0 is the temperature-independent susceptibility, C the Curie constant, and Θ P the PM Weiss temperature, is used to fit the magnetic susceptibility at high temperature, yielding the fitting parameters and the effective magnetic moment. Fig. 2(c) and (d) show the ZFC and FC curves under various magnetic fields parallel and perpendicular to the ab plane, respectively. Around T N , χ // increases and then decreases rapidly with decreasing temperature, forming a distinct peak associated with the AFM transition under low magnetic field (the inset of Fig. 2(c)). In contrast, χ ⊥ exhibits a nonmonotonic behavior below μ 0 H 1 (Fig. S2), first decreasing and then increasing as the temperature drops (the inset of Fig. 2(d)). 
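The extended Curie-Weiss fit described above lends itself to a short numerical sketch. The snippet below fits synthetic susceptibility data (all parameter values and the noise level are illustrative placeholders, not the measured EuCuBi values) and converts the Curie constant to an effective moment via the standard molar-cgs relation μ_eff = √(8C) μ_B:

```python
# Illustrative sketch: fitting the extended Curie-Weiss law
# chi(T) = C/(T - Theta_P) + chi_0 to synthetic high-temperature data.
# All numbers are placeholders, not the measured values for EuCuBi.
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta_p, chi0):
    return C / (T - theta_p) + chi0

rng = np.random.default_rng(0)
T = np.linspace(50, 300, 60)            # K, well above T_N
true_params = (7.9, 12.0, 1e-4)         # C (emu K/mol), Theta_P (K), chi_0 (emu/mol)
chi = curie_weiss(T, *true_params) + rng.normal(0.0, 1e-4, T.size)

popt, _ = curve_fit(curie_weiss, T, chi, p0=(5.0, 0.0, 0.0))
C_fit, theta_fit, chi0_fit = popt

# Effective moment in Bohr magnetons (molar cgs units): mu_eff = sqrt(8*C)
mu_eff = np.sqrt(8.0 * C_fit)
print(f"mu_eff = {mu_eff:.2f} mu_B")
```

For Eu2+ (S = 7/2, g = 2) the free-ion value is g√(S(S+1)) = 7.94 μ_B, which is the benchmark a Curie-Weiss fit to a Eu-based magnet is typically compared against.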
Such behavior also implies a competition between FM and AFM interactions, similar to that in EuAgAs [39]. Above μ 0 H 1 , both χ // and χ ⊥ show a magnetic transition that shifts to lower temperature with increasing magnetic field (Fig. 2(c) and (d)), which corresponds to the AFM order.
B. Magnetic properties
Below μ 0 H 1 , dχ // /dT vs. T and dχ ⊥ /dT vs. T both have a peak that shifts towards higher temperature with increasing magnetic field, in common with FM order (Figs. S2(c) and (d)). This is consistent with the resistivity and susceptibility under low magnetic field. The hysteresis loop can be seen in panels (a) and (b). This suggests that the first metamagnetic (MM1) transition is underway. As the magnetization is clearly not saturated up to μ 0 H 3 (see Fig. 2(f)), there may exist a second metamagnetic (MM2) transition below μ 0 H 3 (magnetic moments continue to rotate towards the magnetic field). The isostructural compounds EuAgAs [39,71], EuCuAs [62], and EuAuAs [61] all have the A-type AFM order, in which Eu 2+ is in the FM alignment within the ab plane and in the AFM alignment along the c axis. Considering the very similar magnetic susceptibility and small magnetic anisotropy, EuCuBi most probably also has the A-type AFM order. The specific heat capacity C P (T) of EuCuBi single crystal was measured under zero field in the temperature range of 2 K-213 K and is shown in Fig. 3(a). A distinct λ-shape peak is observed around 11.2 K, which is consistent with the χ(T) and ρ(T) data. The value of C P (T) at 213 K is 67.6 J/mol K, which is close to the classical Dulong-Petit limit C P = 3NR = 74.83 J/mol K with N = 3 for EuCuBi, where N is the number of atoms per formula unit and R the ideal gas constant. The C P (T) in the temperature range 2 K-213 K was fitted by the Debye model using the following expression: C P (T) = γT + 9NR(T/Θ D )^3 ∫_0^(Θ D /T) x^4 e^x /(e^x − 1)^2 dx, where Θ D is the Debye temperature, γ the Sommerfeld coefficient, and γT the electron specific heat term. As shown by the dashed line in Fig. 3(a), the fitting deviates obviously in the high temperature region and yields an unphysically negative γ.
A much better fit is obtained by combining the Debye model with the Einstein model using the following expression [61,72]: C P (T) = γT + b·C Debye (T) + (1 − b)·C Einstein (T) (4), with C Einstein (T) = 3NR(Θ E /T)^2 e^(Θ E /T) /(e^(Θ E /T) − 1)^2, where Θ E is the Einstein temperature and b the weighting factor determining the weight of the Debye model in the lattice specific heat capacity. The parameters obtained from this fitting are as follows: γ = 3.11 mJ/mol K 2 , Θ D = 151 K, Θ E = 400 K, and b = 0.92. The Einstein temperature of 400 K is higher than those of EuAuAs (Θ E = 313 K) and EuMg 2 Bi 2 (Θ E = 305 K), which may be correlated with high-frequency optical modes [61,72]. In addition, we also fitted C P (T) in the temperature range of 18 K-28 K by the low-temperature-limit T 3 term and the electron specific heat term of the Debye model using the following expression: C P (T) = γT + βT 3 (5). Fitting parameters γ = 201.8 mJ/mol K 2 , β = 0.84 mJ/mol K 4 , and Θ D = 177 K are derived. Θ D is obtained from β by the expression: Θ D = (12π^4 NR/(5β))^(1/3) (6). The large γ is due to the magnon contributions of Eu 2+ . The magnetic part of the specific heat capacity C mag (T) is obtained by subtracting the fitted lattice and electronic contributions of the combined Debye and Einstein model (Eq. (4)) from the measured C P (T). S mag (T) is derived by the expression: S mag (T) = ∫_0^T [C mag (T′)/T′] dT′ (7). As shown in Fig. 3(b), S mag (T) tends to saturate and approaches the theoretical value Rln(2S+1) = 17.3 J/mol K for S = 7/2 at higher temperature. At T N , S mag achieves a value of 16.09 J/mol K and reaches 85.5% of Rln(8). In view of the classical λ-shape specific heat peak, the corresponding transition should be a typical second-order magnetic transition [73][74][75].
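The two thermodynamic benchmarks quoted above (the Dulong-Petit limit 3NR and the saturation entropy R ln(2S+1)) can be cross-checked numerically. The sketch below uses a toy eight-level, equal-spacing Schottky model as a stand-in for the measured C_mag(T) — an assumption made purely to illustrate the entropy integral S_mag = ∫ (C_mag/T) dT, not the actual magnon spectrum of EuCuBi:

```python
# Sketch: check the Dulong-Petit value 3NR for N = 3 and verify that
# integrating C/T over temperature recovers R*ln(2S+1) = R*ln(8) for S = 7/2.
# The 8-level equal-spacing "Schottky" model below is illustrative only.
import numpy as np

R = 8.314                      # J/(mol K)
N = 3                          # atoms per formula unit of EuCuBi
dulong_petit = 3 * N * R       # ≈ 74.8 J/(mol K), as quoted in the text

levels = np.arange(8.0)        # 8 levels (2S+1 for S = 7/2), spacing 1 in units of k_B

def c_model(T):
    """Molar heat capacity from the canonical partition function of the levels."""
    w = np.exp(-levels[:, None] / T)
    Z = w.sum(axis=0)
    E = (levels[:, None] * w).sum(axis=0) / Z
    E2 = (levels[:, None] ** 2 * w).sum(axis=0) / Z
    return R * (E2 - E ** 2) / T ** 2

T = np.linspace(0.01, 400.0, 200_000)
c_over_t = c_model(T) / T
# trapezoidal rule: S_mag = integral of C_mag/T dT
S_mag = float(np.sum(0.5 * (c_over_t[1:] + c_over_t[:-1]) * np.diff(T)))

print(round(dulong_petit, 2), round(S_mag, 2), round(R * np.log(8), 2))
```

The integrated entropy converges to R ln 8 ≈ 17.3 J/(mol K) once the temperature window spans the full level scheme, which is the logic behind the saturation of S_mag(T) in Fig. 3(b).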
Combining the results presented above, the magnetic phase diagram of EuCuBi is summarized in Fig. 4. The transition temperatures obtained from the magnetic susceptibility are determined by dχ/dT. The critical magnetic fields are determined by dM/dH from the magnetization. More details can be seen in the Supporting Information. There are five regions in the magnetic phase diagram: below T N and μ 0 H 2 , spins align antiferromagnetically; above μ 0 H 3 , spins align ferromagnetically along the applied magnetic field direction; when μ 0 H 2 < μ 0 H < μ 0 H 3 , spins undergo the possible MM1 and MM2 transitions; and in the low-field region above T N , EuCuBi is in the PM state. A weak FM component may exist in the region below T N and μ 0 H 1.
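Extracting critical fields as extrema of dM/dH, as used above for the phase diagram, can be sketched as follows; the two-step M(H) curve and its step positions (3 T and 9 T) are synthetic stand-ins for the measured isothermal magnetization, not values from the paper:

```python
# Sketch: locating metamagnetic critical fields as local maxima of the
# numerical derivative dM/dH. The M(H) curve below is synthetic (two
# smooth tanh steps), standing in for the measured magnetization data.
import numpy as np

H = np.linspace(0, 16, 1601)                       # field in tesla
H1, H2 = 3.0, 9.0                                  # hypothetical critical fields
M = np.tanh((H - H1) / 0.5) + np.tanh((H - H2) / 0.8)

dMdH = np.gradient(M, H)

# simple local-maximum scan over the derivative, with a noise threshold
peaks = [i for i in range(1, len(H) - 1)
         if dMdH[i] > dMdH[i - 1] and dMdH[i] >= dMdH[i + 1] and dMdH[i] > 0.2]
critical_fields = H[peaks]
print(critical_fields)                             # ≈ [3., 9.]
```

On real data the same idea applies after smoothing; each peak in dM/dH marks a step in M(H) and hence a candidate metamagnetic transition field.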
D. Electronic structure
Within the unit cell of EuCuBi, Eu is at the Wyckoff position of 2a, Cu at the position of 2d, and Bi at the position of 2c. It is noticed that this structure contains a C 6 rotation axis and a vertical σ ν mirror plane, which protect the crossing points on the high symmetry axis. The degeneracy points near the Fermi energy mainly come from the s orbitals of Cu and Bi, and d orbitals of Eu, whereas the magnetism comes from the f orbitals of Eu.
The energy difference between different magnetization directions is tiny (33 μeV), also indicating small magnetocrystalline anisotropy. For the self-consistent calculation, AFM [110] is hard to converge, so the convergence criterion for its electronic energy is set to be 10 −6 eV, while the numerical tolerance for all other configurations is set to be 10 −7 eV. We also calculated the total energy of three probable magnetic configurations. We then calculated the topological features of different magnetic configurations for EuCuBi. The calculated band structure along the Γ-A path is shown in Fig. 5(f). For PM EuCuBi, the conduction and valence bands intersect at the point D. Since both time reversal and inversion symmetries are preserved in the PM state, all bands are doubly degenerate. The magnetic little co-group of the Γ-A path is 6/m'mm, and the conduction and valence bands belong to the two-dimensional irreducible co-representations DT7 and DT9, respectively. The point D is a fourfold-degenerate Dirac point protected by C 6 symmetry. Next, we considered the band structure of the AFM [001] configuration. The Dirac point is destroyed by the absence of time reversal symmetry. The doubly degenerate band and the two non-degenerate bands intersect and form two triply degenerate points T 1 and T 2 . The doubly degenerate band belongs to the irreducible co-representation DT6 of the magnetic little group, while the other two bands belong to DT4 and DT5, respectively. For the AFM [100] configuration, the intersections between the conduction and valence bands are all destroyed. Since we find a non-trivial mirror Chern number for the mirror M z , the AFM [100] phase of EuCuBi is a possible mirror Chern insulator. In addition, the band structure of the FM [001] configuration is also shown in Fig. 5(f). Its magnetic little co-group of the Γ-A path is 62'2', in which there are only one-dimensional irreducible representations. The non-degenerate conduction and valence bands intersect at the W 1 point, forming a doubly degenerate Weyl point protected by C 6 symmetry. 
Finally, the magnetic little co-group of the Γ-A path in the FM [100] and FM [110] configurations is m'm2'. The conduction and valence bands belong to the one-dimensional irreducible representations G4 and G3, respectively, and they intersect at points W 2 and W 3 , forming doubly degenerate Weyl points.
Ⅳ. CONCLUSIONS
In summary, EuCuBi crystallizes in a hexagonal P6 3 /mmc space group with honeycomb layers formed by Bi and Cu atoms, in which the six-fold rotation axis protects the degenerate points on the Γ-A path. An AFM transition at T N = 11.2 K is confirmed by the resistivity, magnetization, and specific heat capacity measurements. The presence of a weak FM component below T N and μ 0 H 1 suggests that there is a competition between FM and AFM interactions. A magnetic phase diagram of EuCuBi is established. Below T N , the symmetries influenced by the long-range magnetic order lead to the emergence of multiple topological states, including TPSM, WSM, and a possible mirror Chern insulator. The magnetic phases with different topological states can be effectively obtained by tuning the magnetic field and temperature. Magnetically tuned topological phase transitions may provide a new perspective on spintronics. The results indicate that EuCuBi should be a promising candidate for revealing the interplay of topology and magnetism and exploring applications in spintronics. The chemical composition of EuCuBi single crystal was analyzed by a scanning electron microscope (SEM, Hitachi S-4800) in energy-dispersive spectroscopy (EDS) mode. The normalized Eu : Cu : Bi ratio is 1.00(1) : 1.00(1) : 1.01(1).
|
2023-03-22T01:16:36.708Z
|
2023-03-21T00:00:00.000
|
{
"year": 2023,
"sha1": "33f991494cce18534f6f27b44194c9a0b4614467",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "33f991494cce18534f6f27b44194c9a0b4614467",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
267322176
|
pes2o/s2orc
|
v3-fos-license
|
A global perspective on household size and composition, 1970–2020
Households are core units of social organization and reproduction, yet, compared to other areas of demographic research, we have limited understanding of their basic characteristics across countries. Using data from 793 time points and 156 countries in the new CORESIDENCE database, this article provides a comprehensive analysis of global household size and composition trends. The findings reveal that despite significant international variations in household size, ranging from 1.8 in Denmark to 8.4 in Senegal, there is a widespread decline in household size. On average, households have decreased by approximately 0.5 persons per decade. Children contribute to over three-quarters of the observed variability and decline in household size in recent decades. In contrast, the number
Introduction
Households constitute the most basic unit of interaction among humans and have profound implications for the social and economic reproduction of their members (Becker, 1998; Esping-Andersen, 2016; Laslett, 1970; Le Play, 1871; Parsons, 1949). They are widely used as units of enumeration for data collection purposes and have significant implications for research on poverty, living conditions, family structure, or gender dynamics (Deaton, 1997; England & Farkas, 2017; Lanjouw & Ravallion, 1995). At the micro-level, studying households provides insight into the processes that shape societies, including decision-making, resource allocation, consumption, and socialization (Agarwal, 1997; Becker, 1998; Browning et al., 1994). At the macro-level, household change is often linked to broader social and economic processes such as urbanization, housing dynamics, aging, or family change (Buzar et al., 2005; Clark & Dieleman, 2017; Lesthaeghe, 2020; Mulder, 2006; Myers, 1990). Despite their importance, comparative research on households at a global level is relatively scarce. Most existing studies tend to focus on single countries/regions or age groups and rarely combine multiple data sources (Asis et al., 1995; Bongaarts, 2001; Bongaarts & Zimmer, 2002; Burch, 1967; Dommaraju & Tan, 2014; Esteve et al., 2012a, 2012b; Salcedo et al., 2012; Thomson, 2014; van de Walle, 2016; Vos, 1990). While these studies provide valuable insights into the living arrangements existing in individual societies, they do not normally lead to a comprehensive understanding of variations in household size and composition on a more general scale. To address this knowledge gap, this study aims to answer the following question: How does the size and composition of households vary among countries and regions and how has it evolved in the relatively recent past?
We make use of diverse data sources derived primarily from population censuses and household surveys to comprehensively examine patterns of change for 156 countries and 793 data samples spanning from 1960 to 2021. These countries represent a broad range of demographic, social, and economic conditions and have undergone profound transformations in recent decades, including fertility decline, increases in life expectancy, educational expansion, and rises in per capita income. Modernization and demographic transition theories have relied on these transformations to predict a process of increasing individualization and rapid aging of societies which, according to these theories, will ultimately have an impact on the size and composition of households (Cherlin, 2012; Furstenberg, 2019; Goode, 1963; Lesthaeghe, 1989; Ruggles, 1994).
In this article, households are the units of analysis. We recognize that household-level analysis does not control for individual characteristics (e.g., age, sex, education, marital status). Nonetheless, households are important in demographic research because they provide insights into links between living arrangements and population structure. Household changes reflect demographic trends, such as declining fertility and the weakening of marriage; structural dynamics, such as urbanization; and socio-cultural dynamics, as shown in the rich literature on households developed in the twentieth century (Goode, 1963; Laslett & Wall, 1972; Todd, 1985). We aim to add a global comparative perspective that provides an overview of past and present changes in household size and composition that is currently missing from the literature.
Background
Households have attracted the attention of several social science disciplines, including sociology, economics, anthropology, and demography. Sociological perspectives have primarily focused on gender roles, socialization, and family structure (Bales & Parsons, 2013; Forste & Fox, 2012; Goode, 1963), while economic views have examined household consumption and resource allocation (Becker, 1998; Browning et al., 2014; Mason, 1988). Anthropological perspectives have centered on kinship and different cultural dimensions (Goody, 1976; Murdock, 1967). Demographers have primarily investigated household size and composition and their determinants (Bongaarts, 2001; Ruggles & Brower, 2003). This current study, rooted in the demography of households, will have implications for an array of social science disciplines.
Examples of such implications extend into social, economic, and ecological dimensions and are often complex. Increases in one-person and two-person households, relative to larger ones, change demand patterns in the housing market and household characteristics (Mulder, 2006), and their impact on housing markets can have ecological consequences linked to the provision of resources and infrastructure (O'Neill & Chen, 2002; Zagheni, 2011). Gender roles, division of labor, and norms associated with living arrangements evolve as household characteristics change (Bianchi et al., 2000; de Laat & Sevilla-Sanz, 2011; Pessin, 2018; Sevilla-Sanz et al., 2010). Smaller households might have positive social effects, as fewer members can reduce complexity, vulnerability to conflict, and domestic violence. However, one-person households, especially at older ages, and living arrangements of single parents might be linked to feelings of loneliness, social exclusion, and economic deprivation (Holt-Lunstad et al., 2015; Nieuwenhuis & Maldonado, 2018). Fewer children in the household might reduce care obligations and change intra-household roles, which can impact female labor market participation and gender relations. Yet, changes in the transition to adulthood observed across different country contexts might increase the time spent in intra-generational households (Billari et al., 2001; Esteve & Reher, 2021; Furstenberg, 2010). We are not directly examining the implications of household change in this article, but we highlight them to emphasize the relevance of the household-level perspective.
Household size
Household size refers to the number of individuals living together in a household at the time of census or survey data collection. Defining what constitutes a household, and who qualifies as a member, presents challenges when comparing across countries. The United Nations (UN) defines a household as "a small group of persons who share the same living accommodation, who pool some or all of their income and wealth, and who consume certain types of goods and services collectively, mainly housing and food" (United Nations, 1993), but specific practices can vary significantly across countries (Bongaarts, 2001). Household membership can be defined by de jure or de facto enumeration. The de jure criterion includes persons who normally live in the household, while the de facto criterion refers to those who spend the census night in the dwelling. In societies where there is a significant number of temporary displacements and absences, this distinction may have significant effects. Generally, however, existing evidence shows that differences between the two criteria with respect to their impact on average household size tend to be negligible at the aggregate level, even in Sub-Saharan Africa, which historically exhibits the most complex structure of household organization (Lesthaeghe, 1989; van de Walle, 2016).
The size of a household is mainly determined by the number of children and the type of coresident family group (Glick, 1976). In societies with high fertility rates, households tend to be larger than in those with low fertility rates, and declines in fertility rates invariably lead to declines in household size. The coresident group is mainly determined by two factors: the number of adult members in the household and the nature of their kin or non-kin relationships. Most commonly, these relationships involve a certain degree of kinship. Family-based households can be broadly categorized into two main types: nuclear and extended (Laslett & Wall, 1972). A nuclear family household comprises a couple and their children, or any combination of them, whereas an extended household involves kin such as grandparents, aunts, or uncles, and others. In societies where nuclear arrangements predominate, the average household size tends to be lower than in societies where extended households are more frequent (assuming similar levels of fertility and mortality). Among non-family households, we can distinguish two types: single-person households and multi-person households whose members are not related by any degree of kinship (Ruggles, 1988).
As an indicator of household size, we take the number of persons living in any given household. It is important to note, however, that different distributions of small and large households can produce similar average household sizes. In this study, we will examine trends both in average household size and in the distribution of households by size. Average household size provides the link between the total population and the total number of households (Mulder, 2006; Myers, 1990). These dynamics have both macro and micro implications. At the macro level, variations in household size have direct implications for the housing market and the economy in general (Bloom et al., 2003; Espenshade et al., 1983; Malmberg, 2012). When people live in small households, family members tend to be spread over different units. This has consequences for the share of private transfers that take place within or between households (Hammer & Prskawetz, 2022; Lee & Mason, 2011; Vargha et al., 2017). At the micro level, household size shapes interfamily relationships and, thus, the process of socialization. The size of a household can shape power dynamics within households and their distribution along gender and intergenerational axes.
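The point that different size distributions can hide behind the same mean can be made concrete with a minimal sketch. The household rosters below are hypothetical, not CORESIDENCE data: two countries share an average household size of 3.0 while differing sharply in their shares of one-person and large (5+) households.

```python
from collections import Counter

def household_stats(sizes):
    """Average household size and shares of one-person and 5+ households."""
    n = len(sizes)
    avg = sum(sizes) / n
    dist = Counter(sizes)                  # households tallied by size
    share_single = dist[1] / n             # one-person households
    share_large = sum(c for s, c in dist.items() if s >= 5) / n  # 5 or more members
    return avg, share_single, share_large

# Two hypothetical countries with identical means but opposite distributions.
country_a = [3, 3, 3, 3]   # uniform three-person households
country_b = [1, 1, 5, 5]   # polarized: singles and large households

print(household_stats(country_a))  # (3.0, 0.0, 0.0)
print(household_stats(country_b))  # (3.0, 0.5, 0.5)
```

Both countries would appear identical in a map of average household size, which is why the analysis also tracks the proportions of single-person and 5+ households.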
Household composition
Household composition refers to the internal structure of households. In this study, we explore two interconnected dimensions of household composition: age structure and the relationship to the household head or reference person. First, by analyzing age structure, we aim to understand how the presence of children and adults within households varies across societies and how it has evolved. As fertility decreases, life expectancy increases, and populations subsequently age, a decline in households with children and an increase in households with older adults would be expected to reflect these changes. Changes in the presence of children and/or elderly individuals in households have implications for intergenerational support, caregiving patterns, and expenditure dynamics, as households with children and older adults may have different consumption patterns and demand characteristics (Hammer & Prskawetz, 2022; McGarry & Schoeni, 2000; Vargha et al., 2017).
Second, we analyze the type of relationships existing between household members and the person of reference. Censuses and surveys most often define a reference person (also known as the head of the household) to whom other members can be related. The relationship to the household head provides valuable insights into the family configurations of households (Bongaarts, 2001; Posel, 2001). The structure of intra-household relationships constitutes an indicator of the strength of family ties in any given society (Reher, 1998). To facilitate cross-national comparisons and maximize the number of countries included in the analysis, we consider four types of relationships to the person of reference: child, spouse/partner, other relative, and non-family. We take the presence of other relatives of the person of reference in the household as indicative of more complex or extended household structures that depart from the strictly nuclear household (Ruggles, 1994).
A central goal of this study is to assess the contribution of children and other relatives to variations in household size across societies and over time. This will allow us to elucidate the extent to which the distinctive characteristics of household configurations across societies persist during times of ongoing reductions in the number of children. Fertility declines will reduce the number of children in society and, therefore, their presence in households, but this may not necessarily modify the type of families commonly found in households. However, if the decline in fertility is embedded within a broader process of social and economic modernization, a progressive simplification and nuclearization of households could also be a part of this very process (Cherlin, 2012; Lesthaeghe, 2010). In other words, fertility decline can potentially be associated with increases in the importance of nuclear households and a decline in the presence of other relatives in households.
Changes and convergence in household size and composition
Although the global scope of this study prevents a detailed examination of the underlying mechanisms of changes in household size and structure for individual countries, we can identify some of them. Firstly, demographic dynamics shape household size and composition. As fertility declines, families, and often households, become smaller because there are fewer children on average (Bongaarts, 2001; Bongaarts & Zimmer, 2002). In the long term, fewer children can also imply fewer siblings and smaller family networks (Murphy, 2011; Tomassini & Wolf, 2000), which might reduce the number of vertically extended households. However, fewer children could reduce intra-household resource competition, thus contributing to a delayed transition to adulthood and higher levels of intergenerational coresidence in some contexts (Aparicio-Fenoll & Oppedisano, 2016; De Falco et al., 2023). Increases in longevity and population aging can increase the duration of overlap between generations but also contribute to increases in households with older adults (Jiang & O'Neill, 2007; Zeng et al., 2008). Other demographic determinants of households are changes in union formation dynamics, migration, and changes in health and mortality patterns.
Thirdly, socio-cultural factors such as norms and values associated with family, marriage, kinship, and gender are closely associated with household size and composition. Declines in patriarchal family organization and parental control have been far-reaching in some world regions, contributing to different relationship types and changes in the timing of life course events, which impact household size and structure (Esteve et al., 2012a, 2012b; Ruggles, 2015; Therborn, 2004, 2006). Value systems and norms may evolve or persist over time, resulting in different household typologies across countries and regions (Therborn, 2004).
In the literature, it has been suggested that the above-described changes will contribute to a global convergence of household size and composition (Goode, 1963). Embedded in the larger framework of development theory, this idea of convergence has always been present in demography, initially linked primarily to the core aspects of mortality and fertility, and later extended to partnership dynamics (Cherlin, 2012; Furstenberg, 2019; Pesando & GFC team, 2019). However, theories of demographic change have paid little attention to convergence in household size and structure. To find theoretical references that contribute to this topic, we must turn to sociology, primarily drawing from the work of William Goode, who in the 1960s aimed to adapt economic modernization theory to a systematic study of the family across different world regions (Goode, 1963). He predicted that societies undergoing industrialization would witness an increase in conjugal families and a decline in extended households due to a reduction in the economic dependence on the family as a unit of social organization and reproduction. Goode's influential research on household and family change has emphasized the adaptive nature of families and households to the needs of society (Cherlin, 2012; Goode, 1963). Nonetheless, Goode failed to adequately predict further changes in society that would weaken conjugal life and present alternatives to the nuclear family model (Cherlin, 2012; Furstenberg, 2019), and his postulates could neither be empirically verified at a global scale nor countered by a theory of the same scope (Cherlin, 2012; Pesando, 2019).
Since Goode's seminal work, only Göran Therborn's 2004 book, "Between Sex and Power: Family in the World, 1900-2000," set out to offer a similarly comprehensive global analysis of shifts in family patterns (Therborn, 2004). Therborn, while endorsing the notion of worldwide transformations in family systems, diverged from Goode's convergence hypothesis (Cherlin, 2012). Instead, Therborn directed his attention to what he perceived as a growing complexity and heterogeneity within global family systems across three analytical dimensions: (1) shifts in the roles and authority of fathers and husbands; (2) changes in marriage, cohabitation, and non-marital relationships; and (3) population policies (Therborn, 2004). He proposed that, rather than converging, family systems on a global scale would continue to evolve and diverge. This implies that various regions across the world would witness the emergence of distinct family patterns, notwithstanding shared underlying social dynamics such as declining fertility rates or alterations in union formation and types (Cherlin, 2012; Pesando, 2019; Pesando & GFC team, 2019; Therborn, 2004, 2006).
The debates around convergence of household composition initially centered on the structural and cultural forces promoting or hindering the nuclearization process (Goode, 1963; Therborn, 2004). Limited theoretical consideration was given to the role of demographic change (Ruggles, 1987). Demographic shifts, particularly in fertility (Burch, 1967; Dorius, 2008), shape households, reducing their size not only directly through fewer children but also indirectly by thinning kinship networks. However, questions arise regarding whether the presence of other relatives in households will change and how household composition will evolve over time.
Data
The data used in this study are taken from the CORESIDENCE database (Galeano et al., forthcoming manuscript), which provides household-level indicators at the national and subnational levels for 156 countries, comprising 793 data points over time. The CORESIDENCE database combines data from various sources, including population censuses obtained via the Integrated Public Use of Microdata Series-International (IPUMS-i) (Minnesota Population Center, 2020), Demographic and Health Surveys (DHS), Multiple Indicator Cluster Surveys (MICS), European Labor Force Surveys (EU-LFS), and other miscellaneous sources. All data used in this study are openly available in the CORESIDENCE database (https://zenodo.org/record/8142652). The CORESIDENCE database offers several advantages over the UN Database on Household Size and Composition (United Nations, 2022): it includes a more detailed set of indicators, has higher temporal coverage, and draws from data sources beyond MICS. Additionally, data in the CORESIDENCE database are available at the sub-national level. The CORESIDENCE database is an open-source project, and the code for the harmonization processes is available for open access, which increases transparency and allows for reproducibility of all results.
The available indicators in the database are grouped into four main categories: size, age composition, kinship structure, and household headship. Each category includes multiple indicators. In this study we focus on the average household size, the proportion of single-person households, the proportion of households with 5 people or more, the proportion of households with children aged 0 to 4, the proportion of households with people aged 65 or more, the average number of children (offspring) of the reference person, the average number of other relatives present in the household, irrespective of age, and the average number of members by age. We use country-level data to produce a global comparison that emphasizes trends and changes over time and across countries. Adding a sub-national perspective would expand the paper beyond its scope in terms of necessary explanations and data presentation.
The data available for each country often come from different sources, and in the process of building the indicators from the microdata, the individual weights were provided by the data sources. It is important to highlight that while data and indicators can be harmonized, definitions cannot be harmonized across samples. Thus, harmonization primarily refers to the construction of indicators, such as household size typology indicators and relationships to the head. Underlying definitions for headship across some samples may differ. Additionally, the harmonization process entails harmonizing data at the subnational level, as geographical regions may have changed over time. While there is a high level of consistency among the sources within the same country, we treat each source independently for graphical representations. Therefore, when plotting trends over time, lines will only connect data points taken from the same source.
All analyses and graphical representations in this study are based on the 156 countries and 793 samples described here, with the sole exception of the map in Fig. 1, which shows the average household size of 19 countries and territories for which there are no data available after the year 2000 in the CORESIDENCE database. Data for these cases come from the United Nations Database on Household Size and Composition 2022. These are Norway, Japan, Greenland, Iceland, New Zealand, Taiwan, Israel, French Guiana, Sri Lanka, Lebanon, United Arab Emirates, Equatorial Guinea, Central African Republic, Svalbard, Saudi Arabia, Libya, Kuwait, Djibouti, Eritrea, Iraq, and Oman. Data presentation by geographical region serves visualization purposes and is not based on theoretical or analytical claims with respect to these groups. Such clustering would require indicators beyond the household-level ones used for this research. Instead, we emphasize our focus on the country-level analysis and have grouped countries to make the visualization on the global scale possible. However, we note the important work on clustering of family systems in the literature (e.g., Castro et al., 2022). For the data analysis and visualization, we treat countries and samples as single points in time. No weighting takes place beyond the weighting of the source data, as the objective is not to analyze living arrangements at the micro-level but to outline differences at the macro-level for countries and continental regions, as is common in cross-national descriptive analysis.

At the opposite extreme, 37 countries spread across Africa, Asia, and Oceania have average household sizes above 5 individuals. These countries collectively represent over 11 percent of the world's population. The regions with the largest households include West Africa, Central Africa, East Africa, West Asia, South and Central Asia, and Melanesia. The range of household size in these regions is quite broad, spanning from 5 persons per household in Tanzania to 8.42 persons per household in Senegal. Of the top 10 countries with the largest household sizes in the world, 5 are in Africa (Senegal, The Gambia, Guinea, Guinea-Bissau, and Mauritania) and 5 in Asia (Afghanistan, Oman, Pakistan, Yemen, and Iraq).
Changes in size
Eighty percent of the countries in the world have households with a size ranging from 2.3 to 5 persons per household. These countries represent 83 percent of the world's population. At the lower end of this range (2.3 to 2.6), we find the majority of European countries along with Canada and Australia. In the next tier (2.6 to 3.13), we have countries such as China, the United States, and countries from the southern part of Latin America (Argentina, Chile, and Uruguay). These countries represent 25 percent of the world's population. With values between 3.13 and 4.63 persons per household, we find a mix of countries spread across all continents. Between 4.63 and 5 persons per household, we find India, the world's most populous country, along with countries in Southeast Asia (e.g., Laos), North Africa (e.g., Algeria), and two countries in Central America (e.g., Nicaragua). Lastly, 6.24 percent of the world's population lives in countries with an average household size above 5.84.
Figure 2 shows time trends of average household size for 156 countries around the world. Labels have been used to denote those countries with longer data series and those exhibiting values deviating significantly from the central trends. In absolute terms, the decline is most pronounced in countries with the largest households. In most countries, household size diminishes monotonically over time. African countries exhibit the greatest disparities in this regard. The diversity of data sources used for African countries in this study may explain a part of this pattern. When census data alone are available, as in the case of Latin America, trends over time are more consistent. As countries approach an average of two individuals per household, the rate of decline slows.
Figure 3 provides a summary of the observed trends. It shows the variation over time in the average household size by country, considering the time elapsed between the most recent observation since the year 2000 and the earliest available observation, always comparing observations from the same source. Countries are identified by their labels. We have added trend lines for major regions to facilitate the analysis. The color indicates the continent of origin. Out of the 128 represented countries, the average household size has decreased in 113 of them. Generally, the longer the observation period, the greater the decline.
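The per-country comparison behind Figure 3 can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not code from the CORESIDENCE project: observations for one country-source pair are (year, average household size) tuples, the earliest observation is compared with the latest one from 2000 onward, and the difference is expressed per decade.

```python
def decadal_change(observations):
    """Change in average household size per decade for one country-source series.

    observations: list of (year, avg_household_size) tuples from the SAME source,
    mirroring the article's rule of never mixing sources. Compares the earliest
    observation with the most recent one since the year 2000.
    """
    obs = sorted(observations)                     # order by year
    first_year, first_size = obs[0]
    recent = [o for o in obs if o[0] >= 2000]
    if not recent:
        return None                                # no post-2000 observation
    last_year, last_size = recent[-1]
    span = last_year - first_year
    if span == 0:
        return None                                # a single time point, no trend
    return (last_size - first_size) / span * 10    # persons per decade

# Hypothetical series: 4.8 persons in 1980 falling to 3.8 in 2010,
# i.e. a decline of roughly one-third of a person per decade.
print(decadal_change([(1980, 4.8), (1995, 4.4), (2010, 3.8)]))
```

A negative value corresponds to the widespread decline described in the text; countries with no usable post-2000 observation simply drop out, which matches the 128-of-156 coverage of Figure 3.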
In the Americas, we observe an average decrease of approximately one person per household every two decades. As an exception, Haiti maintains a relatively stable average household size of around 4.7, while the United States experienced a decline of around half a person over four decades between 1960 and 2015. Across Europe, a widespread decline in household size is evident, occurring at an average rate of one person per household every five decades. Ireland stands out as an exception, exhibiting relative stability around 2.8 persons per household over a period of 45 years. In Asia and Oceania, household size decline is widespread, with exceptions visible in Pakistan, Papua New Guinea, Yemen, and Australia. Notably, South Korean households have experienced a substantial decline of 2.7 persons per household over four decades. Thailand's households have experienced a similar decrease (2.9) over five decades. Africa exhibits the greatest heterogeneity, with several countries experiencing virtually no change in average household size. Countries such as Ethiopia, Guinea, Benin, and Cameroon have maintained a consistent household size for about three decades. Conversely, Botswana has seen a decrease of 3.3 persons per household over three decades. Kenya's average household size has declined by 1.1 persons over 40 years.
The decline in the average household size implies a redistribution of households by size. In general terms, a decrease in household size should lead to a decline in the importance of the largest households, together with an increase in the importance of smaller households. Changes in the timing of life course events, such as a delayed transition to parenthood and marriage, contribute to a higher number of young individuals living alone. At the same time, population aging contributes to a higher proportion of older adults in the population who might live alone. Figure 4 represents the country-level proportion of unipersonal households. In recent decades, in some European countries, but also in South Korea, Australia, and Botswana, single-person households headed by a person below age 50 have increased. This increase is particularly pronounced in Botswana and South Korea. In both, the proportion of single-person households headed by younger adults increased by more than 10 percent between 1980 and 2010. In most countries of Asia and Oceania, and the Americas, the proportion of unipersonal households with relatively younger heads has remained somewhat stable, below 5 percent in the former and below 10 percent in the latter country groups. In Africa and Europe, there is high variability in the proportion of unipersonal households headed by an adult below age 50 across countries, ranging from close to 0 to over 20 percent.
Headship of unipersonal households by adults aged 50 or above is increasing in most countries in the Americas and Europe, reaching 20 percent of households in the United States. At the extremes, we find countries like Hungary, Germany, and the Netherlands, where around 25 percent of households are one-person households headed by an adult of at least age 50, with pronounced increases in recent decades. In Africa, the proportion of unipersonal households with a relatively older head remains below 10 percent in all countries. In most countries in Asia and Oceania, with exceptions such as South Korea, Australia, and China, the share of unipersonal households headed by an adult of at least age 50 remains below 10 percent as well; however, recent trends suggest slight increases. Thus, single-person households are extremely rare in Africa and most Asian countries, quite widespread in Europe, and growing rapidly across Latin America. This points to a considerable heterogeneity in household structures and underscores the need to analyze household composition to understand how and under what conditions declines in household size occur.
Changes in household composition
Figure 5 illustrates cross-national variations in the average number of household members by age. This indicator is calculated by dividing the number of people of each age across all households by the total number of households. The panel on the left provides an overview of the members' contribution by age group for the most recent sample after 2000. The panel on the right illustrates changes and variability in the contribution of each age group between the earliest and the latest available sample, adjusted for a decade of change. This approach involves dividing the observed change between the earliest and most recent observations by the number of years between the two observations and multiplying it by 10. For visualization purposes, the panels are presented in continental clusters.
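The two computations described above can be sketched directly. The inputs below are hypothetical (the real indicator is built from harmonized microdata): the contribution of an age group is the count of household members in that group divided by the number of households, and the change between two samples is standardized to a decade.

```python
def members_per_household(age_counts, n_households):
    """Average number of household members in each age group:
    people of that age across all households / total households."""
    return {group: count / n_households for group, count in age_counts.items()}

def decade_standardized_change(early, late, year_early, year_late):
    """Per-decade change in each age group's contribution between two samples:
    observed change divided by the years elapsed, multiplied by 10."""
    factor = 10 / (year_late - year_early)
    return {g: (late[g] - early[g]) * factor for g in early}

# Hypothetical country with 1,000 households at each time point.
c1990 = members_per_household({"0-19": 2200, "20-64": 2100, "65+": 300}, 1000)
c2020 = members_per_household({"0-19": 1300, "20-64": 1900, "65+": 400}, 1000)
change = decade_standardized_change(c1990, c2020, 1990, 2020)
print(change)  # age-specific contributions to the per-decade change
```

By construction, summing the age-specific contributions recovers the per-decade change in the average household size (here, a decline of one-third of a person per decade, driven mostly by the 0-19 group), which is the decomposition shown in the right panel of Figure 5.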
For each country and year, the relative contribution of each age group correlates perfectly with the relative weight of that group in the total population. The figure is thus reflective of what we would observe in a traditional population pyramid. Depending on the number of households over which the population is distributed, the absolute level will differ. Two countries with similar population structures but different household sizes will show different values. In most countries in Africa, Asia, and the Americas, the absolute (and relative) share of each age decreases as we observe higher ages. In these countries, children are the group most present in households, followed by teenagers. At the opposite end, the elderly are the least numerous. European countries are the exception to this pattern, as the most frequent age groups in households correspond to ages 40-49 or 50-59. Cross-national differences within Europe are comparatively smaller than in countries on other continents.
The right panel shows the contribution of each age group to the variation in the average household size. For each country, the sum of the total contributions by age equals the observed change in the average household size. Changes are adjusted to a decade of change during the observed period. The range of the boxplots highlights the difference between countries. The median values reveal the magnitude of the change. Negative values contribute to the decline in household size and positive values to the increase. Compared to the rest of the world, European countries are fairly homogeneous and experience few variations in the age structure of their households. The contribution of ages between 0 and 49 has decreased, while that of older ages has increased, albeit modestly. In the rest of the countries, the decline in the contribution of ages between 0 and 29 is more pronounced and is not offset by the small increase in older age groups.
The differences across countries and over time outlined here are the reflection of two main factors: firstly, the share of each group in the total population; secondly, the degree of concentration or dispersion of individuals across households, summarized by the average household size. A decomposition on a global scale of the different dynamics is not straightforward, but we can highlight the need for a further analysis of changes in household size and composition by age to better understand to what extent demographic dynamics and the distribution of the population across households drive the observed differences over time. In Morocco, Nigeria, and Ethiopia, average household size is similar, with 4.6, 4.7, and 4.8 people, respectively. However, in Ethiopia and Nigeria, children and teenagers (ages 0-19) account for around 55 percent of household members, compared to 37 percent in Morocco. In the United States and Denmark, young individuals account for a somewhat similar share of the household, with 25 percent and 21.3 percent, respectively, despite a difference in average household size of nearly one person. Germany, with an average household size of 2, is placed between the U.S. and Denmark, but young people account for only 18 percent of the household members. The share of older adults in the household (ages 70+) is similar in Denmark (14.8 percent) and Germany (16 percent) but different in the U.S. (9.7 percent). Thus, differences and similarities in the average household size mainly reflect underlying demographic dynamics, such as fertility patterns and the population age structure, yet variation may also arise due to a distinctive social organization of people across households.

Fig. 5 Cross-national differences in the age-specific contribution to the average household size and change over time. Left panel shows the data for the most recent sample and the right panel the change in the age-specific contribution between the earliest and latest samples, standardized for a decade. Source: CORESIDENCE database
Figure 6 provides an alternative perspective on this trend. This figure illustrates trends in the average number of children of the person of reference (all ages) per household (upper panel) and the average number of other non-primary kin (all ages) per household (bottom panel). Both groups are a large part of the total number of coresident kin and constitute important components in changing household size. This figure shows convincingly that the average number of children of the reference person in the household is declining in all countries worldwide, including Africa. The variations in levels are significant and reflect dynamic fertility conditions, mortality rates, and children's patterns of transitions to adulthood. In most European countries, the average number of children per household is less than 1. In Africa, the average number of individuals who are reported as children of the reference person is at least two. It should be noted that the number of children of the head does not have to be perfectly aligned with the total fertility rate of a country due to multigenerational, fragmented, or polygynous households, in which there may be children present that are not of the household head and therefore not included, such as grandchildren. This is particularly relevant across African countries, where child fostering, polygyny, and intergenerational coresidence are relatively more common, compared to other countries.

Fig. 6 Country-level trends of the average number of children and other relatives within the household. Source: CORESIDENCE database
The bottom panel provides information on the time trends of the average number of other relatives in the household. Generally, other kin are less frequent in households than children. There are, however, large variations across countries both within and across continental regions. The countries with the highest number of other relatives are found in Africa and Asia. Most African and Asian countries have values above 0.5 other relatives per household. In Europe, the United States, Australia, and South Korea, the number of other relatives is comparatively lower than in the rest of the world. There are no significant declines in the numbers of other relatives in the household over time. Thus, as households are shrinking in size, the stability over time in the number of other relatives in the household implies a relative increase in their weight within households.
Figure 7 summarizes the data displayed in Fig. 6 and contributes additional information. We use box plots to summarize the variability in household size using the most recent data since 2000 (left panel) and to illustrate the variability of change in household size between the earliest and latest observation, adjusted to a decade of change (right panel). In both cases, we examine the variability in space and time, considering only the household head (A). Furthermore, we explore the variability in size by considering different sets of household members. Each time, we add one type of household member to analyze the impact on the overall variation. We start with the household head (A), and then systematically add members in an incremental manner: first spouses (B), then children (C), followed by other relatives (D), and finally non-relatives (E). This exercise provides insights into the contribution of each type of member to the observed variability.
Initially, in all households, regardless of the country, there is only one reference person. If we add spouses (B), the median increases slightly; however, the interquartile range (IQR) hardly varies. The most significant impact on increasing the median and the variation of the IQR is observed when children are added to the household (C). In this scenario, the median value increases to 3.2 and the IQR increases to 1.5. If we add other relatives (D), the median increases to 3.8, a nearly 20 percent increase compared to the prior scenario. Lastly, adding non-relatives has a negligible effect on the IQR and the median. In summary, children and other relatives account for more than 90 percent of the variation between countries in the average household size, though the importance of children is much greater. The presence of spouses/partners of the reference person has little effect on variation between countries.
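The incremental exercise (scenarios A through E) can be illustrated with a toy table of per-household member counts. The numbers below are invented for illustration and do not reproduce the medians and IQRs reported in the text.

```python
import statistics

# Hypothetical per-household counts: (head, spouse, children, other kin, non-kin).
households = [
    (1, 1, 2, 0, 0),
    (1, 0, 1, 1, 0),
    (1, 1, 3, 2, 0),
    (1, 0, 0, 0, 1),
]

scenarios = ["A: head only", "B: +spouses", "C: +children",
             "D: +other relatives", "E: +non-relatives"]

# Add one member type at a time and recompute the average household size,
# mirroring the incremental decomposition described above.
for k, label in enumerate(scenarios, start=1):
    sizes = [sum(h[:k]) for h in households]
    print(f"{label}: mean size = {statistics.mean(sizes):.2f}")
```

In this toy data, as in the paper's cross-national comparison, the jump from scenario B to C (adding children) produces the largest increase in average size.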
The right panel replicates the same structure as the left panel, but it summarizes cross-national variations in change over time in terms of the average household size. This graph addresses the question of which type of member has contributed the most to the decline in household size in recent decades. Change over time has been standardized to account for a decade of change. This approach involves the same steps as those for the standardization used in Fig. 5. By applying this method, we ensure that the observed changes in household size across different decades are comparable, reducing the likelihood of biases due to varying observation periods.
In most countries, the size of households has decreased. Spouses/partners make the least significant contribution to the decrease in household size (B). In 75 percent of cases, the rate of decline falls between 0.16 (Q3) and 0.53 (Q1). Children make the most significant contribution to the decrease in household size (C). In this case, the median value drops to -0.27, and the IQR increases compared to the prior scenario (B). When we include other relatives in the household (D), the median value of change decreases to -0.33 and the IQR increases slightly. In the last scenario, we add individuals who are not related to the reference person within the household. The median value stands at 0.34 individuals less per household in a decade of change and the IQR at 0.37 when all members are considered (E).
Conclusions
Households play a crucial role in people's lives. The structure of households and the way they change over time are the focal points of broad transformations in society, including demographic dynamics, changes in values, and economic changes that have implications in areas such as poverty, the division of labor, or gender dynamics. Despite this, there are no studies that document the global transformation of households in two of their most basic dimensions: size and composition. This study has filled this gap by analyzing household-level data from 792 censuses and surveys conducted in 156 countries. While we have not delved into the underlying factors driving household formation, our study analyzes the diversity of household configurations and examines change over time. The idea of convergence has historically been of great relevance for family demographers and sociologists, yet to date it has remained unanswered due to data limitations (Furstenberg, 2019; Pesando & GFC team, 2019). Our analysis yields clear findings on the basic components of household change around the world over the past few decades.
Firstly, the world population is clustered in increasingly smaller households and large households are becoming far less common. On average across countries, households have declined by about 0.5 persons per decade. The reduction in household size is more significant in countries that initially had larger household sizes, contributing to a gradual convergence on a global scale. The decrease in household size is primarily related to the decline in the number of children in the household. Children account for more than two thirds of the decline observed in recent decades. By contrast, the number of other relatives in the household has remained relatively stable or has declined only moderately. Households are shrinking in size, but their composition might not be converging globally to the same extent as their size. Variations across countries are likely driven by differences in the timing of life course events and life course trajectories, cultural and religious norms, as well as contextual and individual-level factors. Future comparative research aiming to explore these differences could explore changes in living arrangements throughout the life course across time and countries, and possible implications for household size and composition.
As households become smaller, their age structure also undergoes changes. Across the world, we observe an increase in households with elderly individuals and a decrease in households with young children. The visualization of household structure by age groups highlights steep declines in the contribution of younger household members to the average household size. However, older adults do not contribute to increases in household size, not even among European countries with large shares of older adults in the population. This suggests that as populations are aging, the number of households increases, resulting in increasing proportions of one- and two-person households. This trend has direct policy implications with respect to housing needs but also more indirect repercussions, such as changing energy needs in the case of a higher number of separate households relative to a lower number of larger households (Bardazzi & Pazienza, 2023; Ermisch, 1991; Zagheni, 2011; Zeng et al., 2021). If households become smaller, intra- and inter-household transfer patterns might change (Abio et al., 2021; Furstenberg et al., 2015; Hammer & Prskawetz, 2022; Lee & Mason, 2011), but also gender relations and the division of labor in the household (Bales & Parsons, 2013; Bianchi et al., 2000; de Laat & Sevilla-Sanz, 2011; Forste & Fox, 2012). As the number of children in the household decreases, the need for care work in the household declines, possibly allowing for more opportunities of work outside the household. Women are the primary family caretakers across many societies; hence such household changes can have profound implications for their negotiation power and economic opportunities (Agarwal, 2011; Cherlin, 2012). Yet, an increasing number of older adults in the population suggests that care needs will prevail, and care work might have to be directed towards older generations rather than younger ones.
If these older generations increasingly reside in separate households, care work might become more resource intensive, especially if fewer members of the younger generation are available to take on these responsibilities.
Secondly, despite common trends, there is great diversity of sizes and types of households across countries. Two countries in this study, Denmark and Finland, have an average household size below 2, while two others, Senegal and The Gambia, have average values above 8. All other countries lie between these extremes. When observed on a map, the regional patterns formed by countries based on their average household size provide insight into the geographical distribution of family systems. European countries stand out distinctly from the rest of the world due to notable characteristics such as smaller household sizes, a higher prevalence of single-person households, and of households with older adults. These features are consistent with a lower presence of children and of other relatives in European households compared to the rest of the world. Despite the differences that may exist within most of the developed world, households are distinct in size and structure compared to those in the rest of the world.
The roots of the uniqueness of these family patterns are still debated among academics. Some authors attribute them to advanced stages of economic and demographic processes, while others attribute them to cultural legacies (Lesthaeghe, 2020; Reher, 1998; Therborn, 2004, 2006). Countries in Africa, Latin America, and Asia exhibit greater internal diversity while also showing systematic differences with respect to Europe and other developed countries. Generally, households in these regions tend to be larger, with a higher proportion of children and other relatives. Yet ongoing declines in household size are considerably faster here than in the more developed world. At least on the surface, our results point to a possible global convergence in household size that to date seems far from complete. Our results further align with research suggesting that households and family systems may converge in some respects but diverge in others, possibly supporting the "convergence to divergence" hypothesis (Pesando & GFC Team, 2019; Therborn, 2004, 2014). It is important, however, to point out that this study did not focus on underlying drivers of convergence such as kinship structures and specific relationship types among household members. Future research will be needed to address these aspects on a global scale as well.
Based on the results observed here, both in terms of spatial and temporal variations, we can draw some considerations for the future. If fertility rates and the number of births continue to decline, household size will be reduced even further. Thus, global convergence in household size is closely linked to global convergence in fertility (Dorius, 2008; Pesando & GFC Team, 2019). Over the medium and long term, the decline in fertility will also impact the availability of living relatives (such as siblings, cousins, brothers-in-law, and uncles) within households (Furstenberg et al., 2015; Murphy, 2011). With fewer children and kin, the ability to maintain complex and large households based on traditional models will be significantly reduced. Conversely, increased life expectancy will lead to a longer overlap of generations between parents and their children, potentially favoring intergenerational co-residence (Esteve & Reher, 2021). A comprehensive study of coresidence patterns offers a major challenge for researchers in this field.
Households and family systems evolve due to demographic, social, and economic dynamics. It is crucial to recognize the procedural and contextual nature of these changes. The complexity of global analysis of households stems from the understanding that dynamics and contextual factors influencing the shrinking of household size today, such as lower fertility rates, may not hold the same relevance in the past or future across all countries. Regions with a prolonged history of fertility below replacement rates may experience diminishing significance of low fertility in shaping future household size and composition. As has been argued by Cherlin (2012: 601), with respect to the continuity of change in family systems, "[…] there is no more reason to think that we have reached an endpoint today than there was in 1963". As long as households and families remain deeply intertwined, the question of future global convergence in household size and composition depends on the uncertainty and complexity of involved dynamics.
Figure 1
Figure 1 displays a global map illustrating average household size per country, based on the most recent data available since the year 2000. The 163 countries for which data are available are divided into 10 deciles of the distribution of household size. These categories are arranged from the smallest decile (depicted in the blue shade) to the largest decile of household sizes (represented by the red shade) worldwide. The legend incorporates an embedded histogram, providing a visual representation of the proportion of the world's population represented by each category. The average household size ranges from 1.83 individuals per household in Denmark 2021 to 8.42 individuals per household in Senegal 2019. The countries with the smallest households in the world (under 2.3) are located in West and North Europe as well as Japan. Together, these countries account for slightly over 5 percent of the global population. Denmark and Finland, with average household sizes below 2 individuals, have the smallest average household size worldwide. At the opposite extreme, 37 countries spread across Africa, Asia, and Oceania have average household sizes above 5 individuals. These countries collectively represent over 11 percent of the world's population. The regions with the largest households include West Africa, Central Africa, East Africa, West Asia, South and Central Asia, and Melanesia. The range of household size in these regions is quite broad, spanning from 5 persons per household in Tanzania to 8.42 persons per household in Senegal. Of the top 10 countries with the largest household sizes in the world, 5 are in Africa (Senegal,
Fig. 1
Fig. 1 Average household size by country, most recent year available since 2000. Histogram legend shows the percentage of the world's population in each category. Each category represents 10 percent of the 156 countries represented in the map. Sources: CORESIDENCE database and UN Household database
Fig. 2
Fig. 2 Country-level trends in average household size. Source: CORESIDENCE database. Two-letter country codes and country names in alphabetical order: AL Albania, AM Armenia, AR Argentina, AT Austria, AU Australia, BA Bosnia and Herzegovina, BD Bangladesh, BE Belgium, BF Burkina Faso, BG Bulgaria, BI Burundi, BJ Benin, BO Bolivia, BR Brazil, BW Botswana, BY Belarus, CD Congo Democratic Republic, CF Central African Republic, CG Congo, CH Switzerland, CI Cote d'Ivoire, CL Chile, CM Cameroon, CN China, CO Colombia, CR Costa Rica, CU Cuba, CY Cyprus, CZ Czech Republic, DE Germany, DO Dominican Republic, DZ Algeria, EC Ecuador, EE Estonia, EG Egypt, ES Spain, ET Ethiopia, FJ Fiji, FR France, GA Gabon, GH Ghana, GM The Gambia, GN Guinea, GR Greece, GT Guatemala, GW Guinea-Bissau, HN Honduras, HR Croatia, HT Haiti, HU Hungary, ID Indonesia, IE Ireland, IL Israel, IN India, IR Iran, IT Italy, JM Jamaica, JO Jordan, KE Kenya, KH Cambodia, KM Comoros, KR South Korea, KY Kyrgyz Republic, KZ Kazakhstan, LA Laos, LC Saint Lucia, LR Liberia, LS Lesotho, LT Lithuania, LU Luxembourg, LV Latvia, MA Morocco, MD Moldova, ME Montenegro, MG Madagascar, MK Macedonia, ML Mali, MM Myanmar, MN Mongolia, MR Mauritania, MU Mauritius, MV Maldives, MW Malawi, MX Mexico, MY Malaysia, MZ Mozambique, NA Namibia, NE Niger, NG Nigeria, NI Nicaragua, NL Netherlands, NP Nepal, PA Panama, PE Peru, PG Papua New Guinea, PH Philippines, PK Pakistan, PL Poland, PR Puerto Rico, PS Palestine, PT Portugal, PY Paraguay, RO Romania, RS Serbia, RU Russia, RW Rwanda, SI Slovenia, SK Slovakia, SL Sierra Leone, SN Senegal, SV El Salvador, TD Chad, TG Togo, TH Thailand, TJ Tajikistan, TL Timor-Leste, TM Turkmenistan, TR Turkey, TT Trinidad and Tobago, TZ Tanzania, UA Ukraine, UG Uganda, UK United Kingdom, US United States, UY Uruguay, UZ Uzbekistan, VE Venezuela, VN Vietnam, XK Kosovo, YE Yemen, ZA South Africa, ZM Zambia, ZW Zimbabwe
Fig. 3
Fig. 3 Variation in average household size by country over time. Time measured in years elapsed between the most recent observation since the year 2000 and the earliest observation, always within the same data source. Color indicates continental region. Source: CORESIDENCE database

Figure 4 serves to illustrate this point. It represents the share of unipersonal households headed by a person below age 50 (upper panel) and by a person aged 50 or older (lower panel).
Fig. 4
Fig. 4 Country-level trends of the proportion of unipersonal households by age of the household head. Source: CORESIDENCE database
Fig. 7
Fig. 7 Cross-national differences in the member-specific contribution to the average household size and change over time. Cross-national variations in average household size based on the most recent data since 2000 (left panel) and decadal change in average household size (right panel), considering different types of members. Source: CORESIDENCE database
Rasch Scaling of a Screening Instrument
The ERIraos Checklist (CL) is a screening instrument for assessing psychosis risk. Measuring CL data on a Rasch scale means that this scale locates individuals on a dimension of “proximity to psychosis onset” according to their current prodromal status. The probabilistic Rasch model leads to interval (difference) scales. The CL data from the German Research Network on Schizophrenia (GRNS) study were analyzed using the Rasch program Winsteps. All item measures based on data from different patient groups were consistent with the Rasch model. Examples demonstrate how item parameters were comparable in different subgroups and in patients in the early and late prodrome. The CL is a simple assessment tool fulfilling the requirements of a Rasch scale. This guarantees good psychometric properties, such as a high reliability and internal validity, and yields a measure of the construct “proximity to psychosis onset” on a difference scale.
Measuring Prodromal Development on a Rasch Scale
The probabilistic test model of Rasch implies that the symptoms in question are located on a single homogeneous dimension. Symptom development starts at the far (left) end of this dimension. With a growing number of symptoms, the severity of the disorder, too, increases. Finally, the closest proximity to the time-point when transition to psychosis happens is reached. The symptoms should unfold in a similar sequence in all patients. It is assumed that, once the symptoms have occurred, they usually persist continuously. These assumptions about symptom sequence and symptom persistence imply that in a sample of prodromal patients who are at different stages of the prodromal development non-specific symptoms are reported most often, because more patients experience and pass through the early than the later stages. In contrast, symptoms of greater specificity occur at lower frequencies, because fewer patients reach the late stage of disease development characterized by the more specific symptoms. It is possible to demonstrate that the terminology of Rasch scaling is applicable to the association between symptom presence and stage of disease development for the purpose of measuring "proximity to psychosis onset" by symptoms of growing severity. If this conception is correct, then the ERIraos Checklist (CL) should fulfill the requirements of the Rasch test model.
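Under these two assumptions (a fixed symptom sequence, and persistence of symptoms once they have occurred), a cross-section of patients at different stages produces a Guttman-like frequency pattern: early, non-specific symptoms are common and later, specific symptoms are rare. A small simulation with invented stage counts makes this concrete.

```python
# Patients at stage s are assumed to show symptoms 1..s (fixed sequence,
# symptoms persist). The stage counts are hypothetical: fewer patients
# have progressed to the later, more specific stages.
stage_counts = {1: 40, 2: 25, 3: 20, 4: 10, 5: 5}
total = sum(stage_counts.values())

for symptom in range(1, 6):
    # A patient reports this symptom if their stage is at least `symptom`.
    freq = sum(n for stage, n in stage_counts.items() if stage >= symptom)
    print(f"symptom {symptom}: {freq}/{total} patients")
```

The resulting frequencies decline monotonically with symptom specificity (100, 60, 35, 15, 5 in this toy example), mirroring the text's observation that non-specific symptoms are reported most often.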
How Does Rasch Scaling Work and What Are Its Advantages?
Rasch scaling is a psychometric technique for scaling attributes (abilities, attitudes), based on the work of the Danish mathematician Georg Rasch (1960). The observed test value (raw score, sum score) is used as an indicator of the attribute or ability we want to measure. The attribute itself is a latent trait parameter, because we cannot observe it directly. The Rasch model belongs to the category of item-response models. It is called a two-parameter model, because it needs only two parameters to explain the test scores: a difficulty parameter to characterize the items and an ability parameter to characterize the persons (Andrich, 1988). The basic assumption holds that the probability of a correct answer to an item increases with the ability of the person tested and decreases with the difficulty of the item. Hence, it is possible to construct odds indicating a person's chances of solving the item versus not solving it: p(x = 1)/p(x = 0). The functions showing how the answer probabilities change depending on the probands' abilities are called item-characteristic curves (ICCs). The Rasch model produces measurements on a difference scale. Such scales are characterized by absolute measurement units of 1 (= logits) and an arbitrary zero-point, which is usually defined by an item with a solution probability of 50%. This probabilistic test model is consistent with the fact that human behavior is not normally deterministic, but influenced by random factors. All ICCs of the items defining a Rasch scale have an identical shape. They are only shifted in parallel along the abscissa. An important advantage of the Rasch model is that the number of items answered correctly is a sufficient statistic of the total information for estimating the ability. This attribute of the Rasch model is called "specific objectivity."
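For the dichotomous case sketched above, the model can be written as P(x = 1) = exp(theta - b) / (1 + exp(theta - b)), where theta is the person ability and b the item difficulty, both in logits. A minimal sketch:

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability of a positive response for a
    person with ability `theta` on an item with difficulty `b` (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def rasch_odds(theta, b):
    """Odds p(x=1)/p(x=0); under the Rasch model this equals exp(theta - b)."""
    p = rasch_p(theta, b)
    return p / (1.0 - p)

# Where ability equals item difficulty, the response probability is 50%,
# which is the zero-point convention mentioned above.
print(rasch_p(0.0, 0.0))     # 0.5
# One logit of ability advantage multiplies the odds by e.
print(rasch_odds(1.0, 0.0))  # ≈ 2.718
```

Plotting rasch_p over a range of theta for several values of b would reproduce the identically shaped, parallel-shifted ICCs described in the text.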
Examples for Rasch Scales in Psychiatric Research
The test model was originally developed for measuring people's task-solving ability by the difficulty of tasks in school attainment tests. In psychiatric research, Rasch scaling has been applied only seldom. There are a few notable publications from research into psychopathology and schizotypal personality and some from quality-of-life research. Lewine, Fogg, and Meltzer (1983) applied the Rasch model to derive psychiatric scales for assessing negative and positive symptoms in schizophrenia. The authors presented the basic ideas of Rasch scaling, addressing the question of how to transfer these ideas from attainment measurement to the assessment of psychopathological measures.
A large number of studies have reported worse results for people with schizophrenia in the "University of Pennsylvania Smell Identification Test" (UPSIT). Minor, Wright, and Park (2004) conducted Rasch analyses on the UPSIT in chronic patients with schizophrenia and healthy controls. These analyses were done with the assistance of B. D. Wright, the leading protagonist for Rasch analysis in the United States, using the Winsteps software (Linacre & Wright, 1998). The result was that the UPSIT items measured one single (unidimensional) construct "smelling ability" in patients with schizophrenia and healthy controls. The items (here, probes for smelling) and persons could be located on this ability-to-smell dimension. Lange, Greyson, and Houran (2004) reported on the Rasch analysis of a 16-item scale for the assessment of near-death experiences (NDE; Greyson, 1983) with a satisfactory result.
The detection and possibly also the treatment of mental disorders in non-psychiatric settings need simple screening instruments, applicable daily by general practitioners (GPs) not specially trained in psychiatry. Fink et al. (2004) conducted a Rasch analysis within a validation study of a short diagnostic questionnaire comprising eight dichotomized items (e.g., depressive mood, feeling everything is an effort, nervousness or shakiness inside) of the Symptom Checklist Eight-Item Dichotomized Version (SCL-8d) for the recognition of psychiatric disorders in general hospitals. These authors' aim was similar to ours with the ERIraos CL. Olsen, Jensen, Noerholm, Martiny, and Bech (2003) analyzed the internal and external validity of the Major Depression Inventory (MDI) as a method of measuring the severity of depressive states. The MDI includes 12 items, among them the 10 depressive symptoms of the International Classification of Diseases (ICD-10). The authors applied not only the method of classical test analysis to determine the factorial structure and Cronbach's alpha but also modern psychometric techniques, such as Rasch and Mokken analyses. The Rasch analysis confirmed the symptom order on a single dimension, and the order of the symptoms was comparable with that of the less restrictive Mokken model. Zelinski and Gilewski (2004) reported a further application of the Rasch model to measuring the self-efficacy of memory. The 10 items of the Rasch-modeled "Memory Self-Efficacy Scale" were determined by the computer program Winsteps of Linacre and Wright (1998), an algorithm we also used in our Rasch analyses. Zelinski and Gilewski (2004) started with the 33 items of the "Frequency of Forgetting Scale," including the best discriminating and most informative items in the new scale. According to the authors, the advantage of the 10-item Rasch scale was its shortness and psychometric properties comparable with the complete 33-item scale.
Finally, Tennant, McKenna, and Hagell (2004) suggested using the Rasch analysis in the development and application of instruments for assessing quality of life. In the Schizophrenia Outpatient Health Outcome (SOHO) study (Prieto et al., 2003), conducted within a European Union project, the transcultural validity of the EuroQol five-dimension scale (EQ-5D) was tested. The item order was nearly identical in the 10 cooperating countries: (a) mobility; (b) self-care; (c) usual activities; (d) pain, discomfort; (e) anxiety, depression; and the item parameters were comparable across the different Rasch analyses in the 10 participating countries.
Early Detection of Psychosis Risk by the Early Recognition Inventory ERIraos
In the German Research Network on Schizophrenia (GRNS), psychosis risk was assessed using the Early Recognition Inventory ERIraos. The ERIraos is a two-step procedure. In Step 1, a 17-item CL is used as a screening instrument, applied before a contact with specialized services for early intervention is established. It is used by GPs, psychiatrists in free practice, and psychologists. In Step 2, experts carry out a comprehensive diagnostic assessment of the "at-risk mental state" at an early-intervention center using the complete 110-item Symptom List (SL) and additional modules for the assessment of further risk factors, such as genetic risk, obstetric and birth complications, alcohol and drug use, and schizotypal personality traits. Table 1 shows an overview of the ERIraos components.
In the GRNS early-intervention study, the ERIraos was administered to 235 patients who fulfilled the criteria for the initial prodrome of psychosis (Häfner et al., 2004). The aim of the study was to derive a model for predicting indication for early intervention (cognitive-behavioral therapy [CBT] or atypical neuroleptic medication), for example, by finding a formula to determine the probability of transition to psychosis.
A prerequisite for a successful early intervention in psychotic disorders is knowledge of the relevant prognostic symptoms occurring in the initial prodrome, at the very beginning of the psychotic episode or, finally, in psychotic transition. A decision for early intervention can be taken in a responsible way only on the basis of reliable information on patients' at-risk status.
Due to the relative rareness of psychotic disorders and a risk period-sometimes extending over several decades-a psychosis screening in the general population is neither possible nor practical. Early detection, therefore, requires self-identification and self-selection in the first step. This means that people at risk consult their family doctors or psychologists/psychiatrists to report their early, mostly non-specific symptoms and ask for help. In this way, people become active and find their way in the treatment system. Usually, the first point of contact in this system is the GP. Hence, GPs should be sufficiently informed about early prodromal symptoms, risk factors, and forms of manifestation of the initial prodrome to be able to decide how to proceed. It is for this first step of risk augmentation that the CL has been included as a screening instrument in the early recognition inventory ERIraos. It helps the GP to explore early symptoms and, if necessary, initiate the next diagnostic steps at a specialized early-detection and early-intervention center.
The ERIraos CL
The CL, applicable both as an interview and a questionnaire, comprises 17 items. All these items are also included in the comprehensive ERIraos SL. The CL symptoms are listed in Table 2.
The CL symptoms are rank-ordered according to their increasing specificity for psychosis. Those occurring early in the process of disease development are non-specific in nature, for example, restlessness, impaired sleep, depressive mood, and social withdrawal. The subsequent symptoms, closer to psychosis onset, include mistrust, changed perception, and the experience of derealization. As the disorder progresses, APS or BLIPS will occur, indicating a high risk for transiting to psychosis. If the sequence of symptom development is in accordance with the pattern described, then it should be possible to arrange the CL symptoms on a single dimension called "psychosis proximity." The Rasch model presumes that with growing proximity to psychosis onset the probability of symptom manifestation increases.
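The core assumption stated here, that the probability of symptom manifestation rises with proximity to psychosis onset, can be sketched with the dichotomous Rasch item response function. The difficulty values below are purely illustrative, not taken from the study's Table 3:

```python
import math

def rasch_prob(theta, b):
    """Probability that a person with proximity measure theta endorses
    a symptom with Rasch difficulty b (dichotomous Rasch model)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical difficulties: a non-specific early symptom vs. a rare,
# psychosis-near one (values are illustrative).
early, late = -1.5, 1.5
for theta in (-1.0, 0.0, 1.0):
    # Endorsement becomes more likely as theta grows, and the rarer
    # symptom stays less likely than the common one at every level.
    assert rasch_prob(theta, early) > rasch_prob(theta, late)
assert rasch_prob(0.0, 0.0) == 0.5  # measure 0 = 50% endorsement probability
```

The sketch also illustrates why "difficult" items correspond to rarer symptoms: at any fixed proximity level, an item with a higher difficulty parameter is endorsed less often.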
ERIraos CL Data Used for the Rasch Analyses
The Rasch algorithm of the Winsteps program presumes dichotomous items (correct vs. not correct, or symptom present vs. not present). As the CL items allow a third category, "uncertain," this category had to be counted as 0 (symptom not present). In the following, the project numbers 1.1.2 and 1.1.3 refer to the GRNS early-intervention studies, based on n = 125 patients in the early prodromal phase and n = 101 in the late prodrome. Project 1.1.1 comprised a heterogeneous group of n = 1,060 probands from the GRNS studies, for whom 965 CL data sets were available: persons who were unable to or refused to participate in the early-intervention studies, persons who were already psychotic, and persons who did not fulfill other criteria (medication, age) for the early-intervention studies.
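The recoding described here, collapsing the three CL response categories into the binary format the Rasch algorithm presumes, can be sketched as follows. The category labels are assumed for illustration; only the rule that "uncertain" counts as 0 is stated in the text:

```python
# Map the three CL response categories onto the dichotomous coding
# required by the Rasch algorithm: "uncertain" counts as 0, i.e.
# symptom not present. (Category labels are assumed.)
RECODE = {"present": 1, "uncertain": 0, "not_present": 0}

def dichotomize(responses):
    """Recode one respondent's CL answers into 0/1 scores."""
    return [RECODE[r] for r in responses]

row = ["present", "uncertain", "not_present", "present"]
assert dichotomize(row) == [1, 0, 0, 1]
```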
Comparisons of the item order and independent estimations of item parameters permitted conclusions on reproducibility and, hence, on the solution's independence from the samples.
The Rasch analyses were conducted using the program Winsteps of Linacre and Wright (1998; Version 3.52 of July 2004), and SPSS 14 for Windows was used for the statistical analyses.
Rasch Scaling of the CLs from the GRNS Studies
The result of the Rasch analysis run on the 226 CLs from the GRNS studies is presented in Table 3. It shows for the 17 CL items the measures that define the items' location on the Rasch scale (see column "measure" in Table 3). The items are arranged by decreasing difficulty (total score/count). The scale units are logits, and the value 0 corresponds to a solution probability of 50%; an item with this measure would be located between Items No. 5 and 10. Increasing "difficulty" here means that patients more often state that the symptom has not been present. Hence, "difficult" items correspond to rarer symptoms, thus indicating growing proximity to psychosis onset. At the upper end of the scale, at a considerable distance from the other symptoms, Item No. 17 "hallucinations" is located, which in the study sample already indicates transition to psychosis and may occur in this sample as BLIPS. At the lower end of the scale, the non-specific Symptom No. 8 "tension, nervousness, restlessness" is located. The items of the scale range between the values −1.95 and +1.71. All in all, the item order is plausible: Further non-specific symptoms are located at the lower end of the scale (depressive mood, social withdrawal, reduced interest in work, disturbed body functions, shyness/timidity). They are depressive and negative in type, frequently reported as occurring at the non-specific onset of the prodromal stage in psychotic disorders. They are followed by symptoms of disturbed thinking (basic symptoms) and symptoms of dysphoric mood. Both types of symptoms already indicate a higher specificity for psychosis. Next, there are symptoms of an increasing proximity to psychosis onset, for example, derealization or mild paranoid symptoms (unstable ideas of reference, subject-centrism, mistrust, ideas of persecution).
The only non-specific symptom ranking high, although it actually should not indicate a high proximity to psychosis onset, is Item No. 7 "reduced self-care." This symptom was rated more often by patients at the early prodromal stage than by those in the late prodrome in Project 1.1.3. As this result contradicted the assumption of homogeneity of the items included in a Rasch scale, the item became a candidate for elimination from the CL. But the Rasch model proved quite flexible against such deviations from the model assumptions, so the standardized in- and outfit statistics did not require the symptom to be excluded (cf. Table 3). Items with infit or outfit measures falling outside the range of ±2 standard deviations do not fit the model and should be eliminated. In the analysis presented here, only Item No. 4 "disturbed body functions" exceeded this cutoff (see also the graphical part of Table 3).
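The elimination rule just described, flagging items whose standardized infit or outfit statistics fall outside ±2 standard deviations, amounts to a simple filter. A minimal sketch with hypothetical fit values (not the study's):

```python
# Flag items whose standardized infit or outfit statistic lies
# outside the +/-2 SD range, i.e. items not conforming to the model.
def misfitting(items, limit=2.0):
    return [name for name, infit_z, outfit_z in items
            if abs(infit_z) > limit or abs(outfit_z) > limit]

# Hypothetical standardized fit values for three CL items:
items = [
    ("tension, nervousness, restlessness", 0.4, -0.3),
    ("disturbed body functions", 2.6, 2.2),
    ("hallucinations", -1.1, 0.7),
]
assert misfitting(items) == ["disturbed body functions"]
```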
A measure for the overall evaluation of the scale is a reliability coefficient of .97. It corresponds to the quotient of the variance determined by the model divided by total variance (model variance plus residual variance). This quotient approaches a maximum of 1, provided the model estimations and the empirical values are located close to each other, and that was the case here.
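The reliability coefficient described above is the quotient of model variance over total variance. A minimal sketch with illustrative variance components (not the study's actual values):

```python
def rasch_reliability(model_variance, residual_variance):
    """Reliability as model variance over total variance; it
    approaches 1 when the model estimates and the empirical
    values lie close together."""
    return model_variance / (model_variance + residual_variance)

# Illustrative numbers only: a small residual variance relative to
# the model variance yields a coefficient near 1.
assert abs(rasch_reliability(0.97, 0.03) - 0.97) < 1e-9
```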
A second cross-validation was done in a different, nonoverlapping sample of patients who did not participate in the early-intervention studies (CL 1.1.1 total). To make the Project 1.1.1 sample better comparable with the GRNS total sample, patients with CL scores below the cutoff value of 6 and those who were already definitely psychotic and fulfilled the inclusion criteria of the GRNS Project 2 first-episode study were excluded from the analysis in the subset of patients called "CL 1.1.1 reduced." This restriction had hardly any influence on the computation of the Rasch item parameters.
The Rasch analyses were based on these subgroups and data sets. Item order and item measures were fairly consistent. After splitting the GRNS total sample into Project 1.1.2 and Project 1.1.3 subsamples for the purpose of cross-validation, patients showing different proximities to psychosis onset were compared. These comparisons yielded differences in the item difficulties. It was particularly interesting to see whether the Rasch measures could be reproduced even under these more stringent test conditions. In the subsamples, the items showed different difficulties, as we had expected they would, but it was still possible to reproduce the item order (Table 4).
The computer program Winsteps, which we used to compute the Rasch analyses, yielded a reliability measure for each scale. It can be interpreted in the same way as Cronbach's alpha coefficient. The items showed reliability measures ranging from .94 to .99 for the different scalings and, thus, were very satisfactory without exception. For the persons, they ranged between .60 and .72, which seemed to be somewhat low compared with the item reliabilities. Another unsatisfactory result was that some items turned out not to conform to the model in various analyses.
Testing the Correspondence of the Rasch Scales: Correlations of the Item Measures
The different Rasch scalings can be correlated with each other to test whether they lead to comparable item measures. We computed both product-moment correlations (see Table 5) and rank correlations (Spearman's rho coefficients). In the latter case, only the symptoms' positions in the rank order of the items were relevant. The correlations yielded satisfactory results. We concluded that the result of the Rasch analysis was replicable and the construct "psychosis proximity" is adequately operationalized in the ERIraos CL.
The product-moment correlations were above .90 throughout the CL data. The rank correlations were somewhat lower, but still satisfactory.
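Both coefficients used in this comparison can be computed without specialized software. A small self-contained sketch on hypothetical item measures from two independent scalings (the values are illustrative, not the study's):

```python
def pearson(x, y):
    """Product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Rank correlation (Spearman's rho); no ties in this sketch."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical item measures from two independent Rasch scalings:
scale_a = [-1.9, -1.2, -0.4, 0.3, 1.1, 1.7]
scale_b = [-1.7, -1.0, -0.5, 0.4, 0.9, 1.8]
assert pearson(scale_a, scale_b) > 0.9
assert abs(spearman(scale_a, scale_b) - 1.0) < 1e-9  # identical rank order
```

As in the text, rank correlations only register the symptoms' positions in the item order, so a perfect rho can coexist with a product-moment correlation slightly below 1.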
Graphical Testing of the Correspondence of the Item Measures
Another possibility to test the correspondence of the Rasch measures proceeds from a linear transformation of item measures by the equation y = x − a. Rasch scaling results in a difference scale, which allows only this special form of linear combination (with the multiplicative factor b = 1). The units of Rasch scales are always logits. Hence, they are like the scale units of an absolute scale, but the zero-point can be chosen arbitrarily. This means that the measures can be shifted around the zero-point either to the right (or to the left) by adding (or subtracting) a constant. If the measures of the "real" CL in the GRNS total sample are used as a reference, then all the other measures should be located on the straight line in Figure 1, provided the values correspond exactly. For different scalings, the values can be shifted parallel to this line. The result is presented in Figure 1: All the measures are located closely to the reference line, for Project 1.1.2-characterized by lower scores-somewhat below it, for Project 1.1.3-characterized by higher scores-somewhat above it. As expected, the replication turned out well.
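Because Rasch measures form a difference scale, two scalings that differ only by the additive constant a in y = x − a represent the same solution. This can be checked numerically by centring each set of measures, which removes the arbitrary zero-point (the reference values below are illustrative):

```python
# Centring removes the arbitrary zero-point of a difference scale,
# so two scalings shifted by a constant become identical.
def centre(measures):
    m = sum(measures) / len(measures)
    return [x - m for x in measures]

reference = [-1.95, -0.80, 0.10, 0.90, 1.71]   # illustrative measures
replication = [x + 0.4 for x in reference]      # same scaling, shifted
assert all(abs(a - b) < 1e-9
           for a, b in zip(centre(reference), centre(replication)))
```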
Frequency Distributions of Projects 1.1.2 and 1.1.3 Patients on the "Proximity-to-Psychosis" Dimension
The purpose of locating items on a Rasch scale and, thus, of an implicit proof of a homogeneous scale is to determine the degree of manifestation of the attribute in question (i.e., to measure it). The person parameter (in general, "ability"; here, "proximity to psychosis onset") is determined simply by a raw score-the number of items positively answered. This is a formal attribute of the model, resulting from the fact that Rasch scalability is given, and it is not correct to assume it a priori. Provided that the additive sum score carries all the information about a person's "ability," the relevant question is not which symptoms, but only how many symptoms have occurred. Given that items have a defined item order and the data structure conforms to the assumptions of the Rasch model, it is rather unlikely that unexpected symptom patterns-for example, a person answers all the difficult items correctly, but fails in the simple items-will be produced, because items not conforming to the model assumptions will be eliminated during scale construction. Figure 2 shows the frequency distributions of the patients at the "early" and the "late" prodromal stage as based on the intervention criteria of the GRNS early-intervention studies. The majority of the late prodromal group (Project 1.1.3) is located to the right of the zero-point, whereas the majority of the patients located to the left of the zero-point are at the early prodromal stage (Project 1.1.2). If the two groups, early and late in the prodrome, are determined using the Rasch scale, then the picture will be somewhat different: Probably the group to the left of the zero-point will show a lower proximity to psychosis onset and the group to the right of it (or some other cutoff) a greater proximity to psychosis onset.
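That the raw score carries all the information about a person's measure can be made concrete: given fixed item difficulties, the person parameter solving the Rasch likelihood equation depends only on how many symptoms were endorsed, not which ones. A sketch using bisection, with illustrative item difficulties (valid for non-extreme raw scores only):

```python
import math

def theta_from_raw_score(raw, difficulties, lo=-5.0, hi=5.0):
    """Person measure whose expected Rasch score equals the observed
    raw score, found by bisection (non-extreme raw scores only)."""
    def expected(theta):
        return sum(1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties)
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if expected(mid) < raw:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

b = [-1.5, -0.5, 0.0, 0.5, 1.5]  # illustrative item difficulties
# More endorsed symptoms yield a greater proximity measure:
assert theta_from_raw_score(2, b) < theta_from_raw_score(4, b)
```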
Discussion
When psychiatric scales are being developed, the focus is usually on the content and concepts to be assessed, while formal-methodological considerations are sometimes being neglected. For example, a depression scale has to adequately consider the diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; American Psychiatric Association [APA], 2013) or ICD-10 (WHO, 2010) instead of defining a homogeneous scale to measure the severity of depression on an interval scale. This was also the case when we were selecting the symptoms for our CL. The 17 CL items were chosen on the basis of expert judgments, a comprehensive analysis of the literature, and some empirical evidence. The primary aim of designing the CL was not to obtain a unidimensional measure, but a screening method for identifying increased psychosis risk. Psychosis development can be understood as a process which starts with fairly non-specific symptoms followed by more specific ones, until, finally, a psychotic transition occurs. This event is preceded by mild symptoms, for example, APS or BLIPS, which in the end grow to full-blown psychotic symptoms in severity and duration. This description is consistent with the empirical results from the Mannheim Age, Beginning, Course (ABC) schizophrenia study (Häfner et al., 1995). The results have led to a cumulative model of how psychosis evolves, including a typical temporal sequence of symptoms. Implicit in that process is that the onset of initial symptoms is followed by additional symptoms at later stages of the process, while the earlier symptoms tend to persist. Different stages of disease development are characterized by typical sets of symptoms. In an at-risk sample, symptom frequency can be seen as indicating the symptom sequence. A higher frequency of particular symptoms means that they must have occurred at an earlier stage, too.
If this assumption is correct, the two-parameter Rasch model should also be applicable to the CL data, and it should be possible to measure "proximity to psychosis onset" on a unidimensional scale using binary items. A remarkable advantage of our analyses was the availability of several independent sets of CL data. The scaling procedure was not only applied to the complete GRNS CL data but also used to replicate and cross-validate the item measures. A replication of the Rasch scale on independent data sets and a test of the conformity of symptom measures by correlation analysis yielded high correlations. Similar values from comparisons of item measures, restricted by admissible transformations in a graphical test, and high correlations of item measures confirmed the validity of this unidimensional construct and its measurability by the CL.
However, there are a few limitations. Although the reliability of the item measures was near to 1, the reliability of the person measures was not very high. One reason might be that the Rasch model is not applicable to a subgroup of patients with an acute onset of psychotic symptoms. Earlier statistical analyses of Interview for the Retrospective Assessment of the Onset of Schizophrenia (IRAOS) data have shown that this was the case in about 8% of the ABC first-episode sample. Another limitation might be the narrow range of measurement between about −1 and +2 logits covered by the CL symptoms and the fact that the symptom measures are not equally distributed over this range. Nevertheless, the Rasch model enabled us to locate probands on the scale between −2.5 and +2.5. Another unsatisfactory result was that the symptoms which turned out not to be model-conform were not always identical in the different data sets. However, we did not exclude these items from the CL, allowing for a certain degree of item deviation from model conformity.
A final point to be raised pertains to the study design: It did not permit us to proceed from a large number of symptoms in a rich item pool and to select by means of Rasch scaling the most appropriate ones with the highest discriminative power and covering all levels of item difficulties. This surely is a weakness. The CL items were chosen on the basis of expert knowledge, available literature and to some degree also of empirical studies on how schizophrenia develops. The GRNS study produced data on symptom frequency. Only a post hoc testing of whether the CL fulfills the formal conditions of the Rasch model was feasible.
Despite these limitations, it can be concluded that the Rasch scaling of the CL was successful. As the CL symptoms were consistent with the assumptions of the Rasch model, proximity to psychosis onset could be assessed and the process of stepwise disease development over time from less severe to more severe stages of the prodrome could be demonstrated. During this process, psychosis risk kept increasing, as the individuals moved along the "proximityto-psychosis" dimension. Overall, the successful Rasch scaling of the CL can be seen as a clear indication of the validity of this screening instrument.
Summary
To sum up, Rasch scaling has been successfully applied to a wide range of topics in psychiatry. It enabled us to depict proximity to psychosis onset and, hence, the stage of the initial prodrome on a measurement dimension. The CL symptoms are ordered along this dimension according to the results of the Rasch scaling, and their order indicates the degree of psychosis proximity. The use of modern probabilistic test models is to be welcomed. They promise
• more economical test applications (in terms of time and costs),
• a better theoretical foundation for the measurement and measurability of the construct under consideration,
• the validation of unidimensional constructs and an item selection based on objective methodological criteria.
They also allow a more adequate interpretation of people's responses to test items and may also contribute to improving the quality of psychiatric data.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Influence of different shades and LED irradiance on the degree of conversion of composite resins
The aim of this study was to evaluate the degree of conversion (DC) of two composite resins with different shades that were light cured by light-emitting diodes (LEDs) of different irradiances. Specimens (5 mm × 2 mm) were prepared with a nanofilled (Filtek Supreme – A2E, A2D, and WE) or microhybrid resin (Opallis – A2E, A2D, and EBleach Low) and were randomly divided into 12 groups (n = 5 each) according to the composite resin and light-curing unit (Elipar FreeLight 2, 1250 mW/cm2; Ultralume 5, 850 mW/cm2). After 24 h, the DC was measured on two surfaces (top and bottom) with Fourier Transform infrared spectroscopy (FTIR). Data were statistically analyzed with two-way ANOVA and Tukey test (α = 0.05). Statistical differences among the surfaces were observed in all experimental conditions, with higher values on the top surface. The microhybrid resin presented the highest DCs for shades A2E and A2D on the top surface. The LED with higher irradiance promoted better DCs. Taken together, the data indicate that the shade of a composite resin and the irradiance of the light source affect the monomeric conversion of the restorative material.

Descriptors: Composite Resins; Polymerization; Curing Lights, Dental.

Declaration of Interests: The authors certify that they have no commercial or associative interest that represents a conflict of interest in connection with the manuscript.

Table 1 – Composition of the composite resins used in the present study.
• Filtek Supreme XT: Bis-GMA, Bis-EMA, UDMA, TEGDMA; filler: 59.5% by Wt, silane-treated ceramic (65–75% by Wt) and silane-treated silica (5–15% by Wt); size: 20 and 75 nm. Manufacturer: 3M ESPE, St. Paul, Min., USA.
• Opallis: Bis-GMA, Bis-EMA, TEGDMA; filler: 57% by Wt, silane-treated ceramic (65–75% by Wt) and silane-treated silica (5–10% by Wt); size: 3 μm. Manufacturer: FGM Produtos Odontológicos Ltda, Joinville, SC, Brazil.
Abbreviations: Bis-GMA (Bisphenol A Diglycidyl Ether Methacrylate), Bis-EMA (Bisphenol A Polyethylene Glycol Diether Dimethacrylate), TEGDMA (Triethylene Glycol Dimethacrylate), UDMA (Diurethane Dimethacrylate).

Table 2 – Experimental groups according to the light-curing unit, type, and shade of composite resin.
Introduction
To promote an adequate and natural appearance, composite resins are available in a variety of colors and shades; however, this characteristic can make the restorative procedure challenging for clinicians.1 Composite resin shades vary substantially among manufacturers.2 Despite having the same designation (e.g., A2 or B1), characteristics like color and translucency can differ according to the composite type, because the composition and inorganic content influence the optical properties of these materials.3 As light passes through a translucent material, it is dispersed, such that an object cannot be viewed clearly through the material. In other words, translucency is an intermediate state between complete opacity and transparency. Composite resins with different translucencies can present distinct behaviors with regard to the degree of conversion (DC), due to light dispersion in the materials.
Different light sources can be used to stimulate photoinitiators to begin free radical formation and start the curing process.4-6 Efficient curing depends on the light-curing irradiance and wavelength emitted,7,8 such that the spectrum irradiated by the device is higher than the absorption spectrum of the photoinitiator.9 Despite having different levels of irradiance than halogen lamps (QTH), light-emitting diodes (LEDs) present irradiance concentrated around 470 nm, coincident with the absorption peak of camphorquinone.8,10 In some situations, this characteristic favors the improved DC of dental adhesives.11 However, the influence of different irradiances on the DC of composite resins is questionable.
As manufacturers introduce more options for composite resins, with different shades and compositions to promote better performance and characteristics similar to those of the natural tooth, studies are necessary to evaluate the influence of these parameters on composite behavior.
The aim of this study was to evaluate the DC of composite resins (microhybrid and nanofilled) of different shades that were light cured by LEDs with distinct irradiances. The null hypothesis was that differences in DC would not be observed among the shades and types of composite resins, and that monomer conversion would not be influenced by the LEDs tested.
Methodology
Two composite resins were used in the current study:
• a nanofilled resin (Filtek Supreme XT) and
• a microhybrid resin (Opallis).
The commercial names, composition, and manufacturers of the resins used are listed in Table 1. Specimens were divided into groups (n = 5 each) according to the light-curing irradiance (Elipar Freelight 2 [EF], 3M ESPE, St. Paul, USA, 1250 mW/cm2; Ultralume 5 [UL], Ultradent Products Inc., South Jordan, USA, 850 mW/cm2) and shades (Supreme – A2E, A2D, and WE; Opallis – A2E, A2D, and EBleach Low). The experimental groups are listed in Table 2. The wavelength spectra and peak emissions of the tested LEDs are presented in Figure 1.
Specimens were prepared with a metallic matrix (5 mm ∅ and 2 mm height). Composite resin was inserted in a single increment. A mylar strip and 500 g weight were placed over the mold and left for 20 s, to allow for better accommodation of the composite. Specimens were light cured according to the manufacturers' instructions (20 s with Filtek Supreme A2E/A2D/WE and Opallis A2E/EBleach Low; or 40 s with Opallis A2D) with the light-curing unit corresponding to each group. Specimens were stored for 24 h at 37 °C under light-protected and dry conditions. After this period, both the top and bottom surfaces were polished with abrasive papers of decreasing grit (400, 600, and 1200; Buehler Ltd., Lake Bluff, USA) to analyze monomer conversion.

Figure 1 – Light spectrum profiles emitted by the light-curing units. Elipar Freelight 2 presented an emission spectrum between 415 and 520 nm. Ultralume 5 presented a spectrum similar to that of Elipar Freelight 2; however, its wavelength spectrum was extended, with initiation at 385 nm and a peak at 405 nm. Both light-curing units had a peak at 454 nm.
The DC of the composite resin was measured on the bottom and top surfaces of each specimen. Measurements were performed with Fourier Transform infrared spectroscopy (FTIR – Spectrum 100 Optica; PerkinElmer, USA), equipped with an attenuated total reflectance (ATR) device, in a room with controlled temperature and humidity. The DC (%) was evaluated in the absorbance mode with a baseline technique12 and was traced by the Spectrum program (Spectrum 100 Optica; PerkinElmer, USA). The DC was calculated with an equation considering the intensity of the C=C stretching vibration (peak height) at 1638 cm-1 and, as an internal standard, the symmetric ring stretching at 1608 cm-1.
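The DC equation referred to here follows the standard two-band baseline formulation: the aliphatic C=C absorbance (1638 cm-1) is normalized by the aromatic internal standard (1608 cm-1) in both the cured and uncured states before the ratios are compared. A sketch with hypothetical peak heights (not study data):

```python
def degree_of_conversion(aliph_cured, arom_cured, aliph_uncured, arom_uncured):
    """DC (%) from FTIR peak heights: the aliphatic C=C band
    (1638 cm-1) is normalised by the aromatic internal standard
    (1608 cm-1) before cured and uncured spectra are compared."""
    ratio_cured = aliph_cured / arom_cured
    ratio_uncured = aliph_uncured / arom_uncured
    return (1.0 - ratio_cured / ratio_uncured) * 100.0

# Hypothetical absorbance peak heights (arbitrary units):
dc = degree_of_conversion(0.30, 0.50, 0.80, 0.50)
assert abs(dc - 62.5) < 1e-9
```

Normalizing by the aromatic band cancels variations in specimen thickness and contact with the ATR crystal, which is why it serves as the internal standard.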
The DC results were statistically analyzed by split-plot two-way ANOVA and Tukey test (α = 0.05) with the SAS 9.1 software package (SAS Institute, Cary, USA).
Results
Table 3 shows the results obtained in the present study. A significant interaction was found between the variables "composite resin" and "light source" (p < 0.01). Specimen tops showed better DC values than specimen bottoms. The nanofilled composite resin presented lower DC values than the microhybrid resin in all situations, except for the A2E shade (bottom), which presented similar results. The high-irradiance LED (Elipar) showed better DC values than the third-generation LED (Ultralume 5).
Discussion
According to the current results, the resin shade and LED irradiance can affect the DC of the composite resin. Therefore, the null hypothesis of this study must be rejected.
The specimen bottoms (2 mm height) presented lower DC values than the tops. This result can be explained by the increased distance between the light-curing tip and the bottom of the composite, which attenuates light activation at the bottom surface and compromises the extent of curing.13,14 Similarly, Beun et al.15 showed a progressive reduction in DC depending on the specimen thickness when either QTH or LED was used.
The microhybrid resin promoted better conversion than the nanofilled resin. According to some studies, the DC is influenced by the organic content, because characteristics such as reactivity and monomer mobility are related to the formation of polymeric chains.16,17 The higher amount of filler present in the nanofilled resin could influence the results by promoting a lower DC.
Nanofilled composites are characterized by the wide distribution of nanosized filler particles. This characteristic improves the mechanical properties of the resin. Nevertheless, due to the higher amount of filler, a reduction in the organic matrix can be observed in the interfacial region between the particles. This reduction alters the DC, which is directly related to the concentration of monomer content.18 In addition, a reduced light intensity reaches the photoinitiator, due to the proximity of the fillers. As a result, the formation of free radicals (which are responsible for the curing process) is reduced.
No differences were observed among the nanofilled resins in terms of the shades on the top surfaces. However, EBleach Low (which was developed to restore bleached teeth) showed lower DC values with the microhybrid resins. It can be speculated that EBleach Low presents an alternative photoinitiator, because camphorquinone has a yellow coloration.19 Alternative initiators are activated at different wavelengths that are not provided by the LEDs used, due to the short spectrum of light that they emit, which compromises the curing efficacy of this material.20 Because manufacturers are usually reluctant to reveal the exact compositions of their products,21 it is not possible to determine whether this situation explains the current results completely.
Greater DC values and similar results were observed among the microhybrid resins on the specimen bottoms. However, the nanofilled resin A2D showed a reduced DC compared to the other shades. As described above, use of high amounts of filler can reduce the formation of free radicals and compromise the DC.18 The high amount of filler and reduced translucency of this resin likely interfered with the passage of light through the material and decreased the monomeric conversion. This problem was not observed with the microhybrid resin, probably because of the extended curing time applied to shade A2D. The manufacturer's instructions were followed rigidly, because manufacturers generally recommend the best way to use their materials to obtain optimal results. Nevertheless, use of the extended curing time could have promoted improved monomer conversion of the tested nanofilled resins.
Use of the high-irradiance LED (1250 mW/cm²) resulted in the best DC values. Despite having narrow wavelength spectra (between 415 and 520 nm), the LEDs tested presented a peak emission of around 454 nm (Figure 1). This peak emission is near the absorption peak of camphorquinone (470 nm), which was the photoinitiator used in the resins evaluated. Because the two LEDs displayed similar wavelength spectra, the high irradiance of Elipar FreeLight was most likely responsible for the obtained results. The third-generation LED used (Ultralume 5) has auxiliary LEDs, which expanded the emitted wavelength spectrum22 (Figure 1). However, due to the proximity of the light source to the specimen at the moment of curing, the advantage of the auxiliary LEDs was not fully realized. As a result, the optimal performance of this LED was restricted.
In conclusion, not only the light source but also the intrinsic properties of the resin (e.g., translucency, amount, type, and size of filler) influence material polymerization. To obtain the best DC when using resins with low translucency, an increase in curing time (mainly at the bottom of the increment) should be considered.
Conclusion
The DC of composite resin is influenced by the light source, with high-irradiance LEDs promoting better DC values. Specimen bottoms presented lower DC values in all situations tested, and the nanofilled resin presented lower mean DC values. The shade can influence the DC of the top of microhybrid resins and the bottom of nanofilled resins.
Table 1 - Composition of the composite resins used in the present study.
Table 2 - Experimental groups according to the light-curing unit, type, and shade of composite resin.
Table 3 footnote: Different letters indicate statistically significant differences (two-way ANOVA/Tukey's test, p < 0.05). Upper- and lower-case letters compare resins in columns and surfaces in lines, respectively. */@ indicate differences between the light-curing units.
Solutions for submucosal injection in endoscopic resection: a systematic review and meta-analysis
Background and aims: Submucosal injection is standard practice in endoscopic mucosal resection of gastrointestinal lesions. Several solutions are used. Our aim was to systematically review their efficacy and safety. Patients and methods: We performed a systematic review and meta-analysis using a random effects model of randomized controlled trials (RCTs) from MEDLINE. Studies in animal models were qualitatively assessed for efficacy and safety. Results: In total, 54 studies were qualitatively assessed. Eleven RCTs were analyzed, two of which were on endoscopic submucosal dissection (ESD). The quantitative synthesis included nine RCTs on endoscopic mucosal resection (EMR), comprising 792 subjects and 793 lesions. Mean lesion size was 20.9 mm (range 8.5 – 46 mm). A total of 209 lesions were randomized to sodium hyaluronate (SH) vs normal saline (NS), 72 to 50 % dextrose (D50) vs NS, 82 to D50 vs SH, 43 to succinylated gelatin, 25 to hydroxyethyl starch and 36 to fibrinogen. In total, 385 were randomized to NS as controls. NS and SH are the best studied solutions and seem to be equally effective in achieving complete resection (OR 1.09; 95 %CI 0.82, 1.45). No solution was proven to be superior in complete resection rate, post-polypectomy bleeding or coagulation syndrome/perforation incidence. Many solutions have been tested in animal studies and most seem more effective for mucosal elevation than NS. Conclusions: There are several solutions in clinical use and many more under research, but most are poorly studied. SH seems to be clinically equivalent to NS. There are no significant differences in post-polypectomy complications. Larger RCTs are needed to determine any small differences that may exist between solutions.
Introduction

* These authors contributed equally to the study.

Gastrointestinal tract cancer represents the leading cause of cancer death worldwide, with an estimated mortality over 1.75 million [1]. Early endoscopic detection and treatment of potentially curable cancers or precancerous lesions could potentially lead to a reduction of gastrointestinal cancer incidence and cancer-related mortality [2-5]. In the past decades, endoscopic resection therapies have gradually improved and gained more importance for premalignant lesions and noninvasive early cancers with a low risk of lymph node metastasis. The survival after endoscopic removal of an early cancer may be similar to that after surgical resection, providing the rationale for this approach [6,7]. Resection-based modalities consist of endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD). Injection-assisted EMR was first introduced in 1955 for rigid sigmoidoscopy [8] and then in 1973 for flexible colonoscopy [9]. In the following years, improvements in the EMR techniques, such as cap-assisted EMR and the ligation method, have been introduced [10,11], and nowadays EMR is a widely used and useful method to resect minimally invasive benign and early malignant lesions of the gastrointestinal tract [12]. However, despite its efficacy, this method is sometimes associated with local recurrence, especially when lesions larger than 40 mm are resected in a piecemeal fashion [13]. To overcome this limitation, ESD has been developed, allowing en bloc resection of superficial neoplasms and providing better histopathological diagnosis and decreased local recurrence rates [14-16]. Endoscopic resection techniques are aided by mucosal elevation through the injection of a solution into the submucosal space. This technique may reduce complications, such as perforation or
bleeding and improve the technical feasibility of the procedure. The volume of injected fluid is highly variable and depends on the size and location of the lesion, and repeated injections may be needed for complete removal. Several solutions have been used to lift the mucosal lesion, but the optimal solution is still a matter of debate. It is accepted that the "ideal" solution for submucosal injection should provide a thick submucosal fluid cushion, remain in the submucosal space long enough to safely allow EMR or ESD, and preserve tissue specimens and allow for precise pathologic staging. In this setting, normal saline (NS) has been the most widely used solution as it is simple to use and available at a low cost. However, the mucosal protrusion created by the submucosal injection of normal saline solution is only maintained for a short period of time. This may not have a significant impact on the removal of small lesions but, when performing longer procedures or resecting larger lesions, the need for repeated injections in order to maintain the cushion may become problematic and the risk of perforation may be higher. In order to overcome these limitations and to improve the technical feasibility of EMR and ESD, several solutions have been studied. Submucosal injection of glucose solution, glycerol, sodium hyaluronate (SH), colloids, hydroxypropyl methylcellulose, fibrinogen solution, autologous blood, and other alternatives have been investigated in different contexts. Nevertheless, these solutions are also associated with some caveats: they can be difficult to prepare or administer, available at a high cost or not readily available, or may be associated with toxicity.
In the past few years, several substances with different properties have been studied in ex vivo and in vivo studies. Among these, only a few have been evaluated in clinical trials. At the present time, no definitive proof of the superiority of any solution has been provided and there is no systematic review or meta-analysis on this topic.
Objectives
The primary objective of this review was to identify and evaluate the safety and effectiveness of the available solutions for submucosal injection in endoscopic mucosal resection techniques (polypectomy, mucosal resection, and submucosal dissection) in human patients. As secondary objectives, we aimed to evaluate the duration of the effect, and the local deleterious effects of the solutions on the submucosal tissue, including those studies performed on animals.
Methods
We performed a systematic review and a meta-analysis to evaluate the effectiveness and safety of existing solutions for submucosal injection in endoscopic mucosal resection or dissection. This review was registered on the International prospective register of systematic reviews, PROSPERO: CRD42014009577. We considered all published randomized controlled trials for the quantitative synthesis. We performed a separate analysis for ESD and EMR. For the overall qualitative synthesis, we included nonrandomized trials, and observational studies (cohort, case-control, case series and case reports) evaluating the safety and effectiveness of submucosal injection solutions, regardless of blinding and language.
For the primary outcome, we included studies with humans submitted to upper or lower gastrointestinal endoscopy. For the secondary outcomes, we also included animal studies (including ex vivo). We included procedures where polypectomy, EMR or ESD were performed after the injection of submucosal solutions had taken place, either in the esophagus, stomach, colon or rectum.
Primary outcome
Complete resection of the lesion: histological determination of en bloc lesion-free margins, or endoscopic determination of no residual lesion. Endoscopic determination included the lack of residual lesion as reported by the endoscopist (with or without chromoendoscopy), the inclusion of resection marks in the resected specimen, or a negative follow-up with tissue forceps biopsies from the resection site.
Secondary outcomes
Number of injections given; volume injected; duration of submucosal cushion; procedure time; endoscopic complications; residual lesion at follow-up; tissue injury.
Search strategy
We individually searched MEDLINE and included all studies published until March 2014. The electronic search was performed using the following key words: submucosal injection AND (endoscopic AND resection OR EMR OR ER OR mucosectomy OR endoscopic submucosal dissection OR ESD OR polypectom*) AND (solution* OR saline OR hyaluron* OR glycerol OR hypertonic OR fibrinogen OR epinephrine OR adrenaline OR dextrose OR blood OR gelatin OR jelly OR mannitol OR sodium alginate OR carboxymethylcellulose OR albumin OR succiny* OR indigo OR methylene) AND (complete resection OR R0 OR adverse event* OR complication* OR injection* OR volume OR duration).
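Boolean strategies like the one above are easier to audit and rerun when assembled programmatically. A minimal sketch (the helper function is ours; the keyword groups are copied from the strategy above):

```python
def build_query(*groups):
    """Join OR-groups of keywords into a single AND-ed boolean query."""
    return " AND ".join("(" + " OR ".join(g) + ")" for g in groups)

query = build_query(
    ["submucosal injection"],
    ["endoscopic AND resection", "EMR", "ER", "mucosectomy",
     "endoscopic submucosal dissection", "ESD", "polypectom*"],
    ["solution*", "saline", "hyaluron*", "glycerol", "hypertonic",
     "fibrinogen", "epinephrine", "adrenaline", "dextrose", "blood",
     "gelatin", "jelly", "mannitol", "sodium alginate",
     "carboxymethylcellulose", "albumin", "succiny*", "indigo", "methylene"],
    ["complete resection", "R0", "adverse event*", "complication*",
     "injection*", "volume", "duration"],
)
```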
Study selection
Two authors (AF, JM) independently scanned all titles and abstracts for relevance by electronic search. A third author (JT) intervened in case of disagreement.
Data extraction
Data extraction was performed independently by two authors (AF, JM) using a data extraction form to evaluate risk of bias according to the Cochrane Handbook for Systematic Reviews of Interventions. Studies were classified as high risk, low risk or unclear risk of bias. The end points were rate of complete resection (primary end point), number of submucosal injections, total volume (mL) used, duration of submucosal cushion (min), procedural time (min), rate of en bloc resection, incidence of endoscopic complications (perforation and bleeding), recurrence rate at follow-up and incidence of tissue injury or fibrosis.
Data synthesis
We provide a description of the findings, including a summary of each study's results by intervention. We performed the analysis in STATA 13 (Stata Corp., Texas, United States) and produced the flow diagram using Review Manager 5. We meta-analyzed the complete resection rate and the incidence of adverse events (bleeding and perforation) using both random-effects and fixed-effect meta-analyses, but we only report the random-effects meta-analyses, since the two methods concurred. We present odds ratios with 95 % confidence intervals. Heterogeneity was assessed using the I² statistic. We produced a summary of findings table, rating the quality of evidence of the primary outcome.
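The random-effects pooling used here can be sketched with the classic DerSimonian-Laird estimator. This is a generic illustration, not the authors' STATA analysis; the 2×2 tables at the end are illustrative counts (complete resections vs failures, treatment vs control):

```python
import math

def log_or(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table
    (a/b = events/non-events in arm 1, c/d in arm 2)."""
    return math.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

def dersimonian_laird(tables):
    """Pool 2x2 tables with a DerSimonian-Laird random-effects model.
    Returns (pooled OR, 95% CI low, 95% CI high, tau^2, I^2 %)."""
    ys, vs = zip(*(log_or(*t) for t in tables))
    w = [1 / v for v in vs]                    # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))  # Cochran's Q
    df = len(tables) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_re = [1 / (v + tau2) for v in vs]        # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (math.exp(y_re), math.exp(y_re - 1.96 * se),
            math.exp(y_re + 1.96 * se), tau2, i2)

# Illustrative 2x2 tables:
or_, lo, hi, tau2, i2 = dersimonian_laird([(74, 19, 63, 33), (30, 10, 25, 15)])
```

When between-study heterogeneity is absent (τ² = 0), the random-effects result collapses to the fixed-effect one, which is why the two methods "concurred" here.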
Results
The electronic search resulted in a total of 159 published manuscripts that were scanned based on the title and abstract; 105 did not meet the inclusion criteria. The remaining 54 were assessed for eligibility using the full text articles and 11 were initially included for quantitative analysis. The flow diagram is shown in Fig. 1, and the details of the studies are shown in Table 1.
Since there were only two studies on ESD, and these used different solutions (Mesna and SH) [17,18], a meta-analysis was not performed. In these studies, 53 lesions were randomized to Mesna (vs NS) and 33 to SH (vs NS). There were 88 lesions randomized as controls. In the Mesna RCT [17], Sumiyama and colleagues aimed to evaluate the procedural time with Mesna compared to NS for gastric epithelial lesions. There was no statistically significant difference in this outcome. There were no differences in other outcomes such as R0 resection rate and adverse events (bleeding and perforation). Kim et al. [18] designed an RCT to compare SH to NS with "clinical usefulness" (a combination of en bloc resection and the need for additional injection) as the primary outcome. They randomized 76 gastric lesions and demonstrated a significant effect of SH in increasing the usefulness rate (90.9 % vs 61.1 %; P = 0.004). The nine EMR studies were all two-arm RCTs; eight of them used NS as the control group and only one used SH as the control [19]. Three trials evaluated SH solutions [20-22], three trials evaluated D50 [19,23,24], and the others evaluated fibrinogen [25], hydroxyethyl starch (HES) [26], and succinylated gelatin (SG) [27]. The three studies that were excluded from the meta-analysis did not report the outcome of interest [28-30]. Quality assessment of the nine RCTs determined that six had a low risk of bias in the generation of the randomization sequence and allocation concealment; six had kept double blinding, while two studies failed to report adequate blinding of the subjects and personnel, and one reported no blinding. In the EMR studies, a total of 792 subjects and 793 lesions were included for analysis. The majority were male patients (56.7 %) and their mean age was 63.6 ± 3.9 years. Mean lesion size was 20.9 mm (range 8.5-46 mm). After pooling, 209 lesions were randomized to SH (vs NS), 72 to D50 (vs NS), 82 to D50 (vs SH), 43 to SG, 25 to HES and 36 to fibrinogen.
In total, 385 were randomized to NS as controls. Six studies were performed on colorectal lesions, one on gastric, and two using both gastric and colorectal lesions.
Complete resection rate
All nine studies included in the meta-analysis reported the resection efficacy and explicitly provided the complete resection rate (either by endoscopic evaluation or histological confirmation). The analysis results are shown as a forest plot in Fig. 2.
The results for SH suggest that there is no beneficial effect on the bleeding risk when using this agent.
Post-polypectomy coagulation syndrome/perforation rate
Only four studies reported the occurrence of perforations or coagulation syndrome. The results are shown in Fig. 4. There is only one RCT for each solution and none for SH. These studies were underpowered to detect significant differences in this specific outcome, but the pooled analyses seem to suggest that NS may be effective in preventing perforations and coagulation syndrome (Fig. 5).
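For rare outcomes such as perforation, trial arms with zero events are common and make the odds ratio and its confidence interval undefined. A widely used (if imperfect) workaround is the Haldane-Anscombe continuity correction, sketched below; the counts in the example are hypothetical, not taken from the included trials:

```python
import math

def odds_ratio_cc(a, b, c, d, correction=0.5):
    """Odds ratio with 95% CI from a 2x2 table (a/b = events/non-events
    in arm 1, c/d in arm 2). If any cell is zero, the Haldane-Anscombe
    correction adds `correction` to every cell so the estimate stays finite."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + correction for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of the log odds ratio
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# e.g. 0/40 perforations in one arm vs 2/38 in the other (hypothetical):
or_, lo, hi = odds_ratio_cc(0, 40, 2, 36)
```

The wide interval such corrections produce is one reason the review concludes that much larger samples would be needed for a precise effect estimate.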
Other secondary end points
Due to the lack of data and the heterogeneity of definitions, it was not possible to analyze the other proposed end points, such as number of submucosal injections, total volume (mL) used, duration of submucosal cushion (min), procedural time (min), rate of en bloc resection, recurrence rate at follow-up, and incidence of tissue injury or fibrosis.
Descriptive analysis
This section will evaluate the 54 studies included in the systematic review in order to assess the proposed outcomes. A summary of these studies is available in the Appendix. Sodium hyaluronate (SH) solution is widely used as an endoscopic submucosal injection material. It was first reported in animal models that the submucosal fluid cushion created by SH persists for longer periods of time than other available submucosal solutions [31-34]. Its efficacy in EMR and ESD was also reported in clinical practice. Using 0.4 % SH as a submucosal injection solution in endoscopic resection enabled an effective lifting of a colorectal intramucosal lesion, reducing the need for additional injections [22]. Fujishiro et al. [35] reported that a mixture of a high concentration of SH and glycerine had good results in ESD. SH was compared with NS in two randomized controlled trials that included patients with colorectal lesions < 20 mm managed with EMR. Yoshida et al. [20] concluded that EMR using 0.13 % SH applied to colon lesions of less than 20 mm diameter is more effective than NS for complete resection and maintenance of mucosal elevation, since complete resection was achieved in 74 of 93 lesions (79.5 %) in the SH group and 63 of 96 lesions (65.6 %) in the NS group (P < 0.05), and high mucosal elevation was maintained in 83.9 % of procedures in the SH group and 54.1 % in the NS group (P < 0.01). Kishihara et al. [21] also reported the superiority of NS solution for the ease of submucosal injection and snaring, with less variability (P < 0.05). Finally, SH was compared to NS in a randomized controlled trial with gastric lesions proposed for ESD, and it was shown that the usefulness rate and the volume of solution injected were significantly better in the 0.4 % SH group [18].
However, SH still faces some problems, namely its higher cost, requirement of an air-sealed container for storage, and the conflicting data concerning stimulation of tumor growth [36,37]. Sodium alginate is an inexpensive high viscosity solution. Eun et al. demonstrated that mucosa-elevating capacity was comparable between 1 % sodium alginate solution and 0.5 % SH solution [38]. It also showed greater elevation when compared to that created by NS solution [39]. In a clinical study, 0.4 % SH solution exhibited no significant difference in catheter injectability but significant superiority in mucosa-elevating capacity over 0.6 % sodium alginate solution, with no findings indicative of tissue injury. En bloc resection was achieved in all cases, no adverse events were observed, and no case showed recurrence [40]. Further investigation is needed on the usefulness of this material as a submucosal injection solution for endoscopic procedures. With regard to dextrose solution, in a prospective, uncontrolled clinical study, Katsinelos et al. [41] first investigated the effectiveness of EMR using a hypertonic dextrose plus epinephrine solution as a submucosal cushion agent for the resection of 59 large sessile colorectal polyps, showing that 23/59 (39 %) were resected en bloc and 36/59 (61 %) in a piecemeal fashion. Also, Varadarajulu et al. [24] compared D50 and NS for injection assisted resection of 52 sessile gastrointestinal lesions. Compared with NS, lower volumes (median 2 vs 1 mL; P = 0.03) were required. Even after completion of resection, submucosal elevation persisted in 36 % of the patients randomly assigned to D50 compared with 20 % of those randomized to NS (P < 0.001). There were no significant differences in the rates of complete resection. Later, Katsinelos et al. [23] performed a prospective, double-blind, randomized study that compared EMR of 92 sessile rectosigmoid lesions ( > 10 mm) using D50 plus epinephrine or NS plus epinephrine. 
Injected solution volumes and number of injections were lower in the D50 group (P = 0.033 and P = 0.028, respectively). Submucosal elevation had a longer duration in the D50 group (P = 0.043). This difference mainly concerned large (≥ 20 mm) and giant (> 40 mm) lesions. There were 6 cases versus 1 case of post-polypectomy syndrome in the D50 and NS groups, respectively (P = 0.01). Dextrose solution was also compared with SH [19] in an RCT including 174 patients. R0 resection was achieved in 59 of the 82 lesions (72 %) in the dextrose group and in 56 of the 81 lesions (69 %) in the SH group (P > 0.1). Nevertheless, Fujishiro et al. [33] showed that submucosal injection of 20 % dextrose in an animal model was associated with mucosal and muscle damage on the day of injection, with ulceration extending to the submucosal layer within a week after injection.
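Headline significance claims quoted in this section, such as Yoshida's 74/93 vs 63/96 complete resection rates, can be spot-checked with a simple two-proportion z-test. This is a standard-library sketch of the generic test, not the analysis the trials themselves used:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1/n1 + 1/n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Yoshida et al.: 74/93 (SH) vs 63/96 (NS) complete resections
z, p = two_proportion_z(74, 93, 63, 96)   # p comes out below 0.05
```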
Glycerol was first evaluated for mucosal elevation in porcine esophagus, showing a longer disappearance time when compared with NS [34], and later in EMR of colorectal laterally spreading tumors (LSTs) [42]. In this clinical study, particularly for nongranular laterally spreading tumors (LST-NGs) < 20 mm, the glycerol group had a higher en bloc resection rate than the NS group (P < 0.01); however, similar recurrence and complication rates were observed, and there was no difference in en bloc resection for LSTs ≥ 20 mm. Sodium carboxymethylcellulose is a water-soluble polymer derived from cellulose. In vitro, the submucosal injection of sodium carboxymethylcellulose solution was able to dissect by itself most of the mucosal layer from the muscular layer at a concentration above 2.0 %. In vivo, three specimens were resected with 2.5 % sodium carboxymethylcellulose without difficulty. There were no procedure-related complications, and histologic examination revealed no tissue damage [43]. Hydroxypropyl methylcellulose is a high viscosity agent that has been considered to be a good and low cost option readily available in the United States. Its superiority over NS solution in height and duration of mucosal elevation has been shown in animal studies [31,32]. Further studies are needed to clarify the real benefits of this synthetic agent. Photocrosslinkable chitosan in DMEM/F12 medium is a viscous solution that crosslinks under UV irradiation, resulting in an insoluble hydrogel. Photocrosslinkable chitosan hydrogel injection led to a longer lasting elevation with clearer margins compared with NS or SH solutions [44], and was useful when used in ESD [44]. Furthermore, photocrosslinkable chitosan hydrogel may contribute to the healing of artificial ulcers after EMR and ESD [45], which makes it a promising agent for endoscopic procedures; it should be evaluated in clinical trials after biocompatibility has been established.
Succinylated gelatin (SG) is a widely available, inexpensive, safe, colloidal solution that exerts an oncotic pressure comparable with that of human albumin, with a favorable safety profile. In an animal study [46], the mean EMR specimen dimension and surface area were significantly larger and the duration of mucosal elevation was significantly longer for SG (P = 0.005). Three perforations were recorded, two with SG and one with NS (P = 1.0). However, these perforations occurred in the proximal porcine colon, which is thinner than the distal porcine colon and the human colon. The clinical efficacy of SG was evaluated by Moss et al. in a randomized double-blind trial, conducted to compare the performance of EMR with SG or NS for sessile lesions of the colon sized ≥ 20 mm [27]. The "Sydney Resection Quotient" (defined as lesion size in millimeters divided by the number of pieces to resect) was significantly different between groups, favoring SG; fewer injections per lesion (P = 0.002), lower injection volume (P = 0.009), and shorter procedure duration (P = 0.006) were reported in the SG group. There was also a non-significant trend towards a higher en bloc resection rate with SG (30 % vs 15 %, P = 0.137). There were no perforations. Mesna (sodium-2-mercaptoethanesulfonate [C2H5NaO3S2]) is a mucolytic agent that acts by cleaving disulfide bonds in proteins, thereby breaking down the connective tissue between anatomical planes. A preliminary clinical study that used submucosal mesna injection for ESD demonstrated the feasibility and safety of the procedure [47]. In an animal study comparing it with NS, there were no differences between groups related to ESD procedure time and en bloc resection, but mesna injection was associated with a non-significant lower incidence of intraprocedural bleeding (P = 0.09) [48].
Recently, mesna solution was compared to NS in a randomized controlled trial and it showed that ESD time was not significantly different between groups, but multivariate analysis indicated that mesna reduced procedural challenges associated with submucosal dissection [17]. Autologous blood is readily available at low cost. Previous human and animal studies have demonstrated that autologous whole blood produced the longest durable cushion compared with standard agents [49]. The feasibility of EMR with blood submucosal injection was also reported with no complications [29,50]. Regarding tissue injury, a study has shown that blood produces less tissue injury (measured as hydrops and tears) than NS [29]. However, some potential problems need to be clarified, namely the fact that autologous blood could hamper the specialist's view during the procedure and the possibility for blood coagulation [51]. Other agents such as fibrinogen mixtures, poloxamers, and photocrosslinkable chitosan have been reported for EMR with great enthusiasm. Compared with SH, fibrinogen mixtures and poloxamer solutions are significantly less expensive but remain substantially more expensive than NS [25]. A study that included EMR of 35 early gastric neoplasms showed that, after an initial injection of fibrinogen mixture, additional submucosal injection was not required for any lesion. The rates of en bloc resection and complete resection were, respectively, 82.9 % and 88.6 %. The en bloc resection rate was significantly lower for lesions over 20 mm in diameter (60 % vs. 92 %; P < 0.05) and for lesions on the lesser curvature or posterior wall of the stomach compared with those on the greater curvature or anterior wall (55.6 % vs. 92.3 %; P < 0.05). During follow-up, recurrence was noted in only one patient in whom the lesion had been resected piecemeal [52]. Later, the clinical efficacy of the fibrinogen mixture was evaluated in a RCT, comparing it with NS in EMR of early gastric neoplasms [25]. 
This study did not show differences between the two groups in the rates of en bloc resection and recurrence, but mean procedure time was significantly shorter in the fibrinogen group and additional submucosal injection to maintain elevation of the lesion was less frequently required in the fibrinogen group (P < 0.05). In addition, the use of fibrinogen mixtures for endoscopic resections still needs to be critically considered with regard to their potential to transfer infections. The poloxamer solution PS137-25 was studied in porcine models, comparing it with NS and hydroxypropyl methylcellulose [53], showing a greater height of the initial mucosal elevation and longer mucosal elevation. Five EMRs were successfully performed after one injection of PS137-25, with no thermal injury or perforations. Recently, other alternatives have been presented. A novel injectable drug eluting elastomeric biodegradable polymer (iDEEP) was developed to overcome the limitations of previous solutions, using both viscosity and gel formation through redox initiated crosslinking [54], and showing more durable cushions than those formed with NS and SH. Carbon dioxide (CO2) was also tested as an injection agent. Uraoka et al. [55] performed an animal study that showed the safety and efficacy of CO2 as a satisfactory submucosal injection agent during ESD, the submucosal elevation created by CO2 lasting longer than with either NS or sodium hyaluronate (P < 0.001). Creating and maintaining a CO2 submucosal cushion of sufficient elevation was achieved combined with partial physical dissection of the submucosal layer, followed by complete endoscopic dissection of the CO2 submucosal layer with ESD, resulting in successful en bloc resection with no complications.
Cook Medical's (Bloomington, IN, United States) submucosal lifting gel consists of a proprietary combination of known biocompatible components that appears to be a promising safe and effective substance for submucosal injection. In an animal study, every injection resulted in adequate mucosal lifting, with no evidence of perforation, bleeding, gel extravasation through the serosal surface, or damage to surrounding tissue or organs [56].
Discussion
EMR and ESD are minimally invasive endoscopic procedures now accepted worldwide as a treatment modality for the removal of dysplastic and early malignant lesions limited to the superficial layers of the gastrointestinal tract [6,7]. Endoscopic resection techniques are aided by mucosal elevation through the injection of a solution into the submucosal space in order to reduce complications. In this study, we tried to identify the best solution to use to lift the mucosal lesion. Our primary outcome was to evaluate complete resection of the lesion. All studies included in the meta-analysis [19-27] provided the complete resection rate. SH is the best studied solution, being compared with NS in three RCTs [20-22]. The remaining solutions, namely fibrinogen mixture [25], hydroxyethyl starch [26], and succinylated gelatin [27], were only studied in one RCT each. Our study shows that the available evidence does not allow a robust conclusion to be drawn on the solution's effect on resection rate (OR 1.07; 95 %CI 0.88, 1.29) and, particularly, there is no difference between SH and NS (OR 1.09; 95 %CI 0.82, 1.45) (Fig. 3). Regarding the complications, bleeding rate was reported in all studies, but the definition of bleeding differed across studies. We found that no single solution was shown to be more effective in decreasing the post-polypectomy bleeding rate, but HES, SG, and fibrinogen showed a non-significant favorable trend against NS. The post-polypectomy coagulation syndrome/perforation rate was evaluated in four studies [19,23,26,27]. From the analysis, we infer that NS may have a beneficial effect in preventing perforations and coagulation syndrome (Fig. 5), with an OR (95 %CI) of 0.27 (0.06, 1.19), especially when compared to HES (OR 0.15; 95 %CI 0.007, 3.03) and D50 (OR 0.16; 95 %CI 0.02, 1.38). However, these are rare events and a much larger sample size would be needed to determine a more precise effect estimate.
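The pooled odds ratios quoted above can be reproduced mechanically from study-level 2x2 tables. As an illustration only (the trial counts below are hypothetical, not the data of this review), a fixed-effect inverse-variance pooling of study-level odds ratios can be sketched as:

```python
import math

def pooled_or(studies):
    """Fixed-effect inverse-variance pooled odds ratio.

    studies: list of (a, b, c, d) 2x2 counts, where a/b are
    events/non-events in the treatment arm and c/d in the control arm.
    Returns (OR, ci_low, ci_high) on the odds-ratio scale.
    """
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d  # variance of the log-OR
        w = 1.0 / var                        # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical counts for two studies, not the trial data from this review:
or_, lo, hi = pooled_or([(30, 10, 25, 15), (20, 20, 18, 22)])
```

A random-effects model (e.g., DerSimonian-Laird) would widen the interval when between-study heterogeneity is present.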
In the descriptive analysis section, we analyzed several solutions with different properties. Many solutions have been tested in animal studies and most seem more effective for mucosal elevation than NS, without significant differences in complication rates. We highlight that the superiority of these solutions must be evaluated in RCTs. According to our results, no solution was proven to be superior in complete resection rate, post-polypectomy bleeding, or coagulation syndrome/perforation incidence. We emphasize the need for continuing research in this topic.
Potential biases and limitations
Our conclusions are limited by the small number of published RCTs and because there are several solutions being evaluated and different control groups. There is a potential bias in the analysis as many studies were not clear as to whether they report the intention-to-treat (ITT) or the per protocol analysis. Also, two of the RCTs were not adequately blinded. The studies include lesions in the stomach, the colon, and the rectum, and the effect of the submucosal injection may be different according to the anatomical site. In addition, the size of the lesions was quite different between studies, ranging from 8.5 mm to 46 mm (EMR studies), and this represents a heterogeneous sample to pool. We chose to consider complete resection as either endoscopic or histologically assessed in the original studies even though they may not be perfectly correlated. In the adverse event reporting, there was also a wide range of definitions for post-polypectomy bleeding, and some of the studies reported immediate and/or delayed bleeding rates, while we counted the totals.
In summary, there are many solutions being commonly used for submucosal injection and many more under research. There is a lack of high quality evidence. According to the present meta-analysis, it is not possible to select one solution over the others by considering complete resection rates and procedural safety. There was a trend towards a higher risk of bleeding and a lower risk of perforation/post-polypectomy syndrome with NS. More trials may be needed to select the best solution. At the moment, RCTs should use NS as the control group.

[Table: key findings of the animal (in vivo) studies of submucosal injection agents]
- The median duration of mucosal elevation was longer with HPMC/dextran, HES, and SH than with NS (P < 0.05); there were no significant differences between SH, HPMC/dextran, and HES (P > 0.05).
- The initial mucosal elevation was higher, and was maintained longer, with all tested SH concentrations (0.13 %, 0.2 %, and 0.4 %) than with NS (P < 0.05).
- There was no significant difference in the height of mucosal elevation between SA and SH, and both maintained a higher mucosal elevation between 5 and 30 minutes than the other solutions (P < 0.05).
- Injection of SA achieved a clear separation of the mucosal layer from the proper muscle layer; histological examination of EMR-induced artificial ulcers revealed no apparent tissue damage and a normal healing process.
- A significantly longer duration of elevation was obtained after injection of hydroxyethyl starch, 0.25 % and 0.5 % hyaluronic acid, serum, and plasma; however, whole blood generated a longer-lasting mucosal elevation than all other agents (P < 0.05).
- The submucosal lifting gel appears to be a safe injectate that provides a cushion lasting longer than other available injectates for EMR and ESD; every injection resulted in adequate mucosal lifting with a shoulder and defined margin, and necropsy demonstrated no evidence of perforation, bleeding, or gel extravasation.
- Hydrops was more extensive in the NS group than in the plasma (P = 0.011) and whole blood (P < 0.001) groups, whereas effective submucosal tearing was greater with plasma (P = 0.008) and blood (P < 0.001) than with NS; the in vivo animal and human studies suggest that whole blood or plasma may outperform NS owing to their lifting ability, lesser tissue damage, and effective submucosal blunt dissection.
Accuracies of Soil Moisture Estimations Using a Semi-Empirical Model over Bare Soil Agricultural Croplands from Sentinel-1 SAR Data
This study describes a semi-empirical model developed to estimate volumetric soil moisture (θv) in bare soils during the dry season (March-May) using C-band (5.42 GHz) synthetic aperture radar (SAR) imagery acquired from the Sentinel-1 European satellite platform at a 20 m spatial resolution. The semi-empirical model was developed using backscatter coefficients (σ°, dB) and in situ soil moisture collected from Siruguppa taluk (sub-district) in the Karnataka state of India. The backscatter coefficients σ°VV and σ°VH were extracted from SAR images at 62 geo-referenced locations where ground sampling and volumetric soil moisture were measured at a 10 cm (0-10 cm) depth using a soil core sampler and a standard gravimetric method during the dry months (March-May) of 2017 and 2018. A linear equation combining σ°VV and σ°VH was proposed to estimate soil moisture. Both localized and generalized linear models were derived. Thirty-nine localized linear models were obtained from the 13 Sentinel-1 images used in this study, considering each polarimetric channel, co-polarization (VV) and cross-polarization (VH), separately, as well as their linear combination VV + VH. Furthermore, nine generalized linear models were derived using all the Sentinel-1 images acquired in 2017 and 2018: three generalized models were derived by combining the two years (2017 and 2018) for each polarimetric channel, and three more models were derived for the linear combination of σ°VV and σ°VH. The above set of equations was validated, and the Root Mean Square Error (RMSE) was 0.030 for 2017, 0.030 for 2018, and 0.02 for the combined years of 2017 and 2018. Both localized and generalized models were compared with in situ data, and both revealed that the linear combination σ°VV + σ°VH gave a significantly higher R² than the individual polarimetric channels.
This study describes a semi-empirical model developed to estimate volumetric soil moisture ( v θ ) in bare soils during the dry season (March–May) using C-band (5.42 GHz) synthetic aperture radar (SAR) imagery acquired from the Sentinel-1 European satellite platform at a 20 m spatial resolution. The semi-empirical model was developed using backscatter coefficient (σ° dB) and in situ soil moisture collected from Siruguppa taluk (sub-district) in the Karnataka state of India. The backscatter coefficients 0 VV σ and 0 VH σ were extracted from SAR images at 62 geo-referenced locations where ground sampling and volumetric soil moisture were measured at a 10 cm (0–10 cm) depth using a soil core sampler and a standard gravimetric method during the dry months (March–May) of 2017 and 2018. A linear equation was proposed by combining 0 VV σ and 0 VH σ to estimate soil moisture. Both localized and generalized linear models were derived. Thirty-nine localized linear models were obtained using the 13 Sentinel-1 images used in this study, considering each polarimetric channel Co-Polarization (VV) and Cross-Polarization(VH) separately, and also their linear combination of VV + VH. Furthermore, nine generalized linear models were derived using all the Sentinel-1 images acquired in 2017 and 2018; three generalized models were derived by combining the two years (2017 and 2018) for each polarimetric channel; and three more models were derived for the linear combination of 0 VV σ and 0 VH σ . The above set of equations were validated and the Root Mean Square Error (RMSE) was 0.030 and 0.030 for 2017 and 2018, respectively, and 0.02 for the combined years of 2017 and 2018. Both localized and generalized models were compared with in situ data. Both kind of models revealed that the linear combination of 0 VV σ + 0 VH σ showed a significantly higher R2 than the individual polarimetric channels.
Thirty-nine localized linear models were obtained using the 13 Sentinel-1 images used in this study, considering each polarimetric channel Co-Polarization (VV) and Cross-Polarization(VH) separately, and also their linear combination of VV + VH. Furthermore, nine generalized linear models were derived using all the Sentinel-1 images acquired in 2017 and 2018; three generalized models were derived by combining the two years (2017 and 2018) for each polarimetric channel; and three more models were derived for the linear combination of 0
Introduction
Soil moisture estimation across space and time has become possible with the advent of microwave remote sensing [1]. The amount of moisture in the soil is a function of physical, chemical, and management practices. Soil moisture is highly dynamic across space and correlated in time. The radar backscattering coefficient is a function of soil characteristics such as dielectric constant, texture, and surface roughness, and depends on the wavelength, polarization, and angle of incidence of the radar [1]. Shorter wavelength C-band radar backscatter has shown sensitivity to surface soil moisture at a depth of about 5 cm [2][3][4]. The launch of the Sentinel-1 mission of the European Space Agency has made a huge amount of C-band data acquired since 2014 from all over the Earth's surface accessible. This opened up new perspectives on studying soil moisture in semi-arid regions, as was undertaken in Karnataka, India, in this work. Large scale soil moisture monitoring will provide greater insights into energy fluxes, which can result in improved meteorological and climatic projections [5] that will provide critical inputs for agriculture.
There have been studies based on physical, empirical, and semi-empirical models that estimate soil moisture over bare soils through radar remote sensing [6-8]. Physical approaches require many input parameters, such as surface roughness and slope, which are not available under practical conditions [8]. Empirical models are only data driven, whereas semi-empirical models, while being data driven, also incorporate theoretical considerations. In soil studies, they are site-specific and generally valid for specific soil characteristics [3]. Previous semi-empirical studies have considered single polarization to build a relationship between soil moisture and a backscatter model at 10 cm depth [9] and estimated θv with a root mean square error (RMSE) of 3-6 % [10-12] using C-band data. There have also been studies that have used the SAR interferometry technique and Sentinel-1 data to estimate soil moisture and compare the results with in situ measurements [13]. Even though SAR interferometry is less frequently used in the remote sensing community to estimate soil moisture, its advantage lies in its ability to disentangle moisture and terrain roughness contributions. Most SAR-based soil moisture estimation studies have covered small areas limited to a few hundred square kilometers [11-17]. Estimating soil moisture over a wider area and at a higher resolution using SAR imagery will provide information for managing water resources and irrigation scheduling that can benefit a large number of farmers [14].
The aim of this study was to estimate soil moisture in bare rice agricultural soils. While SAR images have been used to estimate rice phenology using X-band TerraSAR-X images [15], there have been limited studies estimating the soil moisture of bare rice agricultural soils using Sentinel-1 C-band images. Bare soils in Siruguppa are rice growing areas that lie bare after the rice crop has been harvested in March, with rice stubble and weeds that dry up during summer (March-June). By the time the monsoon rains start, it is extremely critical to estimate the amount of soil moisture in the top 10 cm, which helps farmers decide when to start preparing the land and sowing the next crop. Surface roughness, soil status, soil moisture, and crop residue distribution all affect radar backscatter [16]. It is well established that σ°VV is more sensitive to variation in soils and σ°VH is more suited to the identification of dry crop residue [17]. Utilizing both together can improve the accuracy of soil moisture estimates [18]. Nevertheless, soil moisture studies using σ°VV and σ°VH together, especially with Sentinel-1 SAR data, are limited, and such studies over significantly large agricultural fields are very important for studying agriculture, water, and food security. The major goal of this study was to estimate soil moisture over bare soils using both σ°VV and σ°VH polarizations and compare the estimates with in situ measurements at a 10 cm (0-10 cm) depth. At the time of measurement, soil moisture to 10 cm is at a steady state and consistent across the top surface layer, and therefore the C-band can be assumed to represent the top 10 cm layer, even though C-band SAR signals cannot themselves penetrate to a 10 cm depth.
The contribution of standing stubble to total backscattering coefficient is comparable with that of the soil surface when the stubble has more than 75% water content. Backscatter coefficient decreases with a decrease in water content in the stubble. However, when the water content in the stubble is less than 40%, the contribution to the total backscattering coefficient is negligible [19]. We investigated both localized and generalized linear models to try to disentangle the stubble and soil moisture contributions. The linear coefficients of localized models were derived using in situ data acquired on a specific Sentinel-1 day. In contrast, generalized models were built using all in situ measurements acquired in the study period, thus adding the temporal dimension to the analysis of Sentinel-1 data. The question we wanted to answer is: can semi-empirical models estimate soil moisture, getting rid of the stubble contribution to the backscattering coefficient? We tried to answer this question by studying the effects of each variable, time, and polarization, separately. A localized model does not take into account the temporal evolution of backscattering, while a generalized model includes the time variable when estimating the linear coefficients. Furthermore, for each model, it is possible to keep the polarimetric channels separated or merge them. In this work, we used a large dataset of in situ measurements of soil-moisture acquired across a 2-year period to answer the above question. The issues of the stability of results and of collinearity of data are crucial and will be used to assess the results of this experiment.
The rest of this paper is organized as follows: Section 2 is devoted to materials and methods, Section 3 to the results, and Section 4 presents the discussion. Finally, a few conclusions are drawn in Section 5.
Study Area
The study was conducted in Siruguppa taluk (sub-district) in the Bellary district of Karnataka state, India ( Figure 1). Siruguppa is located between 15.35°N to 15.83°N latitudes and 76.69°E to 76.71°E longitudes covering an area of 1048 sq. km. Its climate is moderate and dry most of the year. It experiences high temperatures ranging from 23.2 °C to 42.4 °C from March to May and an annual rainfall of 645 mm. Irrigation from canal discharges cater to 60% of the cropped area, and the rest is either rainfed or irrigated through groundwater. Most of the crops are grown in predominantly black-clay, red-loamy, and red-sandy soils. The River Tungabhadra runs diagonally across Siruguppa from the northwest, providing water for irrigation. The major crops grown are paddy, sorghum, pearl millet, sunflower, groundnut, cotton and sugarcane. The last decade saw a fall in kharif (rainy season) crop production due to deficit rainfall during the monsoon in some places in the taluk, leading to a shift from paddy and millets to cash crops such as cotton and sugarcane. The Deccan Plateau region is frequently prone to drought, making information on soil moisture critical for allocating water resources and scheduling irrigation. The date of sowing is a critical decision farmers make after the initial rainfall has occurred. This is done based on traditional knowledge and the physical assessment of soil moisture by hand or using a push probe. A scientific estimation of soil moisture can help farmers to decide the sowing date. This study was conducted on "bare agriculture fields" of Siruguppa to estimate soil moisture using radar remote sensing.
Soil Sampling and Ground Data Collection
The soils of Siruguppa are classified into Vertisols (covering 720.9 km 2 ), Aridisols (146.8 km 2 ), Inceptisols (65.1 km 2 ), Alfisols (34.1 km 2 ), and other land cover such as rock outcrops (21.5 km 2 ). The locations for soil sample collection were based on random sampling, taking into account the fractions of different soil types. This mitigates the effects of variation from sampling error and increases the precision of the measured variable [20]. Soil samples were collected using a 10 cm standard metallic cylinder for a soil type to account for vertical and horizontal homogeneity [21], and weighed on site using a Mettler Toledo electronic balance. A handheld GPS (Garmin etrex) was used to georeference the locations immediately with an average accuracy of 2.5 meters as we collected it after a good almanac was received. Sixty-two locations were sampled spread across the four soil types. Forty-eight locations were sampled in Vertisols, eight in Inceptisols, four in Aridisols, and two in Alfisols. This was repeated for two years (2017 and 2018) over 13 dates of satellite overpasses, bringing the total data points to 806 ( Figure 1).
Bulk density (BD) samples were collected simultaneously using standard cylindrical cores on site to estimate volumetric soil moisture (θv). The sampling was carried out from March to May in bare agricultural soils with crop residue from paddy and weeds.
Laboratory Analysis
Volumetric soil moisture was measured in two steps. First, the gravimetric method was used to estimate soil moisture from field samples over bare agricultural land [22]. Global Positioning System (GPS) coordinates were taken at each sample location to allow the approximate identification of the soil sample location with the image pixel. The soil collected from the ground, after measuring its wet weight (θw), was placed in airtight polythene bags numbered with their corresponding GPS ID. The polythene bags were brought to the soil laboratory to measure their dry weight (θd) using a standard drying process: each sample was transferred to a microwave bowl and placed in the oven at 105 °C for 24 h, and the resulting weight was recorded as the dry weight. The following formula was used to estimate gravimetric soil moisture (θg):

θg = (θw − θd) / θd

The second step involved collecting the soil cores to estimate bulk density (BD). The drying process was repeated for each sample and the following formula was used to estimate BD:

BD = θd / V

where V is the volume of the core.
Volumetric soil moisture was then expressed as:

θv = θg × (BD / ρH2O)

where ρH2O is the water density.
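The two-step laboratory computation can be expressed directly in code. A minimal sketch, where the sample weights and core volume are hypothetical values, not measurements from this study:

```python
def gravimetric_moisture(wet_g, dry_g):
    """theta_g = (wet weight - dry weight) / dry weight (g water per g soil)."""
    return (wet_g - dry_g) / dry_g

def bulk_density(dry_g, core_volume_cm3):
    """BD = dry soil mass / core volume (g/cm^3)."""
    return dry_g / core_volume_cm3

def volumetric_moisture(theta_g, bd, water_density=1.0):
    """theta_v = theta_g * BD / rho_H2O (cm^3 water per cm^3 soil)."""
    return theta_g * bd / water_density

# Hypothetical sample: 120 g wet, 100 g dry, 80 cm^3 core
theta_g = gravimetric_moisture(120.0, 100.0)   # 0.20 g/g
bd = bulk_density(100.0, 80.0)                 # 1.25 g/cm^3
theta_v = volumetric_moisture(theta_g, bd)     # 0.25 cm^3/cm^3
```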
Data Collection and Pre-Processing
Thirteen Sentinel-1 images were used: six acquired between March 4, 2017 and May 27, 2017 and seven between March 11, 2018 and May 22, 2018 (see Table 1). The incidence angle varied from 30° to 35°, covering the study area in co-polarization (VV) and cross-polarization (VH). The frequency of image acquisition over India is very low, and a cycle of low and high numbers of acquisitions in alternating months was seen on the data portal (Table 1). Pre-processing of the SAR imagery was carried out using SNAP software developed by the European Space Agency (ESA). Radiometric calibration, thermal noise removal, and terrain correction (using the Range Doppler terrain correction operator) algorithms were applied to obtain the backscattering coefficient σ° in dB [23], and a Lee speckle filter was applied to reduce speckle noise. Sentinel-2 Level-1C imagery with less than 10 % cloud cover was downloaded for the years 2016 to 2018 and converted to Level 2A to obtain bottom-of-atmosphere reflectance using SNAP software provided by ESA under a GNU General Public License V3. The visible red and near-infrared (NIR) bands B4 and B8 were used to generate the normalized difference vegetation index (NDVI) to delineate the agricultural area.
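The NDVI computation from bands B4 (red) and B8 (NIR) is a simple per-pixel ratio. A minimal sketch with toy reflectance arrays standing in for actual Sentinel-2 data:

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
    eps guards against division by zero over dark pixels."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance values standing in for bands B4 (red) and B8 (NIR)
red = np.array([[0.10, 0.30], [0.05, 0.20]])
nir = np.array([[0.50, 0.35], [0.40, 0.22]])
index = ndvi(red, nir)
```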
Methodology
The study began with pre-processing of the Sentinel-1 C-band data (described in Section 2.3) to obtain σ° for both polarizations after applying appropriate corrections and speckle reduction. The in situ data collected during the field missions were used to extract σ°VV and σ°VH values in dB from the respective images of different dates (Table 1). The in situ data and σ° data were compiled to analyze and build a semi-empirical model. Agricultural land was delineated using bands B4 and B8 of a time series of Sentinel-2 images, from which the NDVI was calculated for each date with an available image during the season of each year. Random forest (RF) classification was applied to the set of nine NDVI images covering the study area, together with the training dataset. This is useful for masking out non-agricultural areas when visualizing soil moisture estimates. An evaluation of the semi-empirical model was conducted to assess the accuracy of the soil moisture estimates (Figure 2).
Semi-Empirical Model
A semi-empirical model was proposed to estimate soil moisture over bare soils in agricultural areas from the backscatter coefficient, based on a linear relationship. The linear equation captures the backscatter from bare soil, which is governed by soil moisture and surface roughness (including crop residue), and includes both VV and VH backscattering coefficients:

θv = A·σ°VV + B·σ°VH + T

where θv is the volumetric soil moisture; A, B, and T are empirical constants; and σ°VV and σ°VH are the VV and VH backscattering coefficients, respectively.
On bare soil, σ°VV and σ°VH are mainly influenced by soil moisture. Since the major crop in the study area is rice, crop residue remains on the ground as rice stubble. Rice stubble at 75 % water content also contributes to σ°VV, but this contribution decreases as the water content decreases and becomes negligible in both polarizations [19,24]. A linear combination including both polarizations was found to better estimate soil moisture from bare soil.
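Estimating the empirical constants A, B, and T amounts to an ordinary least-squares fit of the linear model described above. A minimal sketch on synthetic, noise-free data (the coefficient values are illustrative, not those reported in the paper):

```python
import numpy as np

def fit_semi_empirical(sigma_vv, sigma_vh, theta_v):
    """Fit theta_v = A*sigma_VV + B*sigma_VH + T by ordinary least squares."""
    X = np.column_stack([sigma_vv, sigma_vh, np.ones(len(sigma_vv))])
    coef, *_ = np.linalg.lstsq(X, theta_v, rcond=None)
    return coef  # (A, B, T)

# Synthetic backscatter (dB) generated from known, illustrative coefficients
rng = np.random.default_rng(0)
vv = rng.uniform(-18, -6, 50)
vh = rng.uniform(-26, -14, 50)
true_A, true_B, true_T = 0.010, 0.005, 0.45
theta = true_A * vv + true_B * vh + true_T
A, B, T = fit_semi_empirical(vv, vh, theta)
```

With noise-free data the fit recovers the generating coefficients exactly; with real field data the residuals drive the RMSE and RSE reported later.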
Delineation of Agricultural Fields
The estimation of soil moisture is more meaningful when linked to the purpose for which it is used. The ideal domain for use of such information is agricultural lands. Ideally, NDVI [25] is used to understand changes in crop phenology as the growing season progresses. Since the target class was only agricultural land, time series NDVI during the cropping season was best suited for the delineation using Sentinel-2 imagery. A set of nine NDVI images during the three crop seasons was used to estimate land cover using the RF algorithm [26]. The training dataset included land use in the soil sample locations (62). Additionally, 200 training samples were used: 100 from agricultural land and 100 from non-agricultural land. This product was used as a base for mapping soil moisture in agricultural lands.
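The paper applies a random forest classifier (e.g., as provided by scikit-learn) to the nine NDVI dates. As a dependency-light stand-in that conveys the same idea of separating agricultural from non-agricultural land by their NDVI temporal profiles, here is a nearest-centroid sketch with made-up three-date profiles:

```python
import numpy as np

def nearest_centroid_classify(train_X, train_y, X):
    """Assign each NDVI time-series profile to the class whose
    training centroid (mean profile) is closest in Euclidean distance."""
    classes = sorted(set(train_y))
    centroids = np.array([train_X[np.array(train_y) == c].mean(axis=0)
                          for c in classes])
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]

# Toy 3-date NDVI profiles: cropland greens up mid-season, bare land stays low
train_X = np.array([[0.20, 0.70, 0.40], [0.25, 0.65, 0.35],    # agriculture
                    [0.15, 0.18, 0.16], [0.12, 0.14, 0.13]])   # non-agriculture
train_y = ["ag", "ag", "non", "non"]
pred = nearest_centroid_classify(train_X, train_y, np.array([[0.22, 0.68, 0.38]]))
```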
Evaluation of Semi-Empirical Model
Basic information such as the maximum, minimum, and mean in situ soil moisture was generated (Table 2). Linear regression was used to understand the relationship between Sentinel-1 backscattering coefficients and in situ soil moisture data. The P value was used to assess significance: results with P ≤ 0.05 were considered significant and those with P ≥ 0.05 not significant. The RMSE of the modeled soil moisture was estimated using the equation:

RMSE = √( Σi (θv,obs,i − θv,pred,i)² / n )

To understand the contribution of each polarization, and of the sum of both polarizations, to the accuracy of the model, the residual standard error (RSE) of the estimated soil moisture was calculated as:

RSE = √( Σi (θv,obs,i − θv,pred,i)² / df )

where θv,obs is the observed soil moisture, θv,pred is the predicted soil moisture, n is the number of samples, and df is the degrees of freedom.
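The two error measures can be written out explicitly. A minimal sketch with hypothetical observed/predicted θv values, taking p = 2 predictors as in the VV + VH model:

```python
import math

def rmse(observed, predicted):
    """Root mean square error between observed and predicted theta_v."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def rse(observed, predicted, n_params):
    """Residual standard error with n - p - 1 degrees of freedom,
    where p is the number of model predictors."""
    n = len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return math.sqrt(ss_res / (n - n_params - 1))

# Hypothetical volumetric soil moisture values (cm^3/cm^3)
obs = [0.21, 0.25, 0.30, 0.18, 0.27]
pred = [0.20, 0.26, 0.28, 0.19, 0.29]
```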
Results
A well distributed sampling scheme and data collected over two years yielded a well calibrated model to estimate soil moisture in the bare agricultural soils during the dry season (March-May). Linear and multi-linear regression was used to find the relationship between observed soil moisture and backscatter coefficients by deriving the model constants for each date and a combination of dates.
Field Measurements and Laboratory Analysis
Soil moisture was estimated using the gravimetric method for all 62 samples spread over Siruguppa taluk (Figure 1; Table 2). Figure 3 illustrates the range of values that each point in the population takes above and below the mean for the six dates of satellite passes during 2017. It is worth noting that Figure 3 displays the soil moisture values measured on the day of the satellite passes; for this reason, the ranges of variation of soil moisture appear different from those reported in Table 2.
Localized and Generalized Relationships
The concepts of localized and generalized relationships were applied to the in situ measurements of soil moisture and the SAR estimates. A relationship was termed localized if it was obtained using the data points of a single date in the study area, collected in either 2017 or 2018. A generalized relationship was obtained when the data points of all dates in the study area were considered (Figure 5).
The localized models showed R² ranging from 0.62 to 0.75 between σ°VV and θv, revealing a significantly strong relationship in 2017 (Table 3).
Generalized relationships attempted to capture the seasonal effects observed in the study area due to different agroecologies (i.e., the different management and practices within a homogeneous landscape) (Table 4).
Soil Moisture Evaluation
Multi-linear regression and linear regression were applied to determine the values of the empirical constants (A, B, and T) in both the localized and generalized models (Tables 5 and 6).
Discussion
Accurate estimation of θv was envisaged using a linear equation of the σ°VV and σ°VH radar cross sections from bare agricultural soils. A thorough data collection campaign was undertaken during 2017 and 2018, synchronized with the satellite passes. Bare soil areas were mostly post-harvest cropped areas with little or no crop residue, depending on the crop sown. In the study area, 50 % of the agricultural land comprises rice cropped and irrigated from a seasonal stream. Sentinel-1 dual-polarized SAR imagery was used to estimate soil moisture over bare soils using a semi-empirical model, whose parameters were estimated using linear and multi-linear regression. Performance evaluation was conducted based on a 70:30 split of the sampled points, and a low RMSE was found between the observed and estimated soil moisture when the linear combination of σ°VV and σ°VH for 2017 and 2018 was used. In both years, the backscatter and observed soil moisture had a significant positive correlation [2,10,27,28], and VV polarization had a higher backscatter (dB) value than VH polarization. In cross-polarization (VH), signal attenuation occurs due to volumetric scattering [29]. In 2017, soil moisture increased constantly from March 4 to April 27. The R² between the radar backscattering coefficient and the in situ measurements of soil moisture is reported in Table 3. A sudden increase in R² (VV) can be observed on May 15, corresponding to the consecutive rainfall events that occurred during the three days before the date of the satellite pass (Figure 8). This means that the correlation is better for high values of soil moisture, probably because under this condition the dependence of the radar backscattering coefficient on soil moisture outweighs its dependence on surface roughness.
Similarly, an unexpected increase in σ°VH was observed (Figure 5). May 27, 2017 (Table 3) had a low R² value for σ°VH compared with the other dates, due to the rainfall event (Figure 8) and to moisture in weeds or crop residue [24]. In 2018, the R² for the relationship between σ°VV and observed soil moisture was significant during March because of residual soil moisture (i.e., the crop residual moisture influenced the radar backscattering coefficient; Table 3). Residual soil moisture was low on April 4 and May 22 due to evaporative demand and higher between April 16 and May 10 due to consecutive rainfall events (Figure 8). R² did not decrease from March to May, probably due to irregular changes in crop residue moisture, to which σ°VH is sensitive [24]. The R² values for σ°VH during March were relatively low, despite there being no rainfall in the month, because of residual soil moisture from the previous crop. The cumulative moisture due to rainfall during April is reflected in the low R² on April 16 and April 28 (Table 3). A similar relationship existed during 2018 for the linear combination of σ°VV and σ°VH, which improved R² significantly (Table 3).
Localized and Generalized Relationships
To operationalize the accurate estimation of soil moisture for decision making, a global relationship was envisaged considering all dates during the dry season. The R² value of the global relationship for VV polarization during 2017 was 0.68, which was higher than the mean of the local relationships; the generalized relationship was thus found to be more useful for an accurate soil moisture estimate. In addition, the R² for the generalized relationship performed better than the mean of the localized relationships (0.67) with VH polarization. The scenario during 2018 for VV polarization was more influenced by rainfall events in the dry season: R² values ranged from 0.56 to 0.69, with a mean of 0.62 for the localized relationships and 0.66 for the generalized relationship, which again exceeded the local mean. R² was very low for VH polarization due to cumulative moisture from rainfall events, and the generalized relationship produced a lower R² than the mean localized relationship for VH in 2018 (see Tables 3 and 4). The usefulness of a generalized relationship was exhibited in a consistent increase in the accuracy of the soil moisture estimates over the two years. The relationships from VV and VH polarization during 2017 showed significantly lower R² than their linear combination during the same year; similarly, during 2018, the R² of the linear combination was significantly higher than that of the individual polarizations. Finally, the best relationship was obtained when the linear combination of the two polarizations was fitted on the data of both 2017 and 2018 combined, rather than on single polarizations combined for the two years. It was inferred that generalized relationships are more promising for building a model than localized relationships, which may not represent the entire population.
Modeling the Relationships
The relationships of localized and generalized modeling were explored and tested for multicollinearity, especially the linear combination models σ⁰VV + σ⁰VH. Multicollinearity is a statistical phenomenon in which two or more predictor variables in a multiple regression model are highly correlated [30]. To detect multicollinearity, we used an indicator called the variance inflation factor (VIF), which measures and quantifies how much the variance is inflated [30]. If any of the model's VIF values exceed 5 or 10, it is an indication that the associated regression coefficient is poorly estimated because of multicollinearity [31]. The p-value indicates the statistical significance of an independent variable's contribution to the model, as explained in Section 2.4.3.
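As a minimal illustration of the VIF check described above (not the authors' code, and using synthetic values rather than the paper's Sentinel-1 backscatter): with only two predictors, the VIF reduces to 1/(1 − r²), where r is their correlation.

```python
import numpy as np

def vif_two_predictors(x1, x2):
    """Variance inflation factor for a two-predictor regression.

    With exactly two predictors, VIF = 1 / (1 - r^2), where r is their
    Pearson correlation; both predictors share the same VIF.
    """
    r = np.corrcoef(x1, x2)[0, 1]
    return 1.0 / (1.0 - r**2)

# Synthetic stand-ins for sigma0_VV and sigma0_VH (dB); weakly related.
rng = np.random.default_rng(0)
sigma_vv = rng.normal(-12.0, 2.0, 200)
sigma_vh = 0.4 * sigma_vv + rng.normal(0.0, 2.0, 200)

vif = vif_two_predictors(sigma_vv, sigma_vh)
# The text's rule of thumb: VIF > 5 (or 10) flags problematic collinearity.
print(round(vif, 2))
```

With this weak correlation the VIF stays well below the threshold of 5, consistent with the non-collinear backscatter coefficients reported in the text.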
For the generalized models, nine different types of linear relationships were explored with the σ⁰VV and σ⁰VH data (Table 6). This study also showed that the linear combination equations from the localized models performed well, with low VIF (<2) and statistically significant p-values for both backscatter coefficients (Tables 7 and 8).
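The linear-combination modeling discussed here can be sketched as an ordinary least-squares fit of soil moisture against both backscatter coefficients. The data below are synthetic stand-ins for the paper's field measurements; the point is only that adding the second polarization cannot lower the calibration R².

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Hypothetical backscatter (dB) and a synthetic soil moisture signal
# that depends on both polarizations plus noise.
sigma_vv = rng.normal(-12.0, 2.0, n)
sigma_vh = rng.normal(-19.0, 2.0, n)
theta_v = 0.02 * sigma_vv + 0.01 * sigma_vh + 0.55 + rng.normal(0.0, 0.02, n)

def fit_r2(X, y):
    """Least-squares fit y ~ X (with intercept); returns coefficients and R^2."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid.var() / y.var()
    return coef, r2

_, r2_vv = fit_r2(sigma_vv[:, None], theta_v)
_, r2_both = fit_r2(np.column_stack([sigma_vv, sigma_vh]), theta_v)
print(f"R2 (VV only): {r2_vv:.2f}, R2 (VV + VH): {r2_both:.2f}")
```

On this toy data the VV-only fit already explains most of the variance, and the two-polarization model improves on it, mirroring the pattern reported in Tables 3 and 6.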
A collinearity test on the generalized and localized models showed that the VIF for a linear combination of both backscatter coefficients (VV + VH) was <3. Hence, these models are non-collinear. All models showed low p-values, indicating that both backscatter coefficients made a meaningful contribution to the models. When modeling relationships with a linear combination of the individual backscatter coefficients, it was inferred that the individual backscatter coefficients were non-collinear, contributing to R² independently. It was found that the localized models from individual dates varied over time, and any one equation with a low RSE and VIF may not represent the whole season. In addition, the generalized models produced a lower RSE representing the whole season, and were hence better than each localized model.
Validation of Models
Models were validated using 30% of the sampled points. Results for the localized models are summarized in Table 5. In 2017, the lowest RMSE (0.01) was found on 21 April. Figure 8 shows that no rainfall or very weak rainfall was observed on this day. An increase in RMSE was observed on 15 May. Similarly, in 2018, the lowest RMSE was observed on 23 March and the highest (0.03) on 16 April 2018, probably due to the increase in rainfall. The results seem to show that the RMSE of the models is related to the amount of rainfall. Localized models performed better in drier soils.
As far as the generalized models are concerned, the validation results showed that generalized models obtained using co-polar σ⁰VV data provided a lower RMSE than those based on cross-polar σ⁰VH data, both for 2017 and 2018 separately and when taking all data acquired from 2017 to 2018. We also found that the linear combination of the co-polar and cross-polar backscattering coefficients always provided a lower RMSE than the models using only one polarization. The best results came when using the linear combination of polarizations and all the data acquired along the two years, resulting in an RMSE of 0.02 (Table 6). This globalized model was used to produce maps of soil moisture and its spatial variability (Figures 9-11). This is probably the most important result, as a simple multi-linear model using both co-polar and cross-polar Sentinel-1 data acquired over long time periods can reproduce the spatial variability of soil moisture.
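The validation scheme (fit on 70% of the sampled points, compute RMSE on the held-out 30%, as stated in the validation section) can be sketched as follows, again on synthetic stand-ins for the field data rather than the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Synthetic backscatter (dB) and soil moisture (m3/m3) with small noise.
sigma_vv = rng.normal(-12.0, 2.0, n)
sigma_vh = rng.normal(-19.0, 2.0, n)
theta_v = 0.02 * sigma_vv + 0.01 * sigma_vh + 0.55 + rng.normal(0.0, 0.02, n)

# 70/30 calibration/validation split, as in the paper's scheme.
idx = rng.permutation(n)
train, test = idx[: int(0.7 * n)], idx[int(0.7 * n):]

# Fit theta_v ~ sigma_vv + sigma_vh + intercept on the training points.
A = np.column_stack([sigma_vv, sigma_vh, np.ones(n)])
coef, *_ = np.linalg.lstsq(A[train], theta_v[train], rcond=None)

# RMSE on the held-out validation points.
pred = A[test] @ coef
rmse = np.sqrt(np.mean((pred - theta_v[test]) ** 2))
print(f"validation RMSE: {rmse:.3f} m3/m3")
```

The held-out RMSE tracks the noise level of the synthetic data, which is the same behavior the paper exploits: drier, less noisy conditions yield lower validation RMSE.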
Conclusions
This study aimed to accurately estimate the soil moisture of bare, post-harvest agricultural areas in Siruguppa taluk (sub-district) in the Karnataka state of India. Fifty percent of this agricultural area is grown with rice that is irrigated by seasonal canal irrigation. An accurate estimate of volumetric soil moisture (ϑv) was envisaged using a semi-empirical model based on a linear equation of the co-polarized and cross-polarized radar cross sections obtained from Sentinel-1 images. A thorough data collection campaign was undertaken during 2017 and 2018 during the passes of the satellite.
Both localized and generalized models were developed, using each Sentinel-1 image independently and all images together, respectively. Results indicate that the accuracy of the soil moisture estimates increased when using both co-polar and cross-polar images instead of only one polarization. The use of localized models revealed that the RMSE of soil moisture estimates decreased corresponding to dry periods, with little or no rainfall. This indicates that better estimates of soil moisture can be obtained for drier soils. Turning to globalized models, soil moisture estimates with lower RMSE were observed when merging all data acquired in 2017 and 2018, and co-polar and cross-polar images, with an R² of 0.7 and an RMSE of 0.02. The availability of a large amount of in situ data collected over a large area demonstrated that a globalized linear model based on the joint use of co-polar and cross-polar C-band SAR images acquired over a long time period, with a short revisiting time of twelve days, could capture spatial variability in soil moisture. This is an important result, as the availability of Sentinel-1 data can provide farmers with timely and accurate estimates of soil moisture and enable the mapping of its spatial variability by using simple semi-empirical models. This information, when provided in the immediate weeks and months preceding the cropping season, could be very crucial in determining planting dates and assessing early season plant growth, thereby playing a key role in influencing productivity.
|
2020-05-28T09:14:32.440Z
|
2020-05-22T00:00:00.000
|
{
"year": 2020,
"sha1": "32c2d28d84a5a9437386b4fca8e9e67fb4746fa5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/12/10/1664/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2b412f1caad46f31729ee2db3a603495fc747c64",
"s2fieldsofstudy": [
"Environmental Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
259246201
|
pes2o/s2orc
|
v3-fos-license
|
Sole microbiome progression in a hatchery life cycle, from egg to juvenile
Recirculating aquaculture systems (RAS) pose unique challenges in microbial community management since they rely on a stable community with key target groups, both in the RAS environment and in the host (in this case, Solea senegalensis). Our goal was to determine how much of the sole microbiome is inherited from the egg stage, and how much is acquired during the remainder of the sole life cycle in an aquaculture production batch, especially regarding potentially probiotic and pathogenic groups. Our work comprises sole tissue samples from 2 days before hatching up to 146 days after hatching (−2 to 146 DAH), encompassing the egg, larval, weaning, and pre-ongrowing stages. Total DNA was isolated from the different sole tissues, as well as from live feed introduced in the first stages, and the 16S rRNA gene was sequenced (V6-V8 region) using the Illumina MiSeq platform. The output was analysed with the DADA2 pipeline, and taxonomic attribution with SILVAngs version 138.1. Using the Bray–Curtis dissimilarity index, both age and life cycle stage appeared to be drivers of bacterial community dissimilarity. To try to distinguish the inherited community (present since the egg stage) from the acquired community (detected at later stages), different tissues were analysed at 49, 119 and 146 DAH (gill, intestine, fin and mucus). Only a few genera were inherited, but those that were accompany the sole microbiome throughout the life cycle. Two genera of potentially probiotic bacteria (Bacillus and Enterococcus) were already present in the eggs, while others were acquired later, in particular forty days after live feed was introduced. The potentially pathogenic genera Tenacibaculum and Vibrio were inherited from the eggs, while Photobacterium and Mycobacterium seemed to be acquired at 49 and 119 DAH, respectively. Significant co-occurrence was found between Tenacibaculum and both Photobacterium and Vibrio.
On the other hand, significantly negative correlations were detected between Vibrio and Streptococcus, Bacillus, Limosilactobacillus and Gardnerella. Our work reinforces the importance of life cycle studies, which can contribute to improve production husbandry strategies. However, we still need more information on this topic as repetition of patterns in different settings is essential to confirm our findings.
Introduction
Recirculating aquaculture systems (RAS) have been developed to reduce water usage through waste management, and so, making intensive fish production compatible with environmental sustainability (Piedrahita, 2003). However, these types of systems pose unique challenges in microbial community management, being extremely demanding to maintain a stable and healthy microbial community within the RAS environment (Schreier et al., 2010;Martins et al., 2013).
Microbiomes usually form specific communities in different physical and biological environments, with a dynamic and interactive nature crucial for the functioning and health of their hosts (Berg et al., 2020). Due to this dynamic nature, bacterial colonization of a host can be heavily influenced by diet and environmental conditions (Bledsoe et al., 2016; Wilkes et al., 2019). In fish, this translates, for example, into the role live feed plays in early development stages (Califano et al., 2017) as a vector for potentially pathogenic bacteria of the genus Vibrio (Montanari et al., 1999; Olafsen, 2001). The gut microbiome has already been extensively studied due to its role in reinforcing the digestive and immune systems of the fish (Talwar et al., 2018). The composition of the fish diet affects gut microbiome composition; thus, the different diets applied at the different stages of fish development are expected to influence gut microbial communities over the life cycle (Stephens et al., 2016). Because the live feed administered in the early stages of development is known for being relatively poor in nutrients, the richness of the fish diet is higher in later stages (with commercial feed), which conflicts with the importance of early bacterial colonization (Yukgehnaish et al., 2020). Another factor contributing to an improvement in bacterial colonization is the transition to RAS, as the establishment of the fish microbiome can be affected by the change in environmental conditions, with fish developing different profiles after this period (Steiner et al., 2021).
There is a multiplicity of ecological processes in microbiomes that affect community assembly (Goldford et al., 2018), such as selective pressures and nutrient availability, which causes cross-feeding networks with microbes communicating and trading metabolites and services, especially relevant in anaerobic environments (Marx, 2009). On the other hand, competitive interactions may also play an important role in shaping host microbial communities (Coyte et al., 2015). In aquaculture, and RAS in particular, life cycle studies are still rare, although they are required to detect temporal changes of the microbiome along farming cycles to identify the core taxa for future modulation (Infante-Villamil et al., 2021).
As mentioned above, microbiome studies are important to better understand how pathogen outbreaks occur and identify dysbiosis events. The community in RAS, particularly in the biofilter (a sector for optimal but undifferentiated bacterial growth used for ammonia removal from the system), influences the farmed fish that is in constant contact with the water, with its own prokaryotic community (Laurent et al., 2000) that also provides continuity between different physical and biological environments (host and biofilter, for example). Therefore, in this complex and interactive environment, there is a risk that disruptions may cause pathogenic outbreaks by opportunist bacteria (Blancheton et al., 2013). Groups commonly associated with disease outbreaks in sole are the Tenacibaculum genus (Gourzioti et al., 2018), Vibrio (Austin, 2010) and Photobacterium (Toranzo et al., 2005). The first two have also been linked in a pathogenic dysbiosis event (Wynne et al., 2020). The family Mycobacteriaceae also includes a large number of pathogenic bacteria for a number of different fish species (Delghandi et al., 2020).
The prokaryotic community can also result in improved nutrition and effective disease control by inhibiting potential fish pathogens (Irianto and Austin, 2002). In aquaculture, several microbial species, mainly present in the fish gut and water, have already been identified as potentially probiotic, with several health benefits such as improved fish productivity, resistance to diseases and increased immune functions (el-Saadony et al., 2021). Microbiome studies can then help to guide the best practices to promote the persistence of these agents (Borges et al., 2021). Some of the bacterial orders already identified as having potential probiotic interest are Lactobacillales (Alonso et al., 2019) and Bifidobacteriales (Quigley, 2017). Additionally, the genera Bacillus (Kuebutornye et al., 2020), Roseobacter, Phaeobacter, Paenibacillus, Pseudoalteromonas, Alteromonas, Pseudomonas, Aeromonas, Arthrobacter, Clostridium (Ringø, 2020), Saccharomyces (Gaggìa et al., 2010), Streptomyces (Teng Hern et al., 2019), and Shewanella (García de la Banda et al., 2010) have also been linked to this activity.
Our goal in this paper is to start filling the gap in microbiota analysis across the fish life cycle in aquaculture; that is, to characterize the bacterial community along a farming cycle, accompanying a batch from egg to the pre-ongrowing stage. In this study we were able to evaluate the temporal microbiota progression across the sole life cycle, providing a reference microbiota map for this species at different stages of development. In addition, we were able to determine how much of the sole microbiome is inherited from the egg stage, and how much is acquired in the different production stages. This work improved the background knowledge needed to develop future microbiome modulation in sole production. Additionally, the results presented here can have a direct impact on production husbandry strategies.
Sample collection
This study was performed in partnership with an aquaculture production unit that provided the samples, a sole hatchery (Safiestela S.A.) located in Estela, Portugal. The pre-ongrowing and weaning tanks operate in a recirculating aquaculture system (RAS), while the egg and larval stages are kept in a flow-through water system. The water circulation of the pre-ongrowing (PO) and weaning (WE) systems is displayed in Supplementary Figure S1 and was previously described (Almeida et al., 2021). Briefly, after circulating through the tanks, wastewater is mechanically filtered with a rotary drum filter (mainly for particulate organic matter removal), followed by biological filtration with a moving-bed biofilter reactor (volume of 150 m³ in the PO and 25 m³ in the WE system). After the degasification column, where water trickles down, the water passes through the skimmer before returning to the tanks. The total water volume is 370 m³ in the PO system and 60 m³ in the WE system. In both systems, the water recirculation rate is approximately 400% per hour, the feeding regime is approximately 2% biomass/day, and the fish density varies between 2.5 and 5 kg/m².
Frontiers in Microbiology 03 frontiersin.org

A description of the age, system, life cycle stage, and feed of the collected samples is presented in Figure 1. Fish larvae were fed rotifers from 2 to 5 days after hatching (DAH) and brineshrimp from 7 to approximately 75 DAH (slightly after entering the WE system). Commercial feed (CF) A, for flatfish larvae with no potential probiotic added, was introduced at 65 DAH and replaced by CF B, a nursery feed supplemented with the potential probiotic Pediococcus acidilactici, at 100 DAH. The exact amount of P. acidilactici in the commercial diet formulation was not disclosed, but the BACTOCELL CNCM I-4622 strain was used. For this study, the same production batch was accompanied throughout the development stages and tissue samples were collected in duplicate. Eggs were collected at −2 DAH and larvae at 2 and 14 DAH. For juveniles, separate tissues were collected for microbiome characterization (caudal fin, gills, mucus and intestine) at the weaning system (49 DAH) and at the beginning and end of the pre-ongrowing stage (119 and 146 DAH, respectively). For each sample type, on each day, duplicate samples were collected: one fish per sample in the case of the juveniles, and approximately 2 mL of dry volume in the case of egg and larvae samples. Live feed samples were also collected in duplicate. Information about temperature, salinity, and pH at the sampling time can be found in Supplementary Table S1.
DNA extraction and sequencing
Total DNA was isolated from the different matrices (eggs, larvae, caudal fin, gills, mucus, intestine, and live feed), in duplicate, with the DNeasy PowerSoil kit (QIAGEN, Merck KGaA, Darmstadt, Germany). Samples were prepared for Illumina sequencing by 16S rRNA gene amplification of the bacterial community. The DNA was amplified for the hypervariable V6-V8 region with specific primers and further reamplified in a limited-cycle PCR reaction to add sequencing adapters and dual indexes. First PCR reactions were performed for each sample using the KAPA HiFi HotStart PCR Kit according to the manufacturer's suggestions, with 0.3 μM of each PCR primer (forward B969F 5′-ACGCGHNRAACCTTACC-3′ and reverse BA1406R 5′-ACGGGCRGTGWGTRCAA-3′; Michl et al., 2019) and 50 ng of template DNA in a total volume of 25 μL. The PCR conditions involved a 3 min denaturation at 95°C, followed by 35 cycles of 98°C for 20 s, 60°C for 30 s and 72°C for 30 s, and a final extension at 72°C for 5 min. Second PCR reactions added indexes and sequencing adapters to both ends of the amplified target region according to the manufacturer's recommendations. Negative PCR controls were included for all amplification procedures. PCR products were then one-step purified and normalized using the SequalPrep 96-well plate kit (ThermoFisher Scientific, Waltham, United States) (Comeau et al., 2017), pooled and pair-end sequenced on the Illumina MiSeq® sequencer with V3 chemistry, according to the manufacturer's instructions (Illumina, San Diego, CA, United States) at Genoinseq (Cantanhede, Portugal).
Sequence processing and analysis
To obtain an amplicon sequence variant (ASV) table, the DADA2 pipeline (Callahan et al., 2016) was applied to our dataset. This was done in the R environment (version 4.1.2; Copyright 2019, The R Foundation for Statistical Computing) with the package dada2 (v1.16.0). Primer removal was performed within the DADA2 pipeline using the filterAndTrim function. Sequence filtering, trimming, error rate learning, dereplication, chimera removal and ASV inference were performed with default settings. For taxonomic attribution, the SILVAngs version 138.1 database was used (Quast et al., 2013). Taxa classified at the kingdom level as Eukaryota, at the order level as Chloroplast and at the family level as Mitochondria were removed.
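Among the DADA2 steps listed above, dereplication is the easiest to illustrate: identical reads are collapsed into unique sequences with their counts. The paper runs the real pipeline in R (dada2); the toy Python sketch below is only conceptual.

```python
from collections import Counter

def dereplicate(reads):
    """Collapse identical reads into (sequence, count) pairs,
    most-abundant first (ties broken alphabetically for determinism).

    This mimics the dereplication step of ASV pipelines such as DADA2,
    which feed unique sequences plus abundances into error modeling.
    """
    counts = Counter(reads)
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))

# Toy reads standing in for quality-filtered amplicon sequences.
reads = ["ACGT", "ACGT", "ACGA", "ACGT", "ACGA", "TTTT"]
print(dereplicate(reads))  # [('ACGT', 3), ('ACGA', 2), ('TTTT', 1)]
```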
FIGURE 1
A resume of the age (days after hatching) at which fish samples were collected, the system they were collected from, and the life cycle stage associated and feed.
For the general bacterial community analysis, the packages phyloseq (v1.38.0) and ggplot2 (v3.3.5) were used for data handling and visualization. Alpha diversity was calculated using the Observed ASVs metric and the Shannon index with vegan (v2.5-7). Beta-diversity was calculated with the Bray-Curtis dissimilarity index and plotted with non-metric multidimensional scaling (NMDS); this was also done for the target-group subsets (potentially pathogenic and potentially probiotic). Dissimilarity results were tested by permutational multivariate ANOVA (PERMANOVA) using the Adonis function (vegan) for beta-group significance (p-values lower than 0.05); the parameters age (DAH), sample type (egg, larvae, fin, gills, mucus and intestine), life cycle stage (egg, larvae, juveniles) and system (egg, larvae, weaning, pre-ongrowing) were tested.
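The Bray-Curtis index used above (computed via vegan in R) is simple to compute by hand; the sketch below is an illustrative Python version on invented counts, not the study's data.

```python
import numpy as np

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors:
    sum(|u_i - v_i|) / sum(u_i + v_i); 0 = identical, 1 = no shared taxa."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.abs(u - v).sum() / (u + v).sum()

# Toy ASV counts for three samples (columns = taxa), invented for illustration.
egg = [10, 0, 5, 0]
larva = [8, 2, 4, 0]
juvenile = [0, 9, 1, 6]

print(bray_curtis(egg, larva))      # small: similar communities
print(bray_curtis(egg, juvenile))   # large: mostly different taxa
```

Pairwise dissimilarities like these form the matrix that NMDS then embeds in two dimensions, which is what drives the age and life-cycle grouping seen in Figure 2.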
To be part of the core microbiome, we considered the bacterial genera present in at least 75% of all samples of the sole life cycle (prevalence), with an abundance higher than 0% (detection threshold), using the microbiome R package (v1.16.0). Additionally, Venn diagrams were built to analyze the membership of shared taxa across the sole life cycle, with tissue samples separated by life cycle stage. Venn diagrams were obtained using the venn R package (v1.10) to display the number of shared and exclusive taxa between whole-body samples (egg and larvae) and each sole tissue (fin, gill, intestine, mucus) at different ages (49, 119, and 146 days).
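The core-microbiome rule described above (detection threshold 0, prevalence 0.75, as implemented by the microbiome R package) can be mimicked in a few lines; the counts below are invented for illustration, not the study's data.

```python
import numpy as np

def core_taxa(abundance, taxa, prevalence=0.75, detection=0.0):
    """Return taxa detected (abundance > detection) in at least
    `prevalence` fraction of samples, the core-microbiome rule
    used in the text (75% prevalence, detection threshold 0)."""
    abundance = np.asarray(abundance, float)       # rows = samples, cols = taxa
    prev = (abundance > detection).mean(axis=0)    # fraction of samples per taxon
    return [t for t, p in zip(taxa, prev) if p >= prevalence]

taxa = ["Vibrio", "Tenacibaculum", "Photobacterium", "Bacillus"]
counts = [
    [5, 3, 0, 1],   # egg
    [4, 2, 0, 0],   # larva
    [6, 1, 2, 0],   # intestine, 49 DAH
    [2, 4, 3, 0],   # intestine, 146 DAH
]
print(core_taxa(counts, taxa))  # ['Vibrio', 'Tenacibaculum']
```

Here only the two taxa present in every sample clear the 75% prevalence bar, analogous to how Vibrio and Tenacibaculum persist across the sole life cycle in the results.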
To explore our target groups, potentially probiotic and potentially pathogenic bacterial organisms, these groups were identified at different taxonomic levels to mitigate the effects of unclassified sequences and (in the case of probiotics) to potentially find new promising genera for further studies. For the potentially probiotic group, we selected all genera from the orders Lactobacillales (Alonso et al., 2019) and Bifidobacteriales (Quigley, 2017), and also the genera Bacillus, Roseobacter, Phaeobacter, Paenibacillus, Pseudoalteromonas, Alteromonas, Pseudomonas, Aeromonas, Arthrobacter, Clostridium (Ringø, 2020), Saccharomyces (Gaggìa et al., 2010), Streptomyces (Teng Hern et al., 2019), and Shewanella (García de la Banda et al., 2010), due to their previously identified probiotic potential and activity. The genera Tenacibaculum (Gourzioti et al., 2018), Vibrio (Austin, 2010), Photobacterium (Toranzo et al., 2005) and Mycoplasma (Delghandi et al., 2020) were selected as potentially pathogenic, as demonstrated in previous studies. Notwithstanding the potential of our selected taxonomic groups to contain probiotic or pathogenic species, we must acknowledge that some of these genera also contain non-probiotic or non-pathogenic organisms. Thus, one cannot infer direct fish health effects (probiotic or pathogenic) from the detection of these groups in our study. When deciding where to categorize these genera, we considered as pathogenic only those previously associated with disease outbreaks in Solea senegalensis. For the potential probiotic list, we gathered those with probiotic activity described in the literature that had not yet been described as pathogenic for Solea senegalensis. A correlation matrix between the relative abundances of our target groups was also built with significant correlations (Spearman pairwise, p < 0.05) using the R packages Hmisc (v4.1.1) and corrplot (v0.84).
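The significance-filtered Spearman matrix described above can be sketched as follows. The paper builds it with Hmisc/corrplot in R; here we use SciPy on invented relative abundances, where Vibrio and Bacillus are deliberately constructed to be negatively related.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 40

# Toy relative abundances: Bacillus constructed to move opposite Vibrio,
# Tenacibaculum independent noise. These are illustrative, not study data.
vibrio = rng.uniform(0, 1, n)
bacillus = 1.0 - vibrio + rng.normal(0, 0.05, n)
tenacibaculum = rng.uniform(0, 1, n)

groups = {"Vibrio": vibrio, "Bacillus": bacillus, "Tenacibaculum": tenacibaculum}
names = list(groups)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        rho, p = spearmanr(groups[a], groups[b])
        if p < 0.05:  # keep only significant pairs, as in the matrix
            print(f"{a} vs {b}: rho={rho:.2f} (p={p:.1e})")
```

Only the engineered Vibrio-Bacillus pair is reliably significant, echoing the negative Vibrio correlations reported in the abstract.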
Ethics declaration and data availability
The animals used in this work were not subjected to any experimental protocol and were a part of the routine procedures of a commercial hatchery facility. All animals were handled by the fish farm employees, the euthanasia method used was an anaesthetic overdose of the commercial anaesthetic Aquacen benzocaine 200 mg/ mL (CENAVISA, S.L., Spain), according to manufacturer instructions, and following SEA EIGHT's Veterinary Plan. According to the Portuguese legislation DL N° 113/2013, this work is exempted from the need for ethical approval. All methods are reported in accordance with ARRIVE guidelines.
The datasets generated and/or analysed during the current study are available in the European Nucleotide Archive (ENA) repository, accession number PRJEB55703.
Results
The 16S rRNA gene sequencing dataset had minimum and maximum read counts per sample (after trimming) of 7,776 and 84,097, respectively. The mean read count across all samples was 32,709; the complete list of read counts per sample is presented in Supplementary Table S2.
General bacterial community
The most abundant phyla were Proteobacteria (42-91%), Bacteroidetes (or Bacteroidota, 2-40%) and Firmicutes (0-39%). The complete distribution at this taxonomic level can be found in Supplementary Figure S2, and at the genus level (abundance >1%) in Supplementary Figure S3 (see Supplementary Table S3 for an ASV breakdown of abundances). Overall, alpha diversity indexes did not appear to be influenced by the different phases of the sole life cycle or the type of tissue at the juvenile stage (Supplementary Figure S4 and Supplementary Table S2). The NMDS distribution of the Bray-Curtis dissimilarity index had a stress value of 0.166 and is plotted in Figure 2. It shows an apparent grouping by age and life cycle stage. All the parameters tested, age (DAH), sample type (egg, larvae, fin, gills, mucus and intestine), life cycle stage (egg, larvae, juveniles) and system (egg, larvae, weaning, pre-ongrowing), had significant p-values in the Adonis test, with the system explaining the highest percentage of variability (Supplementary Table S4). However, only the life cycle stage had a non-significant homogeneity of dispersion test. The percentage of variability explained and the dispersion test indicate that both "system" and "life cycle stage" were the most important factors shaping community dissimilarity.
The core microbiome, at the genus level, can be consulted in Figure 3. Twelve genera are part of this core microbiome: Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium, Vibrio, Pseudoalteromonas, Tenacibaculum, Cutibacterium, Methylobacterium-Methylorubrum, Delftia, Pseudomonas, Paracoccus, Peredibacter, Halomonas, and Marinobacter. Venn diagrams (Figure 4) were used to distinguish the inherited from the acquired community along the sole life cycle, by analyzing the shared genera across sample types. In the caudal fin bacterial community, there are ten genera (representing 2.9% of the genera in this analysis) that are present across the entire life cycle (Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium, Cutibacterium, Delftia, Halomonas, Marinobacter, Methylobacterium-Methylorubrum, Pseudoalteromonas, Sulfitobacter, Unclassified Cryomorphaceae, …).

FIGURE 2

Beta-diversity calculated with Bray-Curtis dissimilarity index and plotted with non-metric multidimensional scaling (NMDS) was performed for the general prokaryotic community and for the subsets of the target groups (potentially pathogenic and potentially probiotic). Sample shapes represent the life cycle stage of the fish and gradient color represents their respective age.
FIGURE 3
Members of the core microbiota were determined with a detection threshold of 0 and a prevalence threshold of 0.75.
Target bacterial groups
The relative abundance distribution of the target groups at the genus level can be seen in Figure 5 and Supplementary Table S5. For the genera associated with potentially probiotic bacteria, it was observed that sequences from Bacillus, Enterococcus, Phaeobacter, Pseudoalteromonas, Pseudomonas and Shewanella were already present in the eggs (2 days before hatching). Shewanella disappears at 2 DAH and was only detected again in the WE system (at 49 DAH), after the two sources of live feed (rotifer and brineshrimp) were introduced. Bacillus and Enterococcus also drop below the detection limit (no sequences obtained) at 14 DAH, and only the first re-emerges in the WE system. The remaining three genera (Pseudoalteromonas, […] Alteromonas, Roseobacter and Aeromonas. Despite no major changes observed in total relative abundance of potentially probiotic genera, the number of detected genera increased from four at the end of the larval stage (14 DAH) to 22 and 21 in the intestine at days 49 and 119, respectively, and then back to 4 at 146 DAH. With respect to the potentially pathogenic genera, Tenacibaculum and Vibrio accompany the sole microbiome through its development, from egg to 146 DAH. Photobacterium and Mycoplasma were detected, respectively, at 49 DAH and 119 DAH. Photobacterium was also detected in brineshrimp and Mycoplasma was detected in both brineshrimp and rotifer samples.

FIGURE 4

Venn diagram of the shared taxa between whole-body samples (Egg and Larvae) and different types of tissue collected from later stages of the sole: fin (A), gill (B), intestine (C), and mucus (D).
The Spearman correlation matrix between the relative abundances of potentially probiotic and pathogenic genera can be found in Figure 6. There are no significant correlations between the potentially pathogenic genera. Almost all correlations between potentially probiotic taxa are positive, despite two exceptions (Alteromonas with […]).

FIGURE 5

Relative genus distribution of the target groups (probiotic and potentially pathogenic) ordered by age and colored by sample type (Egg, Larvae, Gill, Intestine, Mucus, Fin, Rotifer or Brineshrimp), with a bar plot summary of overall target group composition by sample. Samples with no detectable abundance of each functional group have been removed.
Discussion
Recirculating aquaculture systems have a unique challenge in managing a stable and functional microbial community (Schreier et al., 2010; Martins et al., 2013), with communities that are crucial for the health of the host (Berg et al., 2020) and that can be heavily influenced by diet and environmental conditions (Bledsoe et al., 2016; Wilkes et al., 2019). To fill the gap in life cycle studies, crucial to improving microbiome management strategies, we characterized the bacterial community along a farming cycle, from egg to pre-ongrowing juveniles, evaluating the temporal microbiota evolution.
We found that alpha-diversity indexes did not change throughout development, although previous studies on a different species (Atlantic cod) refer to a loss of bacterial species diversity when artificial feeding is introduced (Ringø et al., 2006). Besides the fish species difference, the mentioned study was also technically very different, since bacterial diversity was explored through isolation (Ringø et al., 2006), possibly missing the difficult-to-cultivate members of the community. The lack of substantial changes in bacterial alpha-diversity observed over the course of our study is worth underscoring, since it suggests a crucial importance of the early stages of fish development in establishing microbial community diversity.
The most abundant phyla in this dataset, Proteobacteria, Bacteroidetes and Firmicutes, are commonly found to be the most abundant in aquaculture systems (Bledsoe et al., 2016; Wilkes et al., 2019). At the genus level, it appears that there are no dominating genera across the bacterial community, and there is some variability in the relative abundance of genera detected between duplicates of the same sample. This variability in the bacterial community composition may be a consequence of the formation of heterogeneous physical and biological micro-environments within the fish host, with specific bacterial communities, as described by other studies (Zhang et al., 2019; Sylvain et al., 2020).
The term "core microbiome" has become widely used in microbial ecology to describe the set of microbial taxa that characterize a host or environment of interest (Neu et al., 2021). In this work we used a shared core microbiome analysis to infer possible conserved ecological Genus-genus interactions between target groups: potentially probiotic and potentially pathogenic. The correlation matrix represents significant interactions (p < 0.05) using Spearman pairwise correlation coefficient.
Frontiers in Microbiology 09 frontiersin.org roles and found that it was composed of twelve genera, four of them (Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium, Vibrio, Pseudoalteromonas and Tenacibaculum) present in all tissues analysed and in all growth stages. As mentioned before, two of them are potentially pathogenic (Vibrio and Tenacibaculum). One thing to keep in mind is that both the live feed and the border sole tissues collected are in permanent contact with the water, and when studying these frontier environments its complex to disentangle the host from the environment community. Indeed, three of these genera have already been identified in the water, tank biofilm and biofilter carriers in this aquaculture unit, Vibrio, Pseudoalteromonas and Tenacibaculum (Almeida et al., 2021). Using Venn diagrams, we found that the inherited community had very few genera represented (2.2-2.9%), all of them included Tenacibaculum and Vibrio. However, to our knowledge, this is the first time this type of characterization is performed in an aquaculture setting. Studies that accompany the evolution of the microbiome are rare, although there are some studies that accomplished a similar characterization, but only in wild populations. In migratory wild salmon, there is a microbiota community destabilization in migratory phases of the life cycle (Llewellyn et al., 2016). Another study found that, although deep-sea anglerfish microbiomes are dominated by the same genera from larvae to adult, their characteristic bacterial bioluminescent symbionts were not present in the early stages and were acquired from the environment (Freed et al., 2019).
Two target groups were selected (potentially pathogenic and potentially probiotic) as having the most impact during the sole life cycle in an aquaculture production batch. We found a sharp increase in the number of potentially probiotic genera when sole moved to the WE system (49 DAH), around 40 days after live feed was introduced in the diet. Commercial feed did not appear to substantially increase the number of potentially probiotic genera at 119 or 146 DAH. Despite carrying Pediococcus acidilactici in its formulation, commercial feed B did not consistently increase the abundance of Pediococcus in the sole intestines fed with this diet (119 and 146 DAH). When considering the increased number of potentially probiotic genera in the fish tissues after day 49, it is worth noting that most of these genera were not present in the feed itself (live or commercial). An explanation could be that components in these feeds act as prebiotics, that is, nutrients that are not digested by the fish but that may fortify certain components of the intestinal microbiota by stimulating the growth and activity of particular bacteria (Ringø et al., 2010). Indeed, prebiotic supplementation has shown potential as a strategy to overcome chronic stress-induced disease susceptibility in farmed S. senegalensis (Azeredo et al., 2019). Although reaching its highest number at 119 DAH, the number of potentially probiotic genera dropped abruptly at 146 DAH with no change in the feed, raising the question of whether this was a consequence of husbandry or an unsuccessful establishment of the potential probiotic community. We should also note that most of the prokaryotic diversity is found in the rare biosphere (Pascoal et al., 2020), a genetic pool mostly undetected at the sequencing depth applied in this study, and some rare taxa can remain rare while others may grow abundant when conditions change.
This seed bank can include low-abundance pathogenic communities, and monitoring it could be useful for early identification, but it can also support host functions specific to the aquaculture environment (Pascoal et al., 2022). This shift from undetectable to detectable groups may happen when the production alters the diet (especially between feed transitions): as the nutrients available change, the diversity of certain genera increases momentarily and then declines as environmental conditions stabilize, explaining the drop of potentially probiotic genera at 146 DAH. Much like in the human gut microbiome, a diverse diet provides a competitive advantage to low-abundance taxa, and the more diverse the microbiome, the more adaptable it will be to perturbations (Heiman and Greenway, 2016). Studies in chinook salmon also found that the gut microbiome is shaped by the environment, both by water and by formulated feed (Steiner et al., 2021). However, high inter-individual variation suggests that host physiology itself may affect the community structure as much as environmental conditions (Fossmark et al., 2021; Hossain et al., 2021). In our study, it was also observed that some genera associated with nitrifying activity (e.g., Nitrosomonas and Nitrospira) increased their relative abundances when fish were introduced to RAS, during the pre-ongrowing stage (Supplementary Table S5). Other studies have also found colonization of this group in fish tissue under similar conditions (van Kessel et al., 2016). Most probably, this is a consequence of nitrifying groups circulating from biofilters to the different compartments of the RAS unit, where they were found to occur (Almeida et al., 2021).
Of the potentially pathogenic genera, two (Tenacibaculum and Vibrio) appear to be acquired at the egg stage, accompanying the sole microbiome through its development from egg to 146 DAH. The other two, Photobacterium and Mycoplasma, appear to colonize detectably later in the life cycle. In this study they were identified in the rotifers and brine shrimp, and thus the live feed could be a potential vector, as has been previously demonstrated (Hurtado et al., 2020). This early diet-driven microbiome development can have a significant impact on the future fish microbiome (Wilkes et al., 2019). Differentiating which pathogenic genera are inherited from those that the fish acquires throughout production is paramount. By identifying where in the production cycle the fish is exposed to these groups, husbandry improvements can be implemented to control them. However, if these pathogens are inherited from a wild broodstock, it may be difficult to safely remove them in a sustainable way. It is also important to keep in mind that the genera included in this study are potentially pathogenic, but are not composed solely of pathogenic species. In fact, the genus Vibrio is an important ecological marker, as it is widely abundant in riverine, estuarine, and marine aquatic environments (Hurtado et al., 2020) and is one of the most diverse marine bacterial genera (Gomez-Gil et al., 2014). In the case of Tenacibaculum, out of 28 total species (Parte et al., 2020), only seven are generally associated with disease outbreaks: T. maritimum, T. soleae, T. discolor, T. gallaicum, T. dicentrarchi, T. finnmarkense and T. ovolyticum (Fernández-Álvarez and Santos, 2018). There are technologies available that may help to increase the resolution of pathogenic species identification, for example, long-read sequencing with Pacific Biosciences (PacBio) or Oxford Nanopore Technology (ONT).
These technologies have seen recent advancements that now provide higher accuracy and reads of up to 60 kb (Hu et al., 2021), enabling the sequencing of the complete 16S rRNA gene. Additionally, after the detection of groups of interest by an overall sequencing approach, like the one implemented in this study, targeted approaches to detect pathogenic markers of a subset of species could be implemented in a production setting to provide confirmation of pathogenicity. However, this was out of the scope of this work, which aimed at understanding the microbial progression along the fish life cycle, and not the occurrence of disease/stress. In the correlation matrix, we found that six genera with potential probiotic activity were significantly negatively correlated with Vibrio; two of them, Bacillus and Streptomyces, have already been described as potential inhibitors of Vibrio pathogen species (Vaseeharan and Ramasamy, 2003; Teng Hern et al., 2019). Only one genus had a negative correlation with Tenacibaculum (Pseudomonas), two with Mycoplasma (Pseudoalteromonas, Streptomyces) and two with Photobacterium (Phaeobacter and Alteromonas). For this analysis, we must recognize the potential biases in NGS community correlation studies that may result in misleading positive correlations, derived from the fact that more taxa are detected in deeply sequenced samples and therefore taxa co-vary with sequencing depth (Faust et al., 2015). Attesting to this, the correlation matrix shows a positive interaction between Shewanella and Photobacterium, although it has been amply reported that the former increases resistance to the latter (García de la Banda et al., 2010, 2012; Vidal et al., 2016). These detected correlations can be useful to unveil possible interactions, but more studies are needed to confirm or discard mechanistic hypotheses.
It is also relevant to note that Photobacterium has a total of 14 positive correlations with potentially probiotic taxa, which might likewise be a consequence of the positive bias. A similar observation occurred between Phaeobacter and Tenacibaculum (Edward et al., 2022). With the limits of these techniques, it is unreliable to distinguish positive bias from non-specific potentially probiotic activity when positive correlations occur between pathogenic and potentially probiotic taxa in our data (Phaeobacter and Vibrio), or in cases like Bacillus, which shows a negative correlation with Vibrio but a positive one with Photobacterium. The genera Streptococcus, Phaeobacter and Limosilactobacillus behave similarly. Four genera had exclusively negative correlations with the potentially pathogenic bacteria (Alteromonas, Pseudomonas, Gardnerella, Lactobacillus); these might therefore be the most promising for future empirical studies.
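The pairwise associations discussed above are Spearman correlations; Spearman's rho is simply the Pearson correlation computed on the ranks of the data, with ties assigned average ranks. A dependency-free sketch (the significance test behind the p < 0.05 threshold is omitted):

```python
def _rank(values):
    # Average ranks (ties share the mean of their 1-based positions).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = _rank(x), _rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

print(spearman([2, 4, 6], [10, 30, 20]))  # → 0.5
```

Applied to per-sample genus abundance vectors, this yields one rho per genus pair; in practice a multiple-testing-aware p-value would then select which entries of the matrix count as significant.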
Conclusion
This work aimed to describe the sole microbiome development throughout the production cycle in a RAS. Through a description of the inherited and acquired community in the different tissues analysed at different production and life stages, we hope to promote the emergence of life cycle studies in aquaculture and to underscore its applicability. We found that the bacterial community was significantly altered throughout the Solea senegalensis early development. Two potentially probiotic genera were inherited from the egg stage (Bacillus and Enterococcus), but the main increase in potentially probiotic abundance and diversity occurred around 40 days after live feed was introduced in the diet (at the weaning stage). Notwithstanding this increase, the establishment of this community in the following development stages was not successful. Regarding potentially pathogenic genera, two appear to be inherited (Tenacibaculum and Vibrio), and two are suggested to be acquired during production (Photobacterium and Mycoplasma). These results are relevant, because acquired potentially pathogenic groups may be prophylactically treated with improvement in husbandry conditions, but those that are inherited from the egg stage may be difficult to safely eradicate.
Our study has conducted a comprehensive description of the bacterial community in different life cycle stages of Solea senegalensis, which is, to our knowledge, the first of its kind. By analyzing the composition of this community, particularly through the definition of key target groups and of the inherited and acquired community in the production cycle, we have highlighted the importance of whole life cycle studies for understanding the vulnerability of the stages of fish production, with a direct impact on husbandry strategies. The shifts in the composition of key components of the Solea senegalensis gut microbiome during its life cycle open important questions related to the functional significance of the observed taxonomic changes, in terms of potentially probiotic activity and pathogenic incidence in the life cycle of this fish, that must be explored in future investigations.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: European Nucleotide Archive (ENA) repository, accession number PRJEB55703.
Ethics statement
Ethical review and approval were not required for the animal study because the animals used in this work were not subjected to any experimental protocol and were a part of the routine procedures of a commercial hatchery facility. All animals were handled by the fish farm employees; the euthanasia method used was an anaesthetic overdose of the commercial anaesthetic Aquacen benzocaine 200 mg/ml (CENAVISA, S.L., Spain) following SEA EIGHT's Veterinary Plan. According to the Portuguese legislation DL N° 113/2013, this work is exempted from the need for ethical approval. All methods are reported in accordance with ARRIVE guidelines.
Author contributions
DA, MS, CM, IB, and AM had substantial contributions in the conception and design of the work. DA and IB were responsible for the acquisition of the samples. DA and MS for the analysis. DA, MS, CM, and AM in the interpretation of the data. DA drafted the first manuscript. MS, CM, IB, and AM revised it critically. MS, CM, and AM provided final approval for publication of the content. All authors contributed to the article and approved the submitted version.
Funding
This work was funded by the project ATLANTIDA (NORTE-01-0145-FEDER-000040), supported by the North Portugal Regional Operational Program (NORTE2020), under the PORTUGAL 2020 Partnership Agreement and through the European Regional Development Fund (ERDF). DA was supported by the Ph.D. grant with the reference
Attitude and perceptions towards COVID-19 among pregnant women in Singapore: A cross-sectional survey
Background: COVID-19 may predispose pregnant women to higher risks of severe disease and poorer neonatal outcomes. The psychological sequelae of this pandemic may pose a greater conundrum than its clinical aspects. It is currently unknown how pregnant women cope with this global pandemic and its ramifications. The aims of the study are to understand the attitudes and perceptions of non-infected pregnant women towards the COVID-19 outbreak in Singapore. Methods: An online cross-sectional survey of COVID-19 awareness among pregnant women attending antenatal clinics in Singapore was conducted. An internet link was provided to complete an online electronic survey on the Google platform using a quick response (QR) code on mobile devices. The online survey consists of 34 questions categorized into 4 main sections, namely 1) social demographics, 2) attitude on safe distancing measures, 3) precautionary practices and 4) perceptions of COVID-19. Results: A total of 167 survey responses were obtained over eight weeks from April to June 2020. The majority of women were aged ≤ 35 years (76%, n=127), were of Chinese ethnicity (55%, n=91), attained tertiary education and were not working as frontline staff. Using multiple linear regression models, Malay ethnicity (vs. Chinese, β 0.24; 95% CI 0.04, 0.44) was associated with a higher frequency of practicing social distancing. Malay women (β 0.48; 95% CI 0.16, 0.80) and those who worked as frontline staff sanitised their hands at higher frequencies. Age ≥ 36 years (vs. ≤ 30 years, β 0.24; 95% CI 0.01, 0.46), Malay (vs. Chinese, β 0.27; 95% CI 0.06, 0.48) and Indian ethnicity (vs. Chinese, β 0.41; 95% CI 0.02, 0.80), and attendance at a high-risk clinic (vs. general clinic, β 0.20; 95% CI 0.01, 0.39) were associated with a higher frequency of staying at home.
Conclusion: It is important for clinicians to render appropriate counselling and focused clarification on the effect of COVID-19 among pregnant women for psychological support and mental wellbeing.
Background
Coronavirus disease (COVID-19) is an infectious disease caused by the newly discovered severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first identified in Wuhan City, China, in December 2019 [1]. On 11 March 2020, the World Health Organization (WHO) declared the COVID-19 outbreak a global pandemic with exponential spread worldwide [2]. As of 4 August 2020, over 18 million people globally were affected by COVID-19, with over 700,000 deaths reported worldwide, and rising [3].
In Singapore, total confirmed cases exceed 53,000 with 27 deaths, based on the Ministry of Health's (MOH) report on 4 August 2020 [4].
The effects of SARS-CoV-2 in pregnancy were initially extrapolated from previous experience with SARS-CoV-1 and the Middle East respiratory syndrome-related coronavirus (MERS) [5][6]. However, SARS-CoV-2 has turned out to be far more infectious, albeit with lower mortality and similar morbidity in women of reproductive age [7]. The rapidly evolving pandemic over the past six months has given rise to multiple living guidelines for the management of COVID-19 in pregnancy from a range of professional bodies such as the Royal College of Obstetricians & Gynaecologists (RCOG), the American College of Obstetricians and Gynecologists (ACOG) and the Academy of Medicine in Singapore [8][9][10]. As our knowledge of COVID-19 increases, hospital recommendations on infection control, COVID-19 screening and isolation protocols change rapidly in accordance with the latest evidence. The physiological and immunological changes in pregnancy also make women more susceptible to severe illness from respiratory infections [11][12]. A recent Centers for Disease Control and Prevention (CDC) report demonstrated that pregnant women with COVID-19 are more likely to be hospitalised, admitted to the intensive care unit and receive mechanical ventilation, albeit with a similar risk of mortality compared to non-pregnant women [13].
Pregnancy itself poses logistical challenges and conundrums for obstetricians managing pregnant women with suspected or diagnosed COVID-19. The RCOG suggests that the COVID-19 pandemic increases the risk of perinatal anxiety, depression and domestic violence in pregnant women [8]. Hence, pregnant women deserve a more sensitive approach and mutual understanding from clinicians and their partners during this global pandemic. There are limited studies assessing attitudes and public perceptions towards the effect of COVID-19 among pregnant women. As the COVID-19 pandemic continues to intensify globally, it is timely that clinicians better understand the mentality of pregnant women and render appropriate counselling and focused clarification for support during the antenatal, intrapartum and postpartum periods.
Social media and information access in Singapore are readily available via the internet. Hence, assessing public awareness of COVID-19 using an online survey is realistic in both developing and developed countries with adequate resources for disseminating and receiving information. Herein, we report the results from a rapid online cross-sectional survey related to COVID-19 among pregnant women attending antenatal clinics in Singapore.
The survey aimed to 1) establish the baseline attitudes, practices and perceptions of pregnant women towards COVID-19 and 2) correlate socio-demographics with women's precautionary practices.
Methods
We conducted an online cross-sectional survey for pregnant women attending antenatal clinics in two large tertiary-referral hospitals in Singapore from April to June 2020. Approval for the study including waiver of informed consent was obtained from the Singhealth Centralised Institutional Review Board (CIRB 2020/2307).
Pregnant women attending antenatal clinics were provided with an internet link to complete an online electronic survey on Google platform using a quick response (QR) code on any mobile device with internet access. The survey was anonymous and could be completed in about 10 minutes. The online electronic survey was created using CHERRIES (Checklist for Reporting Results of Internet E-Surveys) [14] and the questions were designed by a group of senior obstetricians.
The survey was designed to capture general awareness of COVID-19 and perceived views on COVID-19, including social distancing measures, preferred mode of delivery, willingness to be separated from their child at birth and avoiding breastfeeding to minimize the risk of vertical neonatal transmission. We classified pregnant women attending the high-risk clinics based on obstetric indications determined by their clinicians, whereas low-risk pregnant women attended general clinics.
Responses to the questions were rated on different scales: 1) Yes, No, Not sure; 2) Not often, Occasionally, Often, Very often; or 3) Never, Rarely, Sometimes, Usually, Always. Respondents did not receive any incentive to complete the survey, and standard of care was not affected if they did not participate in the online survey. Respondents had to provide a response to every question to complete the survey. The electronic data were compiled and saved on a secured, password-protected website with no identifiable patient information available.
Women's characteristics and the distributions of their attitudes, practices and perceptions towards COVID-19 are presented as frequencies and percentages. Multiple linear regression analysis was performed to examine the main factors associated with women's precautionary practices among six independent socio-demographic variables: age (≤30, 31-35, ≥36 years), ethnicity (Chinese, Malay, Indian, others), education (primary or secondary, post-secondary, tertiary), frontline job (no, yes), history of miscarriage (no, yes) and type of antenatal clinic (general, high-risk). The scales of the dependent variables were treated in continuous form to increase the power of the analysis. Data are presented as β coefficients and 95% confidence intervals (CIs). Statistical analysis was performed using the IBM SPSS Statistics Package, version 20.0 (IBM Corp., Armonk, N.Y., USA).
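With dummy-coded categorical predictors, each β in such a model is an adjusted mean difference relative to the reference category. A simplified single-predictor sketch in plain Python on hypothetical scores (a normal approximation with z = 1.96 stands in for the t-based interval that SPSS reports):

```python
def ols_dummy_ci(scores_ref, scores_grp, z=1.96):
    """Beta and an approximate 95% CI for one 0/1 dummy predictor.

    Regressing the score on a single 0/1 indicator makes the coefficient
    equal to the group-mean difference; the pooled residual variance gives
    its standard error, and z = 1.96 approximates the t critical value.
    """
    n0, n1 = len(scores_ref), len(scores_grp)
    m0 = sum(scores_ref) / n0
    m1 = sum(scores_grp) / n1
    beta = m1 - m0
    ss = sum((x - m0) ** 2 for x in scores_ref) + \
         sum((x - m1) ** 2 for x in scores_grp)
    sigma2 = ss / (n0 + n1 - 2)              # pooled residual variance
    se = (sigma2 * (1 / n0 + 1 / n1)) ** 0.5  # SE of the coefficient
    return beta, (beta - z * se, beta + z * se)

# Hypothetical practice scores: reference group vs. comparison group.
beta, (lo, hi) = ols_dummy_ci([1.0, 2.0, 3.0], [3.0, 4.0, 5.0])
# beta = 2.0; the interval spans roughly 0.40 to 3.60
```

A full multivariable model with several dummies requires solving the normal equations jointly, but the interpretation of each coefficient as an adjusted difference versus the reference category is the same.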
Results
A total of 167 survey responses were obtained over eight weeks from April to June 2020. The clinical characteristics and demographics are presented in Table 1. Among the included women, the majority were aged ≤35 years (76%, n=127), were of Chinese ethnicity (55%, n=91), attained tertiary education (62%, n=104) and were not working as frontline staff (70%). In terms of obstetric history, most women conceived naturally (90%, n=149), were primiparous (51%, n=85), were in their third trimester of pregnancy (44%, n=74), had no history of miscarriage (80%, n=134) and were currently followed up in general clinics (75%, n=125). Table 2 and Table 3 show the distribution of participants' attitudes (Q11-17), precautionary practices (Q18-21) and perceptions (Q22-34) towards COVID-19 in pregnancy. One hundred and twenty-four women (74%) were worried or very worried about being infected with COVID-19 in pregnancy (Q23). Seventy-seven (46%) women remained neutral on whether pregnant women infected with COVID-19 are more likely to miscarry or go into pre-term labour (Q27). Seventy-eight (47%) women thought that there is a high risk of COVID-19 infection to their baby at the time of delivery if they were diagnosed with COVID-19 (Q25), and eighty-nine (53%) women would choose a caesarean section over a vaginal delivery if they were diagnosed with COVID-19 (Q30). After delivery, fifty-eight (35%) women preferred to breastfeed if they were diagnosed with COVID-19 (Q34). These questions did not show any association with sociodemographic factors (data not shown).
Discussion
To the best of our knowledge, our study is hitherto the first performed in a South East Asian population of pregnant women. Factors like race, religion, educational background and employment status can influence women's attitudes, practices and perceptions, especially in an affluent country like Singapore. Our survey showed that Malay pregnant women are likely to practice safe distancing and sanitise their hands at a higher frequency than Chinese women to minimise the spread of COVID-19. In addition, women attending high-risk clinics are more likely to stay at home than women attending general clinics.
Employed individuals who work in frontline services such as healthcare and hospitality have a lower tendency to stay home for social distancing, possibly driven by their more sociable or outgoing characteristics compared to those who do not work in frontline roles. Conversely, our study also showed that employed individuals with frontline jobs are more likely to practice hand hygiene to reduce the risk of infection. In our study, women with a history of miscarriage had a lower tendency to stay home to maintain social distancing (Q19, β: -0.22), suggesting that obstetric experience did not make women more cautious about practicing social distancing to protect themselves. The same inverse associations were observed for Q18, Q20 and Q21 without statistical significance.
Singapore's national disease outbreak response framework has four statuses, namely Green, Yellow, Orange and Red; Singapore is currently at Orange, which means that the disease is severe but has not spread widely and is being contained [16]. The Singapore government implemented a 'circuit breaker' in different phases, akin to the lockdown periods in other countries, to curb the community spread of COVID-19 [17]. Safety measures implemented include staying mostly indoors and going outdoors only when necessary, practising social distancing of at least one metre apart, wearing surgical masks in public places and adopting good hand sanitation practices to reduce the risk of community spread of COVID-19. Hence, pregnant women should be appropriately educated on preventative measures to reduce the severity of COVID-19 associated illness. Pregnant women should also avoid missing prenatal appointments if well and limit interactions with others to reduce the risk of transmission. Symptomatic women should be urged to be tested early for COVID-19 by nasopharyngeal or oropharyngeal swabs and practice self-isolation to reduce the risk of vertical transmission [18][19].
Yassa et al studied the attitudes, concerns and knowledge of Turkish pregnant women towards COVID-19 from 30 weeks' gestation onwards [20]; Turkey was then one of the most affected countries, with over 20,000 cases and 425 deaths in April 2020 [21]. They showed that about 80% of women felt vulnerable towards the outbreak, 45% of women were confused or doubtful about the mode of delivery, and 50% were not sure if breastfeeding was safe during the pandemic [20]. This is similar to our findings, where 74% of women were worried about being infected with COVID-19, 53% of women would choose a caesarean section over a vaginal delivery, and only 35% of women would choose to breastfeed if they were diagnosed with COVID-19. These views reflect the vulnerability of pregnant women despite differences in race or culture, as pregnant women want the best outcome for themselves and to minimize the risk of vertical transmission to their baby.
In our study, 46% of pregnant women believed they are more likely to go into pre-term labour when infected with COVID-19. Di Mascio et al showed that 41.1% of pregnant women with COVID-19 had preterm birth before 37 weeks' gestation; however, that study did not distinguish between spontaneous and iatrogenic preterm birth [22]. A systematic review by A. Khalil et al also showed an 18.4% increase in iatrogenic preterm births before 37 weeks, as these women were ill enough to require early caesarean deliveries [7]. This emphasizes the importance of imparting knowledge and educating women to avoid unnecessary anxieties from non-evidence-based perceptions.
In our study, 46% of pregnant women also believed they are more likely to miscarry when infected with COVID-19. A systematic review by Zaigham et al did not report any adverse perinatal outcomes [23]. Although results from the SARS epidemic did not suggest an increased risk of miscarriage or congenital anomalies, more data are required before conclusions can be made on the risk of miscarriage associated with COVID-19 infection [24].
In our study, almost three in four women (74%) were worried or very worried about being infected with COVID-19 in pregnancy. Durankus et al showed that pregnant women scored higher on the Edinburgh Postnatal Depression Scale (EPDS) compared to a control group [25]. It is understandable for pregnant women to be anxious, and this can be associated with a higher risk of depression [26]. This highlights the importance of providing psychosocial support to this vulnerable group of pregnant women. Clinicians should work in tandem with clinical psychologists and psychiatrists in a multi-disciplinary setting. The care of pregnant women should be tailored individually for the mental health of women and their babies.
Most cases of COVID-19 show evidence of human-to-human transmission, where the virus appears to spread through respiratory, fomite or faecal routes [27][28]. There is also emerging opinion that the fetus may be exposed during pregnancy. Perinatal infection may occur, but its true incidence remains unknown. The likelihood of vertical transmission is low based on the United Kingdom Obstetric Surveillance System (UKOSS) interim study, in which six babies (2.5%) had a positive nasopharyngeal swab for SARS-CoV-2 within 12 hours of birth in severely affected hospitalised women [29]. Hence, the risk of vertical transmission in mild or asymptomatic patients is likely to be lower than that.
A case series published by Chen et al tested amniotic fluid, cord blood, neonatal throat swabs and breast milk samples from COVID-19 infected mothers, and all samples tested negative for the virus [30]. Conversely, two reported cases of possible vertical transmission showed evidence of immunoglobulin M (IgM) for SARS-CoV-2 in the neonatal serum [31][32]. Although viral reverse transcriptase-polymerase chain reaction (RT-PCR) results were negative in the large majority of reported studies, the published data are limited by small cohort numbers, the limited sensitivity and specificity of swab tests and the rapid evolution of COVID-19 infection [33][34][35][36]. Hence, more data are needed about the risk of vertical transmission before definitive conclusions can be made.
The mode of delivery should be discussed adequately with pregnant women, taking into consideration their preferences and any obstetric indications. In our study, 53% of women would choose to have a caesarean section over a vaginal delivery if they were diagnosed with COVID-19. A. Khalil et al showed that nearly half of pregnant women infected with COVID-19 had caesarean deliveries [7]. As there is no convincing evidence of vertical transmission, vaginal delivery is not contraindicated in patients with COVID-19 [8,9]. Thus, caesarean section is preferred over vaginal delivery in the face of maternal deterioration and fetal compromise where delivery is imminent. However, logistical issues can arise from the in-hospital transfer of patients to the labour ward or the availability of a negative-pressure operating theatre in which to perform a caesarean section while minimizing the risk of transmission. Hence, clinicians should counsel women on the appropriate mode of delivery, as there is a lack of data and uncertainty surrounding the risk of perinatal transmission during vaginal deliveries.
In our study, only 35% of pregnant women would choose to breastfeed if they were diagnosed with COVID-19. There is limited data to guide the postnatal management of babies of mothers who tested positive for COVID-19 in the third trimester of pregnancy. Currently, the possibility of infection from breast milk remains uncertain, although there is recent evidence to suggest a small risk of transmission through breastfeeding [37][38][39]. As breastfeeding requires close contact, direct breastfeeding may be of concern in infected mothers. Hence, infected mothers should be advised to wear surgical masks and clean their breasts before expressing milk via breast pumps to bottle-feed their neonates, to reduce the risk of neonatal transmission. Precautionary separation of mother and child is debatable, as it causes loss of physical bonding and emotional attachment, which can have a negative psychological impact on infected women.
We chose to perform an online survey as this is a rapid and convenient mode of administration. Furthermore, we used CHERRIES (Checklist for Reporting Results of Internet E-Surveys) to ensure the quality of our web-based survey [14]. Limitations of our study include the small sample size and the lack of internal consistency validation of the questions. Despite the small sample size, the data collected are likely representative of our local population, as the two large public hospitals make up more than half of the obstetric load in Singapore. In addition, our findings may be influenced by possible selection bias, because participants needed a mobile device with applications to scan the QR code to access the survey.
Ever since the WHO declared COVID-19 a global pandemic, the world has seen an exponentially rising number of cases and unprecedented death rates. Until a vaccine is found, herculean efforts rest on containing community spread of COVID-19 through means like testing suspected cases, practising social distancing and maintaining good personal hygiene [40][41][42].
Conclusion
As much of COVID-19 remains hitherto unknown, current opinions regarding management of COVID-19-positive women may change with the input of new knowledge. The physical burden of pregnancy makes it psychologically and emotionally challenging for vulnerable pregnant women. Knowledge gained from our cross-sectional online survey can guide clinicians to communicate better with pregnant women. Our study highlights the importance for clinicians to render appropriate counselling and focused clarification on the effect of COVID-19 among pregnant women, for psychological support and mental wellbeing.
|
2020-09-10T10:16:40.745Z
|
2020-09-03T00:00:00.000
|
{
"year": 2020,
"sha1": "d89cc276c2faa0563e50935a06d9854d81dc2f4a",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-67288/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "68341f366e9ccdd2762ae663ac00fe92fd4a5e41",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
255901029
|
pes2o/s2orc
|
v3-fos-license
|
Do the globalization and imports of capital goods from EU, US and China determine the use of renewable energy in developing countries?
Abstract Developing countries rely heavily on imports of capital goods to spur economic growth. When the economy grows, energy consumption rises, adversely impacting climate change. Given the low share of renewable energy in total energy consumption, developing nations confront a difficult task in achieving the SDG targets related to increasing the renewable energy share and access to affordable, reliable, and modern energy. Finding solutions to increase renewable energy usage is critical. International trade is an unavoidable part of development, prompting us to consider the impact of imports on renewable energy usage. This study explores the effects of imports of capital goods from China, the EU and the USA on renewable energy consumption in developing countries by using panel data from 20 countries spanning 2000–2018. It is found that capital goods imported from China into developing countries negatively impact renewable energy consumption, while imports from the EU have a positive impact; in the case of the US, the effect is negative but insignificant. The role of economic, social, and political globalization is explored, and all three types of globalization are found to be positively and significantly linked with renewable energy consumption. Thus, this study recommends that trade policies complement domestic efforts toward increasing renewable energy production and consumption in developing countries.
Introduction
For the past few decades, economic growth-oriented human activities seem to be a menace to the established ecosystem in developed and developing countries. Global warming and climate change are serious concerns, and a key subject of debate among researchers, policymakers, and public and private entities around the globe is identifying the primary threats that climate change and global warming pose to the ecosystem and environment [1,2]. According to the Fifth Assessment Report issued by the Intergovernmental Panel on Climate Change, human activities (growth in the economy and pollution led by the industrial revolution) are the primary sources of greenhouse gas (GHG) emissions. These emissions are the primary cause of climate change, global warming, rising sea levels, melting ice caps, and floods [3,4].
The overall share of pollution emissions in developing countries is considerably higher than in developed countries. The International Energy Agency (IEA, 2018) confirms in its report that carbon-intense energy resources and economic growth have contributed to the rising levels of ecological footprint in both developed and developing countries. Moreover, the literature reports multiple but related channels, such as globalization [5,6], energy usage (Ang, 2007), R&D investment (Zhu et al. 2016), trade expansion (Faiz-Ur-Rehman et al. 2007), population growth, and urbanization [7]; (Shahbaz and Sinha, 2019), that have played a vital role in the distortion of the established ecosystem in the preceding years.
Countries are increasingly adopting technologies and policies to reduce carbon emissions and remove GHGs from the atmosphere. Most of these policies take the form of interventions in domestic policies and local regulations, and of enhancing consumer choices by encouraging environment-friendly production and consumption. More recently, the literature has highlighted the importance of regional integration, particularly international trade, in improving progress towards energy transition and reducing carbon emissions [6,8]. Globalization is one factor directly or indirectly linked with human beings and their interactions in social, economic, and political contexts. As the world economy grows, it is also becoming more integrated over time, leading to industrialization and urbanization and further resulting in environmental degradation [5,9]. Many studies [10][11][12][13]; (Shahbaz et al. 2015) survey in detail the impact of globalization on renewable energy consumption. These studies confirm that economic growth and globalization lead to an increase in the demand for non-renewable energy consumption, which directly impacts environmental quality [14]. Contrary to the prevalent literature, globalization positively affects renewable energy consumption if financial investment encourages research and development activities in green energy and environmentally friendly technologies.
Developing countries are major importers of capital goods, which is essential to accelerate industrialization. In developing countries, economic development heavily depends on how integrated the country is with the international market in terms of international trade (Ahmad et al. 2021a) [15]. It is well established that economic growth is bound to increase energy consumption and adversely impact climate change. However, if growing energy needs are fulfilled with renewable energy, then the negative effects of economic growth on climate change can be decreased. In this regard, capital imports can play an important role as they are complementary inputs, and economies importing capital based on green energies also experience stable growth. Imports of green-energy-based capital enhance the penetration of green energy consumption and help solve energy crisis issues [10].
China has pledged to increase its share of renewable energy to 50 percent of total power generation by 2030 [16]. Other countries are increasing their efforts to raise the share of renewable energy owing to its positive impact on the economy. Various studies survey in detail the literature on the relationship between globalization, energy use, trade, and GDP growth. The results show that in the long term, globalization, energy usage, trade, and GDP growth impact the ecological footprint. Similarly, the findings on short-run interactions also reveal that globalization, energy usage, trade, and GDP growth have positive linkages [17].
Technological advancement has played an important part in the energy transition around the globe. The countries that adopted this change early on have experienced rapid progress. The role of R&D in the transition from traditional or nonrenewable sources to contemporary and renewable energy sources is becoming increasingly important. Governments play an important role in bringing out clean and green production-oriented energy policies with the leverage of R&D expenditure. Investment in R&D promotes renewable energy production and boosts innovation in energy conservation, causing overall energy consumption to decrease [18]. Thus, R&D investment tends to reduce the dependency on natural resources by adopting efficient technologies that help reduce emissions and environmental degradation [19,20]. Evidence from Germany indicates that the electricity generated through innovation in photovoltaic and wind energy enhanced growth in the energy sector [21]. Another study from China showed that wind energy and solar PV technology in the Chinese renewable energy industry enhanced energy growth and resulted in higher export growth [22]. The transformation of the energy generation structure, shifting from nonrenewable to renewable energy sources, is largely due to the development of R&D that replaced the use of limited resources with sustainable sources [23]. On the contrary, with enhanced economic growth and trade, decreasing marginal returns to R&D and innovation may lead to increased energy consumption in the long run. For example, the increased accumulation of knowledge in existing technologies over time makes new research in new technologies challenging, resulting in declining returns to R&D [24]. Hence, the development of innovation and R&D tends to reduce non-renewable energy consumption more than renewable energy consumption [25].
Technological advancement supplemented with globalization results in research and development and technical knowledge, and enhances a country's growth by transitioning to renewable energy consumption and reducing environmental deterioration [26][27][28]. In high-income countries, globalization can speed up economic activities in various ways. For example, technological innovation gives rise to renewable energy sources that not only boost production but also reduce the per-unit cost of production from solar and wind forms of renewable energy [29]. This gives even more reason to adopt renewable energy sources to avoid the adverse effects of fossil-fuel-based production, which increases the cost of production as well as prices [30,31]. However, such advancements require effective policies to incentivize the deployment of renewable energy through effective governance [32].
Conventional globalization entails only foreign direct investment (FDI) and trade openness, but the contemporary notion encompasses a composite index of globalization, i.e. the economic, social and political aspects of globalization [33]. All these globalization measures are expected to impact renewable energy consumption positively. In economic globalization, FDI inflows in high-income countries occur along with the transfer of knowledge and capital, sound management practices, and technology and innovative sources that enhance efficiency in the placement of renewable energy technologies [34]. Similarly, trade liberalization may impact renewable energy consumption positively as it facilitates economies with the availability of advanced technologies to shift from dirty to clean energy usage. However, anti-globalization moves in the form of tariff impositions affect bilateral trade [35].
While the movement of goods across countries might increase energy consumption, the inflow of FDI may promote the conversion to green energy sources [36]. The social aspect of globalization consists of the increase in exchange of information due to cultural proximity, which may enlighten countries and enhance the adoption of sustainable practices. The political aspect of globalization tends to reduce the consumption of nonrenewable energy through sanctions, regulatory frameworks and strict policies for environmental externalities [37]. The more a country is enlisted in global climate treaties and agreements, the higher the investments in renewable energy technologies and the better the living standards.
The recent changes in consumption and production of renewable energy point towards two major elements, i.e. oil and electricity, that are largely used by capital goods for the energy sector and otherwise. Therefore, the import of capital goods is very critical for the development of the energy sector, as following the import of capital goods, energy inputs specific to that imported capital can be used [38]. Imports of capital goods can thus potentially reduce imports of products like oil and coal. While oil products are required for existing machinery to run the existing production processes, the capital goods may produce more and cleaner energy, thus boosting the economy in a sustainable manner. Countries that import capital goods and turn them into goods for power development in cleaner ways are better positioned to increase their competitiveness [39].
It is more important for developing countries to improve their technological efficiency and enhance their access to the latest green technologies through imports. Since trade liberalization and financial independence, accompanied by international environmental rules, incite economies towards the adoption of renewable energy sources [40], the margin for improvement is considerably higher in developing countries. Moreover, effective policy instruments such as tax advantages, grants and subsidies, education, skill enhancement, quotas and information on renewable energy consumption can also spur growth in renewable energy production and consumption [41]. If the same investment could be directed towards investment in and imports of capital that consumes renewable energy, the import of capital goods could increase renewable energy production and consumption and replace nonrenewable energy sources. Relative to developing countries, high-income countries are better at adopting renewable energy technologies and affecting consumption patterns simply because they have fewer constraints related to investments [42,43]. Also, tariffs are already very low in developed countries; developing countries, however, need to lower tariffs on capital goods specifically related to renewable energy production and consumption. This is expected to increase capital goods imports, reducing the problem of asymmetric technological advancement in developing countries and further increasing the demand for renewable energy inputs and consumption [44].
Energy use and per capita GDP are highly correlated in the long run. This correlation has lent support to the claim of "resource economics" that energy is an essential input in the economy, but mainstream arguments also suggest that energy use is the result of higher income. Controlling for capital intensity, GDP growth drives energy use in the long run [45]. This also means that a rapidly growing economy may worsen environmental conditions due to increased greenhouse gas emissions, but the increased income may also offset those effects through investment in technological innovation for renewable energy sources [46]. Higher income increases the demand for green products and forces producers to adopt clean energy sources, increasing renewable energy consumption.
This study contributes to the literature on the impact of international trade or globalization on renewable energy consumption and carbon emissions by focusing on the import of capital goods instead of overall trade. It is well established that there are short-run and long-run relationships between trade, renewable energy and carbon productivity [47,48], but the literature is silent on the role of capital imports in the transition towards renewable energy. Capital imports lead to more economic output, but the production process requires various inputs, including energy inputs. The energy requirement, whether renewable or primary, depends on the type of capital imported. There is no classification available to differentiate capital or machinery imports according to their energy requirements, but in this study we use the origin of imports as an indicator to differentiate capital imports. An attempt has been made to investigate the impact of globalization, research and development (R&D), and the import of capital goods from the USA, China and EU countries on renewable energy consumption in the case of developing countries by estimating a fixed-effects model.
Research method
The present study uses the panel fixed effects technique to analyze the impact of imports of capital goods from China, the EU and the US, along with economic, social and political globalization, on renewable energy consumption, controlling for R&D and GDP. The fixed effects model is well behaved in a way that it captures the cross-country differences [49]. For this purpose, this study uses a dummy variable for each country because each country has a different level of energy consumption [50,51]. Thus, we can write the fixed effects model as

lnRE_it = b_i + β1 lnEG_it + β2 lnSG_it + β3 lnPG_it + β4 lnICHN_it + β5 lnIEU_it + β6 lnIUS_it + β7 lnGPC_it + β8 lnR&D_it + ε_it

where RE is renewable energy consumption, EG is economic globalization, SG is social globalization, PG is political globalization, ICHN is imports of capital goods from China, IEU is imports of capital goods from the European Union, and IUS is imports of capital goods from the US. GPC is GDP per capita, and R&D is research and development expenditures in the model. The independent variables used in the analysis are reported in multiple studies, and their channels to renewable energy consumption, such as globalization, are highlighted in [5,6,52].
The b_i terms serve the purpose of capturing the cross-country differences for all countries included in the sample, and ln indicates that the variables are in log form. However, it may not always be appropriate to capture the cross-country differences through separate intercepts. In that case, we need to include an error term along with a common intercept. This approach is suggested by the proponents of the random effects model, or error components model. It has the feature of identifying the intercept separately for each country; that is, the intercept is random with a fixed mean and a random component having mean zero and variance σ².
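The country-dummy specification above is numerically equivalent to the "within" (demeaning) estimator: subtracting each country's mean removes the b_i intercepts. The sketch below is illustrative only, using synthetic data and a single regressor, and is not the authors' estimation code.

```python
# Within (fixed effects) estimator for a one-regressor panel model
# y_it = b_i + beta * x_it + e_it. Demeaning by country removes b_i.
# Synthetic data; in the paper y would be the log renewable energy
# share and x one regressor such as log capital imports from the EU.

def within_estimator(panel):
    """panel: dict mapping country -> list of (x, y) observations."""
    num = den = 0.0
    for obs in panel.values():
        xbar = sum(x for x, _ in obs) / len(obs)
        ybar = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - xbar) * (y - ybar)
            den += (x - xbar) ** 2
    return num / den

# Three countries with different intercepts b_i and a true beta of 0.5
beta_true = 0.5
panel = {
    c: [(t, b + beta_true * t) for t in range(5)]
    for c, b in [("A", 10.0), ("B", -3.0), ("C", 7.0)]
}
print(within_estimator(panel))  # -> 0.5
```

With noise added, the estimate would only approximate beta; a real application with many regressors would use a panel estimation library rather than this hand-rolled version.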
Data and variable description
The present study uses data on imports of capital goods from China, the EU and the US, taken from UN COMTRADE. Further, economic, social, and political globalization are used to check the impact of globalization on renewable energy in developing countries (a list of countries and detailed descriptive statistics are provided in Appendix A, Tables A1-A4). The KOF globalization index, developed by [53], is utilized. Economic globalization is measured by trade flows with other countries, FDI, and portfolio investment, and by restrictions on these inflows and outflows. The social globalization index is measured by personal contact, information flows, and cultural nearness. Political globalization is measured by the number of embassies in other countries, membership in international organizations, participation in UN Security Council missions, and the number of treaties signed with other countries. The data on GDP per capita and R&D expenditures are taken from the WDI. The data on renewable and non-renewable energy are taken from the American Energy Agency, and this study uses data spanning the period 2000-2018.
Figure 1 shows that research and development expenditures have a U-shaped relationship with the ratio of renewable to primary energy consumption. The U-shaped relationship indicates that only after a certain level do research and development expenditures positively affect renewable energy consumption relative to primary energy consumption. This is likely because renewable energy consumption requires at least a certain minimum capacity of the country to implement efficient renewable energy solutions in its own local context. Therefore, countries with higher research and development expenditures have the capacity to excel in renewable energy consumption relative to primary energy. However, this does not tell us how the ratio of renewable to primary energy consumption changes over time if there is an increase in research and development expenditures.
Figure 2 shows the relationship between capital imports and the ratio of renewable to primary energy consumption. In general, higher capital imports lead to more economic activity, which is bound to increase energy consumption. However, the impact on renewable energy relative to primary energy consumption is less explored. The figure shows that Azerbaijan and Belarus are two outliers; if they are excluded, the relationship becomes vague. This is probably because the type of capital imports determines its impact on the composition of energy consumption. One factor that can be used to differentiate the type of capital imported is the origin country of the capital imports. For instance, countries that spend more on renewable energy projects are likely to export capital that consumes more renewable energy relative to other energy types.
Table 1 gives the summary statistics of the data used to estimate the fixed effects model. The sample countries import a higher amount of capital on average from EU countries, with a relatively low standard deviation. The average capital imports from China are second to the EU, with the highest standard deviation. The average capital imports from the US are small compared to those from China and the EU. The economic, social and political globalization indices range from 1 to 100, where a higher value indicates greater globalization. Our sample countries' average political globalization index is higher than their economic and social globalization, indicating they are more politically influential.
Results and discussion
Before moving to the estimation process, the panels are tested for cross-sectional independence. The null hypothesis of the cross-sectional independence test (Pesaran, 2003) is that the panel is cross-sectionally independent. Hence, it is evident from the result in Table 2 that the panel is cross-sectionally independent, because we failed to reject the null hypothesis. The key reason for cross-sectional independence is that the selected countries do not belong to a single economic block and have different economic relationships with the three major economies, i.e. China, the EU and the US.
As the panels are cross-sectionally independent, we applied the Levin-Lin-Chu unit root test with intercept, and with both intercept and trend. The results of the unit root tests are reported in Table 3. They indicate that all variables are stationary at level except R&D and GDP per capita. Further, this study applied a causality test to check whether the independent variables cause the dependent variable. The results of the Granger causality test are provided in Table 4. The results reveal that all independent variables have a relationship with the dependent variable (renewable energy use).
Table 5 provides the coefficients and t-values of the regression estimating how capital goods imported from the US, China and EU countries affect the ratio of renewable to non-renewable energy use in developing countries. A fixed-effects model for a group of 20 countries comprising 360 observations is regressed. We used the panel fixed model because the countries in the sample are not homogeneous; the fixed effects model is well-behaved and captures the cross-country and heterogeneous differences. We apply the Hausman test, which confirms that the fixed effects model is more appropriate, as the p-value is < 0.05. The null hypothesis of the Hausman test is that the random effects model is appropriate [54], so rejecting the null hypothesis means that the fixed effects model is appropriate.
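In the single-coefficient case, the Hausman statistic reduces to a simple ratio, which makes the decision rule described above easy to see. The coefficient and variance values below are hypothetical, chosen only to show the mechanics, and are not values from Table 5.

```python
# Scalar sketch of the Hausman test: H = (b_FE - b_RE)^2 / (V_FE - V_RE),
# compared to a chi-square critical value with 1 degree of freedom.
# All inputs here are made-up illustrative values.

def hausman_scalar(b_fe, b_re, var_fe, var_re):
    return (b_fe - b_re) ** 2 / (var_fe - var_re)

H = hausman_scalar(b_fe=0.06, b_re=0.02, var_fe=0.0004, var_re=0.0001)
crit_1df_5pct = 3.841  # chi-square(1) critical value at the 5% level
print(H > crit_1df_5pct)  # True: reject RE, prefer the fixed effects model
```

In the multi-coefficient case the differences and variances become vectors and matrices, and the statistic is a quadratic form, but the decision rule is the same.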
Developing countries have a significant share of capital goods in total imports. There is a trade-off between economic growth and climate change: energy consumption plays a vital role in speeding up the industrialization process, which results in adverse effects on climate (Ahmad et al. 2021a) [16]. The regression results suggest that importing capital goods into developing countries from three different parts of the world significantly affects renewable energy. Capital goods imported from China to developing countries negatively and significantly affect renewable energy consumption: a 1% increase in capital imports from China decreases the ratio of renewable to nonrenewable energy use by 0.03%. On the other hand, interestingly, the import of capital from EU countries increases the share of renewable energy consumption, as the empirical results indicate that a 1% increase in imports of capital goods causes a 0.06% increase in the renewable energy consumption share. Further, it is found that the impact of imports from the US on developing countries is negative but insignificant. These empirical findings are similar to [55], suggesting that developing countries should import more capital goods from the EU region. The positive impact of imports of capital goods from the EU on renewable energy may be due to the technology embedded in these goods.

Globalization has been linked directly or indirectly with human beings socially, economically, and politically. Globalization in all three contexts (social, political, and economic) is positively and significantly linked with renewable energy use. Several studies [23,25-27] investigated the impact of globalization along with technological development on renewable to nonrenewable energy consumption. Globalization accompanied by technological expansion results in the development of innovation and R&D, which significantly increases the share of renewable to non-renewable energy consumption. Regression results show that a 1% increase in economic, social, and political globalization causes, respectively, a 0.303, 0.41, and 0.45 percent increase in the share of renewable energy use. [56] and [57] in their respective studies established the same finding: that in the process of globalization, if financial investments are made in clean and green technologies, then the share of renewable energy consumption will increase. Further, financial investment in environmentally friendly energies positively impacts human development through reduced carbon emissions.
For the model with the renewable to non-renewable energy ratio, a 1% increase in GDP per capita decreases renewable energy consumption by 0.09%. To accelerate the economic growth process, trade and industrialization are important, and in developing countries the process of globalization facilitates trading activities, which increases fossil-fuel-based energy consumption and also intensifies CO2 emissions (Ahmad et al. 2021a). Moreover, most of the financial investments in developing countries boost economic activities at the cost of environmental degradation [58][59][60].
Regression results reveal that research and development expenditures have an insignificant impact on the renewable energy consumption share, meaning that investment in research and development activities in these developing countries is not enough to enhance the renewable energy share.
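Because all variables enter in logs, each coefficient reported above can be read as an elasticity, so predicted percentage changes compose additively. A small sketch using the reported point estimates (the scenario changes are illustrative, and the dictionary keys are made-up names):

```python
# Log-log coefficients read as elasticities (point estimates from the text)
elasticities = {
    "imports_china": -0.03,
    "imports_eu": 0.06,
    "econ_globalization": 0.303,
    "social_globalization": 0.41,
    "political_globalization": 0.45,
    "gdp_per_capita": -0.09,
}

def predicted_change(pct_changes):
    """Approximate % change in the renewable energy share, holding
    all regressors not listed in pct_changes fixed."""
    return sum(elasticities[k] * v for k, v in pct_changes.items())

# e.g. a 10% rise in EU capital imports, everything else constant:
print(round(predicted_change({"imports_eu": 10}), 4))  # -> 0.6
```

This is a first-order approximation around the estimated model and ignores the insignificance of some coefficients (e.g. US imports and R&D), so it is only a back-of-envelope reading of the table.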
Conclusion
One of the Sustainable Development Goals (SDGs) of the United Nations is to increase the use of renewable energy in energy consumption. Given the existing share of renewable energy in total energy consumption, developing countries confront a difficult task in achieving the targets related to renewable energy and sustainable sources of energy and consumption under the 2030 SDGs. Finding solutions to increase renewable energy usage is critical. International trade is an unavoidable part of globalization, prompting us to consider the impact of imports on renewable energy usage.
The prime objective of this study is to explore the effects of imports of capital goods from China, the EU and the USA on renewable energy consumption in developing countries, using panel data of 20 countries spanning the period 2000-2018 and controlling for R&D expenditures, globalization, and GDP per capita. A fixed-effects model found that importing capital goods into developing countries from the three parts of the world affects renewable energy differently. Capital goods imported from China into developing countries have a negative impact on renewable energy consumption, while imports from the EU have a positive impact. In the case of the US, the effect is negative but insignificant. Economic, social, and political globalization are positively and significantly linked with renewable energy use, while GDP per capita decreases renewable energy consumption. These findings underline the vital importance of trade policy in the context of renewable to non-renewable energy consumption in an economy. Capital imports from different parts of the world might become crucial for increasing renewable energy usage. The study also shows that, based on the empirical analysis, industrialization and international trade should be regarded as complementary when creating developing countries' renewable energy policies. Our findings have a few novel and practical policy implications. Firstly, renewable energy production and consumption in developing nations should be increased, as this is a viable alternative energy source that addresses environmental externalities while not impeding economic advancement. This may be accomplished by stimulating renewable infrastructure investments and innovative product imports. Secondly, in developing countries, government support, encouragement and financial facilitation in the form of incentives, such as investment subsidies, tax breaks, and refunds for producing and importing renewable energy, are essential.
In this regard, the role of EXIM (Export-Import) banks is important in facilitating the import of capital or machinery that is environmentally friendly. Moreover, many countries are already implementing concessionary finance schemes for importers of capital or machinery to encourage investment. These concessionary finance schemes should provide additional concessions for capital investment that is environmentally friendly.
Thirdly, officials and economists may take appropriate steps to raise awareness about the negative implications of fossil fuel use and the advantages of renewable energy sources. Similarly, governments and decision-making bodies may create customer-friendly regulations for industries and businesses to make renewable energy more accessible to the market, and public-private partnerships to stimulate renewable energy use. Finally, trade should be seen as a critical conduit for increasing renewable energy consumption through purchasing renewable energy technology. Policymakers can then enact policies to increase the scope of commerce necessary for advanced technology transfer in developing countries.
This study has established that the source of capital imports matters in terms of affecting renewable energy consumption in developing countries. However, future research should focus on finding the specific reasons for this phenomenon: whether it is due to the technology embedded in capital goods imported from advanced countries that takes renewable energy as an input, or to their durability and efficiency. Also, more recent events such as the Russia-Ukraine conflict can be analyzed in the context of high energy costs and supply chain issues affecting renewable energy consumption.
Figure 1 .
Figure 1. Research and development expenditures and ratio of renewable to primary energy consumption.
Figure 2 .
Figure 2. Capital imports and ratio of renewable to primary energy consumption.
Table 5 .
Results of fixed effects regression.
Table A2 .
Social globalization and political globalization.
Table A3 .
GDP Per capita and Capital Imports from EU.
Table A4 .
Capital imports from China and US.
|
2023-01-17T19:20:55.986Z
|
2023-01-11T00:00:00.000
|
{
"year": 2023,
"sha1": "f0afa699ad531efc7d0b00bd47915c81b80adf48",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/17583004.2023.2165162",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "c63bb6f9a8e68bc44ce832ee3cc5cdd0e0d36f72",
"s2fieldsofstudy": [
"Economics",
"Environmental Science"
],
"extfieldsofstudy": []
}
|
1384809
|
pes2o/s2orc
|
v3-fos-license
|
A Review of the Strain Diversity and Pathogenesis of Chicken Astrovirus
Although a relatively recently emerged virus, identified only in 2004 as a separate species of avian astrovirus, chicken astrovirus (CAstV) has been associated with poor growth of broiler flocks, enteritis and diarrhea and is a candidate pathogen in cases of runting stunting syndrome. More recently CAstV has been implicated in cases of two other diseases of broilers as the sole etiological agent, namely severe kidney disease of young broilers with visceral gout and the “White Chicks” hatchery disease. Examination of the strains of CAstV associated with the two latter diseases reveals they are closely related genetically. This review will discuss the pathogenesis of CAstV in relation to strain diversity and the effects of vertical versus horizontal transmission, virus load, co-infections and age of bird at infection, all factors that may impact upon disease severity.
Introduction
Chicken astrovirus (CAstV) is a recently emerged virus and the most recently identified member of the avian astroviruses. With a shared familial morphology and genomic arrangement, like other astroviruses CAstV is a small, round, nonenveloped virus typically <35 nm in diameter with a positive sensed, single-stranded RNA genome that, being close to 7.5 kb in length [1], is within the astrovirus family genome size range of 6.2 kb (human) to 7.7 kb (duck) [2]. Astroviruses primarily cause enteric infections and infect many animal species including humans, where they are a leading cause of infant diarrhea. Madeley and Cosgrove termed these viruses astroviruses, from the Greek word astron, meaning "stars", due to the protruding capsid spikes that give the characteristic star-like appearance under electron microscopy [3]. Astrovirus species infecting mammals are classified into the genus Mamastrovirus. The other major group of astroviruses that have been studied are the avian astroviruses, particularly those that infect commercial flocks although they are also detected in wild birds, and are classified in the genus Avastrovirus, which along with Mamastrovirus make up the two genera of the family Astroviridae.
Historically astroviruses have been named according to the species they infect, e.g., turkey astrovirus (TAstV) although species cross-over has been observed for some astroviruses, e.g., astroviruses of chickens have been detected in turkeys [4]. Officially, there are three different astrovirus species that currently comprise the Avastrovirus genus according to the International Committee on Taxonomy of Viruses, namely Avastrovirus 1, 2 and 3 (Table 1), in keeping with the naming of mammalian astroviruses as Mamastrovirus 1-19. Although the first report of avian disease caused by astroviruses was in ducklings in 1965 [5], the virus responsible was only recognized as an astrovirus in the mid-1980s by electron microscopy [6] and is referred to as duck hepatitis virus 2 (DHV-2) in older papers and as duck astrovirus serotype I (DAstV-1) more recently [7]. A second astrovirus of ducks, originally called DHV-3 also causes hepatitis in ducklings [8]. It is now known as DAstV-2 and is antigenically and genetically distinct from DAstV-1 [9]. Two astrovirus species were isolated from turkeys: the first was identified in the UK in 1980 and called TAstV serotype 1 (TAstV-1) [10] and the second species, TAstV-2, was reported in 2000 [11]. There are two astrovirus species that infect chickens and both are associated with growth problems, enteritis and kidney lesions in young chickens. The first, avian nephritis virus (ANV), was isolated from a one week old, normal broiler chick in 1976 [12] and was originally thought to be a picornavirus [13], but later identified as an astrovirus [14]. The second astrovirus of chickens is called chicken astrovirus (CAstV) and is a separate species from ANV. 
Recent research has revealed that CAstV infections are highly common in broiler chickens and have strong associations with diseases of young birds and hatchery disease, that is disease which occurs prior to or during hatch, which will be discussed within this review in the context of genetic variation.
Identification and Genomic Structure of Chicken Astrovirus
In 2004 three isolates were cultured from two submissions of broiler chicks with runting stunting syndrome (RSS) and one submission from a flock with uneven growth. The isolates were antigenically identical [15]. Genetic sequencing of a 320 base pair reverse transcription (RT)-PCR amplicon made from RNA extracted from one of the isolates showed the agent was related to TAstV and ANV but was from a separate species also identified as an astrovirus and termed "chicken astrovirus" [15]. Prior to its molecular identification in 2004, CAstV had been described as an "enterovirus-like virus" (ELV) due to sharing similar characteristics to viruses of the genus Enterovirus of the family Picornaviridae [16,17].
Sequencing of the CAstV genome has shown that it shares a similar genetic organisation to other astroviruses, being composed of only three open reading frames (ORFs). The first two, ORF 1a and ORF 1b, code for non-structural proteins including a protease (ORF 1a) and an RNA-dependent RNA polymerase (ORF 1b). In keeping with other astroviruses, CAstV contains a conserved heptameric frameshift motif at the 3′ end of ORF 1a [18] that is involved in the translation of ORF 1b in a different frame to ORF 1a, although possibly through a different mechanism to other astroviruses [1]. The third ORF, ORF 2, codes for the capsid protein, the most variable region of the genome, especially in the 3′ half of the ORF, which codes for the outer surface of the capsid including the star-like capsid spikes that interact with the host immune system, hence variability is advantageous to the virus. It also contains the start of the conserved s2m motif [19], which continues into the 3′ untranslated region (UTR). A short 5′ UTR exists upstream of ORF 1a and a longer 3′ UTR follows ORF 2. A polyadenylated tail at the extreme 3′ end completes the positive-sense, single-stranded RNA genome.
Infection, Transmission and Strain Diversity of Chicken Astrovirus
CAstV is an enteric pathogen and infections often occur very early, transmitted horizontally by the fecal-oral route, although some CAstV strains can also be vertically transmitted from naive in-lay parent birds, and chicks may hatch shedding high levels of CAstV. Being non-enveloped, CAstV is more resistant to disinfection and cleaning than many other viruses and may be more persistent in poultry houses, where darkling beetles can act as vectors for CAstV [20]. For instance, CAstV was detected in internal tissues and in washings from the surface of darkling beetles by RT-PCR [21]. A recent investigation examined CAstV carryover contamination between broiler crops in commercial broiler houses after the removal of spent litter, both before and after cleaning and disinfection using proprietary disinfectants at recommended concentrations [21]. Preliminary results showed that reductions of ~1-2 log in CAstV levels were typically achieved, as measured by quantitative RT-PCR at ten locations in UK broiler houses including feeders, floors, sills and walls, where CAstV levels became extremely low. However, even when fumigation was part of the cleaning regime, newly placed chicks quickly became horizontally infected, shedding moderate levels of CAstV by day 4. Although the majority of these chicks were not shedding CAstV at day 0, it is possible that horizontal CAstV infection occurred prior to placement in the broiler houses.
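The ~1-2 log reductions reported above are simply a log10 ratio of genome-copy quantities before and after cleaning. As an illustration only (the function name and copy numbers below are hypothetical, not taken from the study), a minimal Python sketch of the arithmetic:

```python
import math

def log10_reduction(copies_before: float, copies_after: float) -> float:
    """Log10 reduction between two viral genome-copy quantities,
    e.g. qRT-PCR estimates taken before and after cleaning."""
    return math.log10(copies_before / copies_after)

# Hypothetical example: 5e6 genome copies before cleaning and 5e4 after
# corresponds to a 2-log reduction.
print(round(log10_reduction(5e6, 5e4), 1))  # → 2.0
```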
CAstV infections usually occur within the first days or week of life, and the earlier they are contracted, especially vertical infections, the worse the outcome may be, although this will depend on the particular CAstV strain since, as is typical of viruses with RNA genomes, strains vary widely in pathogenicity. The viral load (dose) at the time of infection and the presence of maternal antibodies against CAstV will also affect the development of disease. Other important factors include the presence of other enteric pathogens such as ANV, which is frequently detected in co-infections with CAstV, as well as avian orthoreoviruses and fowl adenoviruses, to name some of the more ubiquitous enteric viruses often found in co-infections with CAstV. In addition, a flock may be infected with more than one strain of CAstV concurrently.
An investigation into CAstV strain diversity among historical and circulating field strains was reported in 2012, comparing sequences of ORF 2 (the capsid gene), as this is where the hypervariable regions associated with antigenicity are located [22]. Prior to this study two distinct serogroups of CAstV had been identified [17,23], supported by only a minor degree of cross-reactivity with heterologous antisera. Antibody against them is reported to be widespread [24], and the existence of these two serogroups was further supported in the genotyping study by the clustering of strains into CAstV groups A and B, the two groups sharing only 38%-40% amino acid identity across ORF 2. The group A CAstVs comprised three subgroups, with inter-subgroup homologies from 77% to 82%. The group B CAstVs comprised two subgroups, B i and B ii, which shared inter-subgroup identities of 84%-85% [22]. Subsequent to this study, new CAstV ORF 2 amino acid sequences associated with specific broiler chick diseases, namely kidney disease with visceral gout and White Chicks, have become available, and a selection has been incorporated into the CAstV ORF 2 amino acid phylogenetic tree (Figure 1) to help elucidate the association of specific strains with disease.
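The groupings above rest on pairwise amino acid identity across ORF 2. The following is a minimal sketch of how such a percentage can be computed from pre-aligned sequences; the function and the toy sequences are illustrative assumptions, not the published genotyping pipeline, which used full alignments and phylogenetic methods:

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Pairwise percent identity over two pre-aligned, equal-length
    amino acid sequences (gap characters are compared like any other)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy example: 4 of 5 aligned residues match -> 80% identity.
print(percent_identity("MKTAY", "MKSAY"))  # → 80.0
```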
Runting Stunting Syndrome and Uneven Flock Performance
Historically, CAstV has been associated with malabsorption diseases of broiler chickens such as runting stunting syndrome (RSS) and with enteritis and growth problems in flocks [25,26]. However, CAstV is one of a number of endemic, enteric viruses that have been implicated in RSS but as yet a single etiological agent has not been identified. A runted chick hatches small while a stunted bird exhibits a failure to grow, and often appears to have delayed development, where its overall appearance appears to be that of a much younger chick with down and immature feathering, yellow colouration and small comb and beak. RSS is a production disease that was originally characterised by poor weight gain in young broiler flocks, frequently observed between six and twelve days post hatch but can be evident up to three weeks [27]. This coincides with the occurrence of intestinal cysts that reduce nutrient absorption along with reduced villus size or altered villus shape. Other common symptoms include enteritis and diarrhea, leg weakness and irregular feathering [28]. Chicks may huddle for warmth and culling can be extensive due to severe growth check of >50% causing a major economic challenge.
Uneven flock performance occurs when the variance in weights at slaughter is larger than expected, potentially causing carcass processing problems, and is a more common, chronic condition than RSS. Many of the same viruses, including CAstV, are present in underperforming flocks but they are often also present in the good performing flocks so the differences in factors that tip the balance of performance are likely to be subtle yet complex and probably involve co-infection with other pathogens especially other viruses, pathogen strain variation, infection timing, virus load and the presence or absence of maternal antibodies. It is also possible that early CAstV and other enteric viral infections may create an abnormal gut environment that facilitates later dysbacteriosis, an imbalance of naturally colonising bacteria, usually occurring between days 20 and 30 post hatch and which could further impair performance due to diminished nutrient digestibility and weakened intestinal barrier protection [29]. As CAstV and other enteric viruses are so common and widespread in commercial broiler flocks of all classes of performance, they may be considered as part of the normal microflora or gut virome forming a background "noise" that is present from hatch to slaughter in a similar way to the many species of bacteria that colonise poultry intestines. However, it is likely to be a dynamic virome with spatial and temporal changes of rapidly evolving RNA viruses and so infections by new strains or more pathogenic strains of viruses may tip the normal balance of the virome thereby impairing performance or in acute cases causing RSS.
CAstV is one of the earliest viruses to infect chicks often in the embryo, when immunity is least developed, and in recent quantitative molecular surveys broiler chicks that have just hatched were found to be shedding very high levels of CAstV, often substantially higher than CAstV infection peaks from chicks that became infected horizontally soon after hatch [21] suggesting that age confers resistance to infection. There are many strains of CAstV in circulation for which pathogenicity is unknown and it is unknown whether all CAstV strains are able to infect vertically. Currently CAstV pathogenicity is determined empirically through challenge experiments but vertical transmission has not been examined by these means. Challenge experiments of day old commercial broiler or SPF chicks with isolated CAstV strains have resulted in varying degrees of growth suppression [17] that is typical of the wide-ranging pathogenicities of viruses with RNA genomes. Furthermore, inoculations with CAstV isolates have not caused the full growth restriction typically observed in cases of RSS when young broiler birds may be <50% of their expected weight at 2-3 weeks old suggesting that there may also be other agents or other factors involved. Pathogenicity studies in specific pathogen free (SPF) chicks of two of the 25 genotyped strains, CAstV 612, which typifies group A, and CAstV FP3, representing group B, detected both viruses in the duodenum, jejunum and ileum, as well as the colorectum [30]. Both viruses were also detected in the liver, kidney and spleen. While the effects of CAstV 612 were relatively mild, CAstV FP3 resulted in intestinal lesions at day 1 post infection (p.i.) that altered the villus to crypt ratio observed at days 3 and 6 p.i. and a prolonged infection of the kidneys from day 1 to day 8 with lesions reportedly more severe than those seen in birds inoculated with ANV-1 (ANV serogroup 1 strain G4260) in the same study [30].
Co-infections of CAstV with other enteric viruses have been observed, most noticeably rotavirus [25,31] and ANV [32] and a multiplex RT-PCR test was designed to detect and distinguish CAstV, ANV and rotavirus from samples simultaneously [33]. More recently the use of viral metagenomics has demonstrated the presence of a wide range of enteric virus families in normal broilers [34] and in growth problem flocks of which Astroviridae was one of the more abundant families [35,36]. In a small set of five 3-week old broilers with RSS and two normal broilers of the same age CAstV was only detected in the RSS-affected birds [36]. This contrasts with findings from the Day study [35] where CAstV was detected in SPF control birds. These types of studies are preliminary and few in number but metagenomics is likely to be a powerful diagnostic tool for investigating RSS in the future although it will be important to sample broilers as soon as the growth problems become apparent in order to fully appreciate the role of very early viruses such as CAstV.
Kidney Disease and Visceral Gout
While CAstV is predominantly an enteric virus contracted through the fecal-oral route it is also known to infect organs outside of the enteric tract including the liver and kidneys. Severe kidney disease of young broiler chicks with outbreaks of visceral gout and up to 40% mortality were reported in India in 2012 with the causative agent being identified as a group B CAstV [37]. This particular strain of CAstV was isolated in embryonated SPF chicken eggs using homogenates from 18 CAstV positive kidney samples resulting in significant embryo stunting, liver necrosis and pale, swollen kidneys. Isolates made from clinical kidney homogenates passed through either SPF chicks or SPF eggs and inoculated into day old SPF chicks and day old broiler chicks resulted in extremely high levels of mortality (67.5%-100%) between days 5 and 10 p.i. for the SPF chicks and days 7 and 10 p.i. for the broilers. Post mortem findings showed that the chicks all had diseased kidneys and visceral gout. Molecular testing found the kidneys positive for CAstV and negative for ANV and infectious bronchitis virus [37].
Phylogenetic analysis of the ORF 2 amino acid sequences from these 18 isolates indicated that there was a high degree of similarity between these strains (92%-99.2%) and that they clustered together and in a separate B subgroup from other CAstV ORF 2 amino acid sequences (Figure 1) [37]. Similarly, AFBI's Stormont laboratory isolated three highly similar CAstV strains from broiler kidney diagnostic samples from the Middle East as part of a diagnostic investigation into high mortality problems associated with kidney disease and visceral gout in 2010 and 2012, that, when inoculated into day old SPF chicks, also caused mortality due to kidney disease and visceral gout in the first week post infection (diagnostic results). Sequencing and phylogenetic analysis of these strains placed them in the same B subgroup as the Indian strains (B iii, Figure 1). The three Middle East CAstV strains share >99% amino acid homology with each other and 96.5% to 98.8% with regional representative strains from India. Given the wide range of circulating CAstV strains detected previously in Europe and the USA [22] the detection of CAstV strains in India and the Middle East with such a high degree of capsid amino acid conservation in these cases of severe kidney disease supports the hypothesis that this particular strain of CAstV is the etiological agent. Ongoing diagnostic surveillance in 2016 indicates highly similar strains are still circulating in broiler flocks in the Middle East.
White Chicks Hatchery Disease
Recently CAstV has become associated with hatchery diseases, most noticeably "White Chicks", reports of which have come from various Scandinavian countries, North America, Poland and Brazil [38][39][40], but also with the "clubbed down" problem, although the latter association is less clear and still to be fully determined. White chicks that hatch have pale plumage, are weak and runted and do not tend to survive very long. The symptoms and lesions observed in white chicks share characteristics with those of RSS including lesions in the kidneys and liver, runting/poor development and weakness, and also abnormal feathering. An increase in mid to late embryo deaths was noted and there is a transient but substantial reduction in hatchability, which in Finland averaged 29% in affected flocks but which reached as high as 68% on one farm, with many dead in shell embryos in which CAstV was detected [38]. In Poland, a 4%-5% hatchability decrease was observed for a single breeder flock over a 4-week period when a maximum of 1% of chicks were pale and weak [40]. These observations are indicative of vertical virus transmission, and since it was reported that affected Finnish breeder flocks only experienced the disease once during their lifetime, it seems probable that acquired immunity prevents disease recurrence and further vertical transmission. It was discovered through CAstV quantitative diagnostic testing that many chicks were shedding high loads of CAstV at hatch [38], which was a very different situation to that observed when the same assay was first applied to commercial flocks in 2010 [32]; then all chicks were negative for CAstV at day 0 in a longitudinal survey of commercial broiler flocks.
Three CAstV isolates were purified from samples from Finland, Norway and Canada, each resulting in embryo death and runting when inoculated into SPF chick embryos [38]. Likewise the Polish PL/G059/2014 isolate caused high mortality, runting and poor hatchability when inoculated into SPF embryonated eggs [40]. When the amino acid sequences of the ORF 2 regions were compared, the Scandinavian and Canadian isolates were highly similar, sharing 95%-98% identity, a remarkably high level of conservation in the most variable CAstV ORF given the wide range of CAstV strains in circulation, which can vary by more than 50% in this ORF. These strains clustered together, giving rise to a new CAstV B subgroup (B iv, Figure 1), strongly suggesting that strains with highly similar capsid gene sequences can cause hatchery disease. They were also quite closely related to the CAstV strains responsible for severe kidney disease in Asia constituting subgroup B iii, with shared amino acid identities ranging from 86.5% to 89.8%. By contrast, the ORF 2 amino acid sequence of the Polish strain, PL/G059/2014, places it very distant in subgroup A iii (Figure 1, marked with an asterisk). Similar symptoms were apparent from the Polish case but the clinical outcomes were less pronounced: there was no perceived egg drop; the hatchability reduction was much less severe and there were fewer white chicks observed than in the Finnish cases. Perhaps the differences in White Chicks disease severity are associated with genomic differences, although a more in-depth analysis of further cases would be necessary to establish the link between specific strain variation and disease severity.
Immunity, Treatments and Future Developments
Currently there are no medicines to treat RSS, CAstV-associated kidney disease or White Chicks disease, nor are there any vaccines to prevent transmission of CAstV to broiler chicks. Hygiene and biosecurity are the only ways in which CAstV infection risk can be minimised. It would be highly advantageous if breeder hens could supply adequate CAstV maternal antibodies to the eggs, since this would prevent vertical transmission of CAstV strains and give early protection against horizontal transmission. Although it has yet to be definitively determined, the age of the chick when first infected appears to have a bearing on all of these conditions, so early protection is encouraged. Ensuring that breeder hens become CAstV seropositive, either naturally during rear or through the use of a CAstV breeder vaccine, is advocated in order to protect embryos and hatched chicks against the range of CAstV strains in circulation and to prevent vertically infected hatched chicks from shedding CAstV to infect naive broiler chick housemates. While the involvement of CAstV as a key agent in cases of RSS remains to be fully elucidated, it is clear from recent evidence that certain strains of CAstV are associated with White Chicks and others with severe kidney disease and visceral gout. The development of a commercial vaccine that can protect against the strains causing these two diseases, which are not far apart genetically or serologically, is to be hoped for.
The use of wild-type strains of CAstV as breeder vaccine candidates that can be conveniently grown in eggs or cell culture has the advantage of cellular replication to higher titres, but may be limited in effectiveness by serological differences between strains, so a greater understanding of the relationship between circulating strain diversity and disease severity is desirable, particularly in the case of RSS. There is also the concern that an attenuated live vaccine could evolve into a more pathogenic form, although this is unlikely to unduly affect older birds. Alternative vaccine strategies may involve the use of recombinant protein technology to develop non-replicative CAstV capsid precursor proteins. Recombinant CAstV capsids have been produced by two groups using the baculovirus system, corresponding to subgroup B ii [41] and subgroup B i [42]. The recombinant CAstV B ii vaccine gave partial protection against experimental RSS challenge, whereby weight restriction was significantly less pronounced in vaccinated broiler chicks [41]. Lee et al. demonstrated that the B i recombinant CAstV capsid precursor proteins consistently stimulated virus-specific antibodies in SPF chickens at 3 and 4 weeks p.i. after 2 immunizations [42]. Given that wild-type infections of breeder birds with the White Chicks strain of CAstV appear to confer lifetime immunity, it is hoped that an effective CAstV vaccine would have the same duration of effect, but this can only be determined empirically.
One of the limitations of working with CAstV has been a lack of convenient diagnostic tools requiring researchers to develop their own in-house tools. The baculovirus expressed recombinant CAstV capsid precursor proteins developed as vaccine candidates have both been used successfully in ELISA (enzyme linked immunosorbent assay) tests to quantify CAstV seroconversion during in vivo CAstV experiments [41,42]. The B i recombinant capsid protein has since been used as the basis of a CAstV B group ELISA test [43] that is now commercially available and suitable for screening chicken sera for the presence of CAstV B group antibodies, including those from the other B subgroups. This ELISA is useful for screening breeder flocks for seroconversion against CAstV B group strains prior to, or during lay, and can be used to pinpoint CAstV B group seroconversion by longitudinal serological surveys in cases of possible vertical transmission, e.g., White Chicks. It will not detect antibodies to CAstV A group strains as there is no serological cross reactivity of the A group and B group antibodies in the capsid precursor protein region. If further evidence appears that substantiates the involvement of CAstV A group strains in cases of White Chicks, then a similar CAstV A group ELISA would prove beneficial.
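ELISA screening of sera, as described above, typically converts raw optical densities to a sample-to-positive (S/P) ratio against plate controls and applies a kit-defined cutoff. The sketch below is a generic illustration only; the cutoff value, control ODs and function names are assumptions, not taken from the commercial CAstV B group test:

```python
def sp_ratio(sample_od: float, negative_od: float, positive_od: float) -> float:
    """Sample-to-positive ratio: sample OD scaled between the plate's
    negative- and positive-control optical densities."""
    return (sample_od - negative_od) / (positive_od - negative_od)

def classify(sample_od: float, negative_od: float, positive_od: float,
             cutoff: float = 0.3) -> str:
    """Call a serum seropositive when its S/P ratio meets an assumed cutoff."""
    sp = sp_ratio(sample_od, negative_od, positive_od)
    return "seropositive" if sp >= cutoff else "seronegative"

# Hypothetical plate controls: negative OD 0.1, positive OD 1.5.
print(classify(0.9, 0.1, 1.5))   # S/P ≈ 0.57 → "seropositive"
print(classify(0.12, 0.1, 1.5))  # S/P ≈ 0.01 → "seronegative"
```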
Pharmaceutical Properties and Applications of a Natural Polymer from Grewia mollis
The use of naturally occurring biocompatible materials has been the focus of recent research activity in the design of dosage forms for immediate and controlled release formulations. Grewia gum is an intracellular gum obtained by extraction from the inner stem bark of the shrub Grewia mollis (Malvaceae). It grows abundantly (wild or cultivated) in the middle belt region of Nigeria, and the mucilage has been used by indigenes of this belt as a thickener in soups. Grewia gum has been investigated for potential applications in pharmaceutical dosage forms. The industrial extrapolation of the applications of the gum has, however, been slowed by the limited structural, toxicological, and stability data available on the gum. This paper highlights ethnobotanical uses of the G. mollis shrub and discusses the structural features, functional properties, and applications of grewia gum with emphasis on its pharmaceutical potential.
Introduction
Plant materials are playing an increasing role as alternatives to chemical food additives and synthetic pharmaceutical excipients. Natural polysaccharide gums swell to form highly viscous solutions or dispersions in aqueous media. They have the advantages of biocompatibility, low cost, and relative abundance [1] compared to their synthetic counterparts. They are widely used in the pharmaceutical industry as polymers in various drug delivery systems [2][3][4][5][6][7].
Grewia polysaccharide gum is a natural resource that could be used as an excipient in the pharmaceutical industry in Nigeria to reduce the costs of pharmaceutical products. It may provide a suitable alternative to synthetic counterparts, which are expensive and mostly imported. It is obtained by extraction (maceration in cold or hot water) of the inner stem bark of the edible plant G. mollis, Juss (Malvaceae). In Nigeria, G. mollis, from which the gum is extracted, grows abundantly (wild or cultivated) in the middle belt region of the country, where it is used as a thickener in local delicacies.
This review delineates the ethnobotanical uses, physicochemical properties, structural properties, and the potential applications of the gum in drug delivery systems.
Grewia mollis Plant
G. mollis belongs to the flowering plant genus Grewia. It was formerly placed in the family Tiliaceae or Sparrmanniaceae. Today most authors place the genus in the mallow family Malvaceae.
2.1. Botany. The plant has been described [8,9] as a shrub or small tree growing to attain a height of 10.5 m, with young branches densely stellate-pubescent. The young branches turn dark grey to black or reddish brown when older.
The leaves of the plant are elliptic to elliptic-oblong, usually between 2.0 and 14.5 cm long and 0.7 and 5.5 cm wide. They are acute to slightly acuminate at the apex, broadly rounded or obliquely truncate at the base. The leaf margins are coarsely and sharply serrate, more or less glabrous to sparsely minutely stellate-pubescent above, but densely and finely greyish to brownish white-pubescent beneath [9].
The flowers are yellow and bisexual; sepals 6-10 mm long; petals obovate to oblong, 4-6 mm long, ±2 mm wide, and sometimes notched at the apex. The ovary is 1.5-2 mm long and densely hairy [9]. The fruit has been described as an unlobed, globose drupe about 4-7 mm long and 5-7 mm wide, covered with a fine whitish tomentellum. The fruits are green when younger, turning yellow when older [8,9]. A digital image of the aerial part of the plant is shown in Figure 1.
2.2. Propagation. G. mollis grows in the wild and can be propagated by seed or seedlings. The seeds are collected from the dried fruits that may have fallen on the ground [9].
Uses of the Aerial Parts of G. mollis
The different parts of G. mollis have been used for different purposes across the regions and places where the plant grows or is cultivated.
Mucilage from Leaves and Stem Bark.
In the Democratic Republic of Congo, the bark is kneaded with water into a viscous substance that is added to sauces while, in Gabon, the inner bark is sometimes eaten as food.In Nigeria, the inner stem bark is used as thickener in soups and in local cakes made from beans or corn flour commonly called "Kosai" and "Punkasau" in Hausa (Nigeria), respectively [11,12].
The infusion of the bark obtained by cold or hot maceration in water is applied to give a smooth surface to mud walls and floors [13] while the wood ash is used as salt substitute including the ash of the leaves, stems, and roots [9].
Vegetables and Fruits.
The flowers, buds, and young shoots are added to soups and sauces as garnishing, while in Sudan the young leaves are cooked and eaten as vegetables. The fruit is eaten raw or boiled [9].
Ethnomedicinal Uses.
Several ethnomedicinal uses of the infusion, decoction, maceration, or mucilage from the leaves, roots, or stem bark of G. mollis have been documented. These ethnomedicinal uses are described in Table 1.
Biological Evaluation and Chemical Constituents of G. mollis
Several scientists have investigated the histopathological and toxicological effects of extracts of G. mollis in laboratory animals. Onwuliri et al. [14] conducted toxicological and histopathological studies in rats using the ethanolic extract of the stem bark of G. mollis. Tannins, saponins, flavonoids, glycosides, balsam, phenols, terpenes, and steroids were isolated from the stem bark, but alkaloids were absent. The extract exhibited toxic properties at a lethal dose (LD50) of 1500 mg/kg body weight. Structural changes were observed in the convoluted tubules of the kidney, but no structural effects on the liver and heart were seen, suggesting that the extract may be safe in humans but should be used with caution in patients with renal failure.
Studies by Obidah et al. [12] on the toxic effects of the stem bark of G. mollis showed that adding the pulverized stem bark to the normal diet of male Wistar rats at concentrations of 0, 1, 5, and 10% as feed for 4 weeks resulted in no deaths, and no remarkable changes in appearance were observed in the treated animals. However, rats fed the 10% dietary level showed significant (p < 0.05) increases in serum transaminase activities, accompanied by decreased food intake. There was no observed effect on serum alkaline phosphatase activity, urea, creatinine, triglycerides, cholesterol, glucose concentrations, or body and organ weights in their study. These workers concluded that dietary exposure of rats to G. mollis stem bark powder at high concentrations (10%) may cause some adverse effects, especially liver injury. It may be argued, however, that the gum extracted from the stem bark of G. mollis, when used as an excipient, is incorporated into formulations in proportions usually much lower than the 10% added to the animal feed in the foregoing study. Consequently, the low use level of G. mollis gum as an excipient in formulations may be expected not to cause adverse effects in humans.
Asuku et al. [15] evaluated the methanolic extract of G. mollis leaves for its antioxidant and hepatoprotective properties using an in vivo procedure. They reported significant (p < 0.05) hepatoprotective potential, evidenced by lowered serum levels of bilirubin, aspartate aminotransferase, and alanine aminotransferase and decreased malondialdehyde levels in rats pretreated or posttreated with carbon tetrachloride (CCl4). They concluded that G. mollis leaves contain potent antioxidant compounds that could offer protection against hepatotoxicity and ameliorate preexisting liver damage and oxidative stress conditions. Saleem et al. [16] assessed the nutritive value of the leaves and fruits of three Grewia species under a semiarid environment, and the results of the study indicated that the three species could be introduced as a source of fodder in animal production farms and silvopastoral systems. Rosler et al. [18] earlier reported the isolation of 6-methoxyharmane (Figure 2(g)), a harman alkaloid, from G. mollis. Harman alkaloids belong to the class of β-carbolines. They bind strongly to benzodiazepine receptors in the brain to induce effects inverse to those of benzodiazepines, such as convulsive, anxiogenic, and memory-enhancing effects [19][20][21].
Phytochemical Screening, Antibacterial and Anti-Inflammatory Properties

Shagal et al. [23] reported the presence of saponins and phenolic compounds in the ethanol fraction of G. mollis leaves, while glycosides were additionally present in the ethanolic stem bark extracts. Tannins, volatile oils, and flavonoids were also observed in the ethanolic root extracts.
Antibacterial studies on the ethanolic leaf and root extracts reported inhibitory activity against Staphylococcus aureus and Escherichia coli, while the stem bark extract exhibited activity against S. aureus and Salmonella typhi. The aqueous root extracts showed no inhibition against the same organisms, although the aqueous leaf extract showed some inhibitory activity against Streptococcus spp. and E. coli. The aqueous extract of the stem bark also exhibited inhibition against S. typhi, which possibly justifies the use of the plant in the treatment of diarrhoea and dysentery.
The development of a new class of polyurethane nanofibers containing an aqueous extract of the aerial parts of G. mollis by one-step electrospinning was reported by Musarat et al. [25] for potential biomedical application as an antimicrobial fiber. They incorporated the aqueous extract into the polymer medium in order to influence the morphology and size of the polyurethane nanofibers. On further testing, the extract-loaded nanofibers were also observed to inhibit the growth of E. coli (ATCC 52922) and S. aureus (ATCC 29231).
Al-Youssef et al. [17] also investigated the anti-inflammatory activity of the same extract at 500 mg/kg, which showed a pronounced anti-inflammatory effect after two hours. Their findings provide preliminary evidence that fractions of G. mollis exhibit inhibitory effects on the inflammatory process and support the use of this plant for the treatment of arthritis, dermatitis, and wounds in African and Saudi Arabian traditional medicine.
Grewia Polysaccharide Gum
Interest in grewia polysaccharide gum as a pharmaceutical excipient has been growing for over a decade [26]. This interest is informed by the viscous property of the gum extracted from the inner stem bark of the G. mollis plant. Consequently, the gum has been extracted and some of its physicochemical properties evaluated.
Extraction and Purification.
Several approaches to the extraction of the gum from the inner stem bark of the shrub have been reported [27][28][29][30]. Briefly, the dried and pulverized inner stem bark of the G. mollis shrub is dispersed in demineralized water using an impeller. The fibrous material is removed from the dispersed mucilage by straining through a muslin cloth. Thereafter the mucilage is centrifuged before extraction of the gum with 96% ethanol. The extracted gum is redispersed in water and re-extracted to give a beige-coloured gum, which is then dried in an oven at 50 °C for 8 h.
The gum can be further purified by treatment with 0.1 M sodium hydroxide or hydrochloric acid, or with sodium chloride, followed by extraction with 96% ethanol. It must be noted that treatment of the natural material with dilute alkali, dilute acid, or electrolytes could modify the parent material, with consequent variations in the physicochemical properties of the resultant gum.
Some Physicochemical Properties of Grewia Gum.
The gum contains traces of metals (Ca, K, Na, Mg, Zn, and Fe) and has a viscosity-average molecular weight of 316,000 [28]. Metals constitute about 6.1% of the gum, while proteins and lipids account for 1.24 and 0.021%, respectively. It has a high intrinsic viscosity, with a swelling capacity greater than those of tragacanth and methylcellulose.
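A viscosity-average molecular weight such as the one quoted above is typically obtained from the intrinsic viscosity via the Mark-Houwink relation [η] = K·Mv^a. The sketch below illustrates the back-calculation of Mv; the values of K, a, and [η] are purely illustrative placeholders, not measured constants for grewia gum.

```python
# Mark-Houwink relation: [eta] = K * Mv**a, hence Mv = ([eta] / K) ** (1 / a).
# K and a below are hypothetical illustration values, not data from the source.

def viscosity_average_mw(intrinsic_viscosity_ml_g, K, a):
    """Return the viscosity-average molecular weight Mv."""
    return (intrinsic_viscosity_ml_g / K) ** (1.0 / a)

# Example with assumed constants K = 0.02 mL/g and a = 0.72:
mv = viscosity_average_mw(intrinsic_viscosity_ml_g=180.0, K=0.02, a=0.72)
```

In practice K and a must be calibrated for the specific polymer-solvent-temperature system before Mv estimates are meaningful.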
Grewia polysaccharide gum hydrates and swells slowly in water. The gum has an aqueous solubility of about 0.2 mg/mL [29]. The low solubility of the gum was attributed to insoluble cell-wall materials making up a large proportion of the gum. The presence of acetyl groups has also been reported to account for the insolubility of certain gums in water [31]. Fourier-transform infrared (FTIR) spectroscopic studies have shown that grewia polysaccharide gum contains acetyl groups, which may account for its low solubility in water [29]. Generally, the aqueous solubility of gums rendered poorly soluble by acetyl groups can be improved by deacetylation with dilute ammonia [31][32][33].
The rheological properties of the gum dispersion and the water vapour permeability of aqueous-based grewia gum films have been reported [27,34]. Aqueous dispersions of the gum at different concentrations exhibited pseudoplastic flow behaviour, and the viscosities of the dispersions decreased with increasing temperature. The effect of incorporating electrolytes into dispersions of the gum was also investigated. The results showed a decrease in the viscosity of the gum brought about by electrolytes, proportional to the concentration as well as the valence of the cation; these findings have been confirmed by other workers [29].
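Pseudoplastic (shear-thinning) flow of the kind described above is commonly summarized with the power-law (Ostwald-de Waele) model, η = K·γ̇^(n−1) with n < 1. A minimal sketch with hypothetical parameters, not fitted values for grewia gum dispersions:

```python
def apparent_viscosity(shear_rate, K, n):
    """Power-law (Ostwald-de Waele) model: eta = K * gamma_dot**(n - 1).
    n < 1 gives shear-thinning (pseudoplastic) behaviour."""
    return K * shear_rate ** (n - 1)

# Hypothetical consistency index K and flow index n for illustration only:
etas = [apparent_viscosity(g, K=5.0, n=0.4) for g in (1.0, 10.0, 100.0)]
```

With n = 0.4 the apparent viscosity falls steadily as the shear rate increases, which is the signature of the pseudoplastic behaviour reported for the gum dispersions.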
The FTIR spectrum of the gum exhibited the bands and peaks typical of polysaccharides [29], as shown in Figure 4. The authors attributed the broad band at about 3425 cm−1 to the presence of hydroxyl (-OH) groups, while the peak at about 2927 cm−1 was attributed to stretching modes of the C-H bonds of the methyl groups (-CH3) of rhamnose. Absorption bands around 1618 and 1420 cm−1 were attributed to carboxylate groups of the uronic residues of galacturonic acid, while absorption peaks at 1735 cm−1 (within the 1500-1800 cm−1 region) and 1256 cm−1 were linked to the presence of carboxylic and acetyl groups, respectively. The absorption band between 500 and 900 cm−1 may represent the fingerprint region for grewia polysaccharide gum. The gum has also been characterized using other techniques such as scanning electron microscopy (SEM), gel permeation chromatography (GPC), differential scanning calorimetry (DSC), and thermogravimetric analysis. In addition, spectroscopic techniques such as X-ray photoelectron spectroscopy (XPS), solid-state nuclear magnetic resonance (NMR), and 1H and 13C NMR [29] have been employed in the characterization of grewia gum. Based on findings from these various techniques, the gum has been described as a typically amorphous polysaccharide gum of high thermal stability with an average molecular weight of 5925 kDa expressed as the pullulan equivalent.
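The band assignments reported above can be collected into a small lookup helper. The wavenumber windows below are approximate ranges drawn around the peak positions quoted in the text, chosen for illustration:

```python
# FTIR band assignments as reported for grewia gum (wavenumbers in cm^-1).
# The window widths are illustrative, centred on the quoted peak positions.
FTIR_ASSIGNMENTS = [
    ((3300, 3500), "O-H stretch (hydroxyl groups)"),
    ((2900, 2950), "C-H stretch of -CH3 (rhamnose)"),
    ((1700, 1760), "C=O of carboxylic groups"),
    ((1600, 1640), "carboxylate of uronic residues"),
    ((1400, 1440), "carboxylate of uronic residues"),
    ((1240, 1270), "acetyl groups"),
    ((500, 900), "fingerprint region"),
]

def assign_band(wavenumber):
    """Return the first matching assignment for a peak position, or None."""
    for (lo, hi), label in FTIR_ASSIGNMENTS:
        if lo <= wavenumber <= hi:
            return label
    return None
```

For example, the quoted 3425 cm−1 band maps to the hydroxyl stretch and 1256 cm−1 to the acetyl groups, while a peak outside all windows returns None.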
The effect of drying method (air-drying, freeze-drying, and spray-drying) on the physicochemical properties of the gum has been investigated using the techniques outlined above [35]. Solid-state NMR results indicated that the drying technique had little effect on the structure of the polysaccharide gum, but X-ray photoelectron spectroscopy (XPS) showed that the surface chemistry of the gum varied with drying method. Thermogravimetric analyses showed that the oxidation onset also varied with drying method. The workers concluded that, for industrial scale-up, air-drying of the gum may be preferable to spray-drying and freeze-drying when relative cost, product stability, and powder flow are the priority, as in tablet formulation for instance.
Pharmaceutical Values of Grewia Polysaccharide Gum
6.1. Preformulation Studies. The assessment of possible interactions between a drug and the different excipients used in a formulation usually precedes large-scale development trials of solid dosage forms [36,37]. Physical and chemical interactions between excipients and active pharmaceutical ingredients (APIs) are commonplace [38] and necessitate the screening of novel excipients for possible incompatibilities.
In view of the suitability of grewia polysaccharide gum as a formulation excipient in solid or liquid dosage forms, studies of the potential interaction or incompatibility of the gum with pharmaceutical actives and/or excipients used in solid dosage formulation are imperative.
Nep and Conway [39] investigated potential interactions between grewia gum and cimetidine, ibuprofen, and some standard excipients used in the formulation of tablets (Figure 5). The thermal and molecular behaviours of mixtures of grewia gum with cimetidine, ibuprofen, and the standard excipients (lactose monohydrate, magnesium stearate, colloidal silicon dioxide, and microcrystalline cellulose) were analyzed using differential scanning calorimetry and FT-IR spectroscopy to assess potential interactions. The results indicated that grewia gum is an inert natural polymer which can be used alone or in combination with other excipients in the formulation of pharmaceutical dosage forms.
Pharmaceutical Applications.
Investigations into the pharmaceutical application of gum from G. mollis were first reported in the early 2000s [27,28,34,40,41]. The gum was reported to possess excellent binding properties in sodium salicylate tablets (Okafor and Chukwu [40]). It was found to be as effective as gelatin when employed as a binder at concentrations of 2-6% w/w. At the same concentrations, the gum was more effective as a binder than maize starch or acacia gum.
Audu-Peter and Gokum [42] found that the method of incorporating the gum into a tablet formulation had an effect on tablet properties. They discovered that tablets with better properties, such as hardness and friability, were produced when the gum was incorporated by activation with water rather than by wet granulation or direct compression. Acid and thermal treatment of the gum resulted in improved drug release from tablets, attributable to the reduced viscosity of the gum (Audu-Peter and Isah [43]).
Emeje et al. [28] in 2008 investigated the binding property of the gum. These workers compared the binding property of the gum with that of polyvinylpyrrolidone (PVP) in paracetamol tablet formulations and analysed the compression properties of the formulations using density measurements and application of the Heckel and Kawakita equations. They found that grewia gum compares favourably with the standard binder PVP and may be a useful substitute binder in paracetamol formulations.
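The Heckel equation mentioned here linearizes compaction data as ln(1/(1−D)) = KP + A, where D is the relative density of the compact at applied pressure P; the mean yield pressure is then estimated as 1/K. A minimal least-squares sketch (the data points below are synthetic, not values from the cited study):

```python
import math

def heckel_fit(pressures_mpa, relative_densities):
    """Least-squares fit of ln(1/(1 - D)) versus P (Heckel equation).
    Returns (slope K, intercept A); the mean yield pressure is 1/K."""
    ys = [math.log(1.0 / (1.0 - d)) for d in relative_densities]
    n = len(pressures_mpa)
    mx = sum(pressures_mpa) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in pressures_mpa)
    sxy = sum((x - mx) * (y - my) for x, y in zip(pressures_mpa, ys))
    k = sxy / sxx
    a = my - k * mx
    return k, a

# Synthetic compaction data consistent with K = 0.01 MPa^-1, A = 0.5:
k, a = heckel_fit([50.0, 100.0, 150.0, 200.0],
                  [0.6321, 0.7769, 0.8647, 0.9179])
```

The intercept A reflects densification by particle rearrangement, while the slope K (and hence 1/K) characterizes plastic deformation, which is how binder performance is typically compared in such analyses.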
Grewia gum has been investigated as a sustained-release polymer matrix for ibuprofen [44]. It was used at a concentration of 16%, 32%, or 48% to formulate ibuprofen tablets. These were compared with similar formulations prepared using hydroxypropyl methylcellulose (HPMC), guar gum, or ethyl cellulose as the polymer matrix. The results indicated that grewia gum at these concentrations was capable of sustaining the release of ibuprofen from tablets for up to 24 hours (Figure 6). Similarly, these scientists demonstrated the potential of grewia gum to sustain the release of a water-soluble drug in tablet formulations [45]. Tablets of cimetidine containing 40% grewia gum were prepared by direct compression.
Similar formulations containing HPMC, gum Arabic, carboxymethylcellulose, or ethyl cellulose were prepared for comparison. The results indicated that grewia gum was superior to hydrophilic matrices of HPMC, carboxymethylcellulose, and gum Arabic in sustaining the release of cimetidine from tablets. In both instances, the workers concluded that grewia gum may be a useful excipient when used alone or in combination with other polymers to modify the release of soluble or poorly soluble drugs from polymeric matrices.
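Sustained-release profiles such as these are often summarized with the empirical Korsmeyer-Peppas model, Mt/M∞ = k·t^n, fitted to roughly the first 60% of release; note this is a common analysis approach, not necessarily the one used in the cited studies. A sketch estimating the release exponent n by log-log regression:

```python
import math

def peppas_exponent(times_h, fraction_released):
    """Estimate the Korsmeyer-Peppas release exponent n from
    log(Mt/Minf) = log(k) + n * log(t), using points with Mt/Minf <= 0.6."""
    pts = [(math.log(t), math.log(f))
           for t, f in zip(times_h, fraction_released) if 0 < f <= 0.6]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    return sxy / sxx

# Synthetic Higuchi-like data (f = 0.2 * sqrt(t)) should give n = 0.5:
n_exp = peppas_exponent([1.0, 4.0, 9.0], [0.2, 0.4, 0.6])
```

An exponent near 0.5 indicates Fickian diffusion from the swollen matrix, while values between 0.5 and 1 indicate anomalous (diffusion plus erosion/relaxation) transport.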
Grewia gum fractions obtained by centrifugation at 4500 rpm for 30 minutes, with average molecular weights between 230 and 235 kDa, have been shown to exhibit improved aqueous solubility, useful for delivering more solids to the substrate when the gum is used as a film-coating agent [26,30].
Grewia gum has also been evaluated as a mucoadhesive in tablets [46][47][48]. Nep and Okafor [47] showed that the gum compared favourably with tragacanth when used as a bioadhesive in tablet formulation. Nep and Conway [48] compared the mucoadhesive performance of compacts or gels of grewia polysaccharide gum with those of guar gum, carboxymethylcellulose, hydroxypropyl methylcellulose, and carbopol 971P. A software-controlled penetrometer, the TA.XTPlus texture analyzer (Stable Microsystems, UK), was used to measure the detachment force of the compacts, while the polymer gels were evaluated for hardness, stickiness, work of cohesion, and work of adhesion. The results showed that grewia gels had a significantly greater work of adhesion than carboxymethylcellulose gels (p < 0.05) and HPMC gels (p < 0.001). The mucoadhesive performance of grewia compacts was comparable to that of HPMC and carbopol 971P compacts (p > 0.05). It was concluded that grewia polysaccharide gum should be suitable for the formulation of retentive drug delivery devices.
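In texture-analysis measurements of this kind, the work of adhesion is the area under the detachment force-displacement curve. A simple trapezoidal-integration sketch (the curve data are hypothetical, not measurements from the cited work):

```python
def work_of_adhesion(displacement_mm, force_n):
    """Trapezoidal integration of a detachment force-displacement curve.
    Returns work in N*mm (numerically equal to mJ)."""
    w = 0.0
    for i in range(1, len(displacement_mm)):
        dx = displacement_mm[i] - displacement_mm[i - 1]
        w += 0.5 * (force_n[i] + force_n[i - 1]) * dx
    return w

# A triangular detachment event peaking at 2 N over 2 mm gives 2 N*mm:
w = work_of_adhesion([0.0, 1.0, 2.0], [0.0, 2.0, 0.0])
```

The peak of the same curve is the detachment force, so both parameters reported in the study can be extracted from one probe withdrawal trace.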
The application of the gum as a suspending agent in paediatric formulations has been reported [49][50][51]. The suspending ability of air-dried or freeze-dried grewia gum in ibuprofen suspension was compared with that of similar suspension formulations containing xanthan gum, sodium carboxymethylcellulose, or acacia gum as suspending agents [49]. The stability of the ibuprofen suspension formulations was assessed using parameters such as appearance, pourability, viscosity, rheology, sedimentation volume, and redispersibility. Other evaluation parameters included degree of flocculation, zeta potential, and microbial load. The results indicated that air-dried or freeze-dried grewia gum may provide a suitable alternative suspending agent in paediatric suspension formulations.
Conclusion
Grewia gum is a natural resource whose potential pharmaceutical value has been established. Our literature survey revealed that, pharmaceutically, grewia gum has been well investigated for its application as a binder, disintegrant, or mucoadhesive in solid dosage forms and as a stabilizer or suspending agent in liquid formulations. However, the scientific research on grewia gum is yet to be translated into industrial applications due to the absence of detailed physical and chemical characterization, stability data, and scientifically validated bioactive properties. Although the use of grewia gum as a thickener in local delicacies spans generations, it may be necessary to reconcile the ethnomedicinal use of extracts of the aerial parts of the plant with the absolute requirement that excipients be pharmacologically inert. Detailed scientific investigations that fill in these details will not only further the scientific understanding of grewia gum but, importantly, will assist in realizing its commercial potential as a pharmaceutical excipient. Furthermore, grewia gum should be explored in the design and development of other drug carriers such as micro- and nanomaterials, membranes, capsules, and hot-melt extruded carriers, and in wound healing applications. Grewia gum holds promise as an excipient to be explored for unmet needs in drug delivery.
On the nature of amorphous polymorphism of water
We report elastic and inelastic neutron scattering experiments on different amorphous ice modifications. It is shown that an amorphous structure (HDA') indiscernible from the high-density phase (HDA) obtained by compression of crystalline ice can be formed from the very high-density phase (vHDA) as an intermediate stage of the transition of vHDA into its low-density modification (LDA'). Both HDA and HDA' exhibit comparable small-angle scattering signals, characterizing them as structures heterogeneous on a length scale of a few nanometers. The homogeneous structures are the initial and final transition stages, vHDA and LDA', respectively. Despite their apparent structural identity on a local scale, HDA and HDA' differ in their transition kinetics, explored here by in situ experiments. The activation energy of the vHDA-to-LDA' transition is at least 20 kJ/mol higher than the activation energy of the HDA-to-LDA transition.
Amorphous polymorphism is strongly linked to water, where for the first time two distinct amorphous modifications could be prepared, namely a high-density amorphous (HDA, ρ ≈ 39 mol./nm³) and a low-density amorphous (LDA, ρ ≈ 31 mol./nm³) ice state [1]. The existence of HDA and LDA and the characteristics of the transition between these two modifications, often referred to as first-order-like, have triggered major experimental and theoretical efforts that all aim for a conclusive explanation of amorphous polymorphism [2].
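The number densities quoted in molecules per nm³ convert directly to familiar mass densities. A quick check for H2O (the experiments described below use D2O, for which the molar mass ≈ 20.03 g/mol would be used instead):

```python
AVOGADRO = 6.02214076e23  # 1/mol

def mass_density_g_cm3(molecules_per_nm3, molar_mass_g_mol=18.015):
    """Convert a molecular number density (molecules/nm^3) to g/cm^3.
    Since 1 nm^3 = 1e-21 cm^3, n [1/nm^3] equals n * 1e21 [1/cm^3]."""
    return molecules_per_nm3 * 1e21 * molar_mass_g_mol / AVOGADRO

# HDA (~39/nm^3), LDA (~31/nm^3), and vHDA (~41/nm^3) for H2O:
rho_hda = mass_density_g_cm3(39)   # ~1.17 g/cm^3
rho_lda = mass_density_g_cm3(31)   # ~0.93 g/cm^3
rho_vhda = mass_density_g_cm3(41)  # ~1.23 g/cm^3
```

These values reproduce the familiar ordering LDA < HDA < vHDA of the amorphous ice densities.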
It has, in particular, been conjectured from Molecular Dynamics (MD) simulations that water in the super-cooled region exhibits a second critical point and a first-order transition line separating two liquid phases towards lower temperatures [3]. In this picture HDA and LDA are supposed to be glassy representatives of these two liquids.
The scenario of two super-cooled liquid phases has recently been questioned by the discovery of a third disordered modification, apparently distinct from HDA and LDA and called, due to its higher density (ρ ≈ 41 mol./nm³), very high-density amorphous (vHDA) ice [4].
Moreover, the latest computer simulations are in apparent agreement with this finding, suggesting the presence of "multiple liquid-liquid transitions in super-cooled water" [5].
Amorphous systems, like liquids, are isotropic and thus possess by definition no particular global symmetry. In the glassy state they are in addition non-ergodic, i.e., they are prone to relax their structures along distinct energetic pathways that may or may not be accessible during the experiment. Thus, unless there are clear indications for thermodynamic transitions, it is not possible to assign distinct phases to an amorphous system. The evolution of an amorphous system as a function of temperature, pressure, and time may, however, be characterized via the changes encountered in local structural units. These can, e.g., be studied with wide-angle diffraction. If the structural changes are significant, this approach allows for a characterisation of the amorphous states. We would like to recall that, due to the non-ergodic nature of the sample, what we call a state is not necessarily completely characterized by thermodynamic variables but may well depend on the sample history. In the case of water the nearest-neighbor coordination number has been proposed as a criterion to distinguish LDA, HDA, and vHDA (coordination numbers of four, five, and six, respectively) [6,7]. Another approach, recently used in MD simulations, bases the analysis not on the coordination number but on the ring structures encountered in the system [8]. The important question remains whether the local characterization is sufficient and whether the thus classified states actually correspond to distinct phases.
In this communication we present diffraction and inelastic neutron scattering experiments on both HDA and vHDA samples. HDA (D2O) has been formed by slow compression of crystalline ice Ih to 18 kbar at 77 K [9]. vHDA samples have been obtained by heating HDA (D2O) samples at high pressure [4]. Here, we present data collected with three vHDA samples formed at p = 10.5 kbar, T = 145 K (#1); p = 10.5 kbar, T = 155 K (#2); and p = 16 kbar, T = 155 K (#3). After the vHDA structure is formed, all samples are cooled at the indicated p to T = 77 K, retrieved from the pressure cells, crushed into mm-sized chunks, and placed into proper sample containers. Note that throughout this paper we refer to the amorphous ice modifications obtained from vHDA by heating as HDA' and LDA'. We will show that HDA and HDA', a modification of amorphous ice obtained from vHDA, are locally indiscernible. It furthermore can be shown that for any modification encountered along the transition from HDA to LDA [11] we find a matching partner along the transition from vHDA to LDA'. The structure factor maximum shows the largest width in the middle of the transition [11,12], as demonstrated directly by its broadening in the raw data (Fig. 1). The enhanced signal in the low-Q range coincides well with the behaviour of the wide-angle S(Q); it can also be visualized by computing the Fourier transform of S(Q). The one-to-one correspondence of vHDA-to-LDA' and HDA-to-LDA states does not only hold for the elastic signal but is equally observed for the molecular vibrations. In Fig. 3a we report the direct time-of-flight signal I(2Θ, tof) measured at 75 K with sample #1 at IN6 and averaged over the 2Θ range sampled by the spectrometer. Fig. 3b shows the corresponding generalized density of states G(ω). No significant difference can be detected in the spectra of HDA and HDA'.
The characteristic maximum in G(ω), which is due to low-energy optic modes in the crystalline counterparts, shifts towards lower energies as vHDA transforms into LDA' and the sample density decreases, a feature also observed across the diversity of crystalline phases [16,17]. The grey shaded area indicates the stage of the transition at which HDA' is identified.
It becomes immediately clear that at a given temperature vHDA transforms into LDA' on a longer time scale than HDA converts into LDA. The characteristic time constants τ(T) of the transition follow an Arrhenius line with an activation energy ΔE ≈ 65 kJ/mol, i.e., at least 20 kJ/mol higher than for the HDA-to-LDA transition [11]. In other words, HDA' is structurally stable in the temperature range in which HDA transforms rapidly to LDA.
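With Arrhenius kinetics, τ ∝ exp(ΔE/RT), a 20 kJ/mol difference in activation energy translates into an enormous separation of time scales. A quick estimate at an annealing temperature of 110 K; this temperature is chosen purely for illustration, not taken from the paper:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_time_ratio(delta_ea_j_mol, temperature_k):
    """Ratio of relaxation times for two Arrhenius processes whose activation
    energies differ by delta_ea at the same temperature (equal prefactors assumed)."""
    return math.exp(delta_ea_j_mol / (R * temperature_k))

# A 20 kJ/mol difference at an illustrative 110 K:
ratio = arrhenius_time_ratio(20e3, 110.0)
```

At such low temperatures the ratio exceeds a factor of a billion, which is consistent with HDA' remaining structurally stable where HDA transforms rapidly.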
By combining a logarithmic term, which accounts for aging [19], with an Avrami-Kolmogorov expression, which describes a first-order transition, it is possible to reproduce I(T, t) analytically for the HDA-to-LDA transition [11]. This is not the case when the annealing process starts from vHDA. As can be seen from Fig. 4, the transition from HDA' to LDA' becomes nearly uncontrollable at any temperature once it has set in. Such rapid transition kinetics is a priori incompatible with the higher activation energy found for τ(T) when the system is in equilibrium. Therefore, an additional energy scale must be at work.
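The fitting function described here, a logarithmic aging term combined with an Avrami-Kolmogorov transformation term, can be sketched as follows; all parameter values are illustrative, not the fitted values of reference [11]:

```python
import math

def transformed_fraction(t, tau, n):
    """Avrami-Kolmogorov transformed fraction: 1 - exp(-(t/tau)**n)."""
    return 1.0 - math.exp(-((t / tau) ** n))

def intensity_model(t, a, t0, b, tau, n):
    """Sketch of I(t) = a*ln(1 + t/t0) + b*Avrami(t): a logarithmic aging
    contribution plus a first-order-like transformation step.
    All parameters here are hypothetical illustration values."""
    return a * math.log(1.0 + t / t0) + b * transformed_fraction(t, tau, n)
```

The Avrami exponent n controls the sharpness of the sigmoidal step, while the logarithmic term captures the slow drift observed before the transition sets in.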
Given the speed of the transition, the evacuation of latent heat from the sample, and thus the temperature control, becomes a real experimental challenge.
Indeed, when following the vHDA-to-LDA' transition in high vacuum, the vHDA samples recrystallize directly. In contrast, LDA can be obtained from HDA at any pressure. This behaviour can be easily understood by comparing the activation energy of the vHDA-to-LDA' transition with the activation energy of ≈66 kJ/mol for recrystallisation of hyper-quenched glassy water [20]. This shows that the vHDA-to-LDA' transition energy is very close to the recrystallisation limit of low-density amorphous ice modifications.

parameter, but includes any study using some external force, e.g. pressure [19], to which the non-ergodic sample is forced to respond. Resemblance in terms of homogeneity still does not mean that we deal physically with the same system. Only time-dependent experiments give us information on the energy states explored by the system [19,23,24]. In fact, HDA and HDA' are unequivocally different in this respect.
The present findings in no way question the two-liquid scenario. The seemingly homogeneous structures vHDA and LDA' are good candidates for the glassy counterparts of the thermodynamic liquid states [3]. Concerning local structure, both HDA' and HDA belong to the vHDA basin of states. The transition from vHDA to LDA' involves heterogeneous intermediate stages, among which we find HDA'. It bears all the signs of a phase transition, including the abrupt, nearly singular change of local structural order between HDA' and LDA'. No other transitions or transition stages justifying the presence of more than the two homogeneous structures could be identified [5]. As far as the properties of the wide-angle S(Q) reported here and in reference [11] are concerned, they are in agreement with recently published results from molecular dynamics simulations [8,22]. A computation of the properties observed in S(Q) in the low-Q range, however, requires a larger simulation box than the one used.
A Three-Year Longitudinal Study Comparing Bone Mass, Density, and Geometry Measured by DXA, pQCT, and Bone Turnover Markers in Children with PKU Taking L-Amino Acid or Glycomacropeptide Protein Substitutes
In patients with phenylketonuria (PKU), treated by diet therapy only, evidence suggests that areal bone mineral density (BMDa) is within the normal clinical reference range but is below the population norm. Aims: To study longitudinal bone density, mass, and geometry over 36 months in children with PKU taking either amino acid (L-AA) or casein glycomacropeptide substitutes (CGMP-AA) as their main protein source. Methodology: A total of 48 subjects completed the study, 19 subjects in the L-AA group (median age 11.1, range 5–16 years) and 29 subjects in the CGMP-AA group (median age 8.3, range 5–16 years). The CGMP-AA was further divided into two groups, CGMP100 (median age 9.2, range 5–16 years) (n = 13), children taking CGMP-AA only and CGMP50 (median age 7.3, range 5–15 years) (n = 16), children taking a combination of CGMP-AA and L-AA. Dual X-ray absorptiometry (DXA) was measured at enrolment and 36 months, peripheral quantitative computer tomography (pQCT) at 36 months only, and serum blood and urine bone turnover markers (BTM) and blood bone biochemistry at enrolment, 6, 12, and 36 months. Results: No statistically significant differences were found between the three groups for DXA outcome parameters, i.e., BMDa (L2–L4 BMDa g/cm2), bone mineral apparent density (L2–L4 BMAD g/cm3) and total body less head BMDa (TBLH g/cm2). All blood biochemistry markers were within the reference ranges, and BTM showed active bone turnover with a trend for BTM to decrease with increasing age. Conclusions: Bone density was clinically normal, although the median z scores were below the population mean. BTM showed active bone turnover and blood biochemistry was within the reference ranges. There appeared to be no advantage to bone density, mass, or geometry from taking a macropeptide-based protein substitute as compared with L-AAs.
Introduction
Optimal bone mass is key to preventing the risk of fractures later in life, and many factors influence peak bone mass accretion including genetics, physical activity, body composition, and quality of diet. Severe dietary restriction may be problematic in conditions such as phenylketonuria (PKU) which require rigorous exclusion of many natural foods [1]. In children with classical PKU, the majority of protein is provided by a low phenylalanine, semisynthetic protein (protein substitute), with some limited dietary phenylalanine given from natural foods according to individual metabolic tolerance and disorder severity. Dependency on a synthetic protein may compromise both peak bone mass attainment and bone geometry [2,3].
Protein substitutes are traditionally derived from essential and non-essential amino acids and are usually supplemented with added vitamins, minerals, and trace minerals aimed at achieving optimal growth, bone mass, and body composition. Protein substitutes are necessary lifelong, but long-term adherence is difficult to sustain, particularly during adolescence [4,5], which is a vulnerable time for maximising bone mass, density, mineralization, and growth potential. Amino acids (AAs) contribute to the structural components of bone in addition to supporting growth and tissue maintenance [2,6,7].
Protein has a positive effect on bone [6,7], and protein intake promotes peripubertal bone growth and delays bone loss [8,9]. Several long-term prospective observational studies [10,11] have shown significant positive associations between protein intake and bone mineral content, periosteal circumference, cortical area, and a strength-strain index. These studies reinforce that a moderate to high protein diet promotes bone accretion. In contrast, the acid ash theory suggests that a high protein intake, including protein substitutes based on amino acids, is detrimental to bone accretion [8,12]. Protein substitutes are acidic, producing sulphuric acid from sulphur-containing amino acids. The hypothesis suggests that calcium, stored primarily in bone, is slowly excreted to buffer the acidic pH, and that this process leads to decreased bone mineral density [13][14][15][16]. However, systematic reviews and meta-analyses have dismissed this theory [17,18]. Although the urine pH is lower when taking a protein-rich diet, the pH of the extracellular fluid is undisturbed due to regulatory control by the kidneys [8].
The use of casein glycomacropeptide supplemented with amino acids (CGMP-AA) has been associated with improved bone mass in PKU animal models [19], but CGMP (a bioactive peptide) compared with AAs and their influence on bone mass, density, and geometry has not been studied in children with PKU.
In this longitudinal prospective controlled study over 36 months, we investigated the efficacy of CGMP-AA as compared with L-AA protein substitutes on bone mass, density, geometry, and turnover markers in children with PKU.
Methods
The inclusion criteria were as follows: children with PKU diagnosed by newborn screening; aged 5-16 years and not treated with sapropterin dihydrochloride; with known adherence to protein substitutes; and with maintenance of 70% of blood phenylalanine concentrations within the European PKU target therapeutic range for the 6 months prior to study enrolment [20]. Target blood phenylalanine ranges were 120 to ≤360 µmol/L for children aged 5-12 years and 120 to ≤600 µmol/L for children 12 years and older.
Ethical Approval
This study was registered by the Health Research Authority and was given a favourable ethical opinion by the South Birmingham Research Ethical Committee (referenced 13/WM/0435 and IRAS (integrated research application system) number 129497). Written informed consent was given by at least one caregiver with parental responsibility and written consent was obtained from the subjects if appropriate for their age and level of understanding.
CGMP-AA and L-AA Protein Substitutes
The CGMP-AA (a test product by Vitaflo International Ltd., Liverpool, UK) was a flavoured powder. Each 35 g sachet contained 20 g protein equivalent, and 36 mg phenylalanine, mixed with 120 mL of water. The flavoured L-AA was either a powder mixed with water or a ready-prepared liquid that provided 10, 15, or 20 g of protein equivalent. The CGMP-AA and L-AA products both had a similar nutritional and AA profile, except CGMP-AA contained residual phenylalanine and higher amounts of threonine and leucine.
Selection into the CGMP Group or L-AA Group
The children chose the product they preferred, depending on their taste preference, i.e., the CGMP-AA group or L-AA group. They remained on this formula for the duration of the study.
Study Design
The primary aim of this 3-year longitudinal study was to compare bone mass, density, and geometry of children with PKU taking CGMP-AA or L-AA as their primary protein source. The following examinations were conducted: dual-energy X-ray absorptiometry (DXA), together with blood bone biochemistry and blood and urine bone turnover markers. Peripheral quantitative computed tomography (pQCT) of the forearm was performed at 36 months only (Figure 1 and Table 1).
A previous pilot study [21] demonstrated that the residual phenylalanine in the CGMP-AA group led to compromised phenylalanine control in some children. Therefore, the CGMP-AA group was subdivided into: (1) the CGMP100 group, in which children took their entire protein substitute as CGMP-AA, and (2) the CGMP50 group, in which children took a combination of L-AA and CGMP-AA. There was also a third group of children who remained on their usual L-AA only (L-AA group).
DXA
A GE Lunar iDXA and Encore™ software version 13.1 g (GE Healthcare, Madison, WI, USA) were used to measure bone density at enrolment and at the end of 36 months. Trunk thickness and body weight were used as a guide for scanning each child in the most appropriate acquisition mode. Children lay supine on a bed while the DXA scan was completed. The following measurements were performed: lumbar spine (L2-L4) areal bone mineral density (L2-L4 BMDa) in g/cm², lumbar spine (L2-L4) bone mineral content (L2-L4 BMC) in g, total body bone mineral content (BMC) in g, total body less head BMDa (TBLH) in g/cm², and, as a size-corrected outcome measure, lumbar spine bone mineral apparent density (L2-L4 BMAD) in g/cm³. At 36 months, in addition to the DXA assessment, pQCT was also performed.
pQCT
The pQCT (Stratec XCT 2000 L, Pfozheim, Germany) measurements were taken at the 4% and 66% region of the non-dominant forearm, evaluating volumetric bone mineral density, together with muscle and bone geometry, size, and strength. At the 4% site, trabecular and total cross-sectional area were measured, while at the 66% site, cortical density, as well as muscle, bone, and fat area were measured. The pQCT also measured the strength strain index as a surrogate marker of bone strength.
Serum Blood and Urine Bone Turnover Markers
Fasting, early morning, venous blood samples were collected at enrolment, 6, 12, and 36 months for the following serum bone markers: procollagen type 1 N-terminal propeptide (P1NP), type 1 collagen β crosslinked C-telopeptide (β-CTX), and bone alkaline phosphatase (bone ALP). A urine sample, the second sample of the day, was collected at enrolment, 6, 12, and 36 months for urine creatinine adjusted free urine pyridinoline (fPYD/Ur Cr), urine free deoxypyridinoline crosslinks (fDPD/Ur Cr), and the urinary calcium/creatinine ratio (Ur Ca/Cr). Urine samples were collected in containers, which were wrapped in tin foil and put into an envelope to shield them from light. All urine samples were taken immediately to the laboratory for processing and stored at −80 °C. β-CTX and P1NP were analysed using an electro-chemiluminescence immunoassay (ECLIA) on a COBAS e601 analyser (Roche Diagnostics, Mannheim, Germany). The inter-assay coefficient of variation (CV) for β-CTX was <3% across the analytical range, between 0.01 and 6.0 µg/L, with a sensitivity of 0.01 µg/L. The inter-assay CV for P1NP was <3%, between 5 and 1200 µg/L, with a sensitivity of 5 µg/L. Serum bone ALP was determined with a MicroVue™ enzyme-linked immunosorbent assay (ELISA) kit (Quidel Corporation, San Diego, CA, USA). The inter-assay CV for bone ALP was <5.8%, between 0.5 and 150 U/L, with a detection limit of 0.7 U/L.
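The inter-assay CVs quoted above follow the standard definition, CV (%) = 100 × SD / mean, computed across repeat assay runs. A minimal sketch (function and variable names are illustrative):

```python
from statistics import mean, stdev


def inter_assay_cv(run_values):
    """Coefficient of variation (%) across assay runs: 100 * SD / mean."""
    return 100.0 * stdev(run_values) / mean(run_values)
```

For instance, repeat measurements of 9 and 11 µg/L (mean 10, SD √2) give a CV of about 14.1%.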
The analyses for urinary fPYD and fDPD were performed using the liquid chromatography tandem mass spectrometry (LC-MS/MS) method described by Tang et al. [22]. In brief, 0.5 mL of urine sample/calibration/quality control material pretreated with 0.5 mL hydrochloric acid (40% concentrate) was extracted using a solid phase extraction (SPE) column packed with cellulose slurry. Pyridinium crosslinks were eluted from the SPE columns and analysed by LC-MS/MS coupled with an electrospray ionisation (ESI) source operated in positive mode. The inter-assay CVs were ≤10.3% for PYD in the concentration range of 5-2000 nmol/L and ≤13.1% for DPD between 2 and 1000 nmol/L. The lower limit of quantification was 6 nmol/L for fPYD and 2.5 nmol/L for fDPD.
Urine creatinine was measured to obtain the fPYD/urine creatinine and fDPD/urine creatinine ratios and the urine calcium/creatinine ratio. Samples were analysed using Roche kinetic colorimetric assays performed on a COBAS® C501 analyser (Roche, Burgess Hill, UK), according to the manufacturer's instructions. The inter-assay CV ranged from 1.3 to 2.1% across the assay working range for Ur Ca of 0.20-7.5 mmol/L and Ur creatinine of 0.355 mmol/L.
Blood Biochemistry Markers
Overnight fasting blood samples for serum calcium, magnesium, phosphate, vitamin D, and parathyroid hormone were collected at enrolment, 6, 12, and 36 months.
Blood Phenylalanine/Tyrosine Monitoring
Throughout the 36-month study, trained caregivers collected weekly overnight fasting morning blood spots at home for phenylalanine and tyrosine. Blood specimens were sent via the post to the Birmingham Women's and Children's Hospital Laboratory. The blood spot filter cards used were Perkin Elmer 226 UK standard NBS (Perkin Elmer, Waltham, MA, USA). All the cards had a standard thickness, and the blood phenylalanine and tyrosine concentrations were calculated on a 3.2 mm punch by tandem mass spectrometry.
Pubertal Status
A general medical examination and pubertal status was measured at enrolment using the Tanner picture index. Stages 1 and 2 are classified as pre-pubertal, and Stages 3, 4, and 5 are classified as pubertal.
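The Tanner-stage classification above maps directly to a two-level label. A trivial sketch (function name is illustrative):

```python
def pubertal_status(tanner_stage: int) -> str:
    """Tanner stages 1-2 -> 'pre-pubertal'; stages 3-5 -> 'pubertal',
    following the classification used in this study."""
    if tanner_stage not in (1, 2, 3, 4, 5):
        raise ValueError("Tanner stage must be 1-5")
    return "pre-pubertal" if tanner_stage <= 2 else "pubertal"
```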
Anthropometric Measurements
Weight and height were measured once every 3 months by one of two metabolic dietitians. Height was measured using a Harpenden stadiometer (Holtain Ltd., Crymych, Wales, UK).
Statistical Methods
Continuous data are presented as median and interquartile ranges and categorical data are presented as frequencies of counts with associated percentages. Longitudinal data are presented graphically using profile plots to show the average change over time.
Correlations between continuous covariates were evaluated using Pearson's correlation coefficient. Comparisons between treatment groups were performed using analysis of covariance (ANCOVA) techniques, to analyse the follow-up data, while including baseline measures as adjusting covariates. Models also included covariates for patients' gender, age, and puberty status (supplementary data are provided for these parameters). A p-value of 0.05 was used throughout to determine statistical significance. All analyses were performed using R (Version 3).
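The analyses above were run in R. To illustrate the same ANCOVA structure (follow-up value regressed on treatment-group dummies plus baseline, age, and gender as adjusting covariates), here is a sketch in Python with NumPy on synthetic data; all names, group codes, and effect sizes are illustrative and not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 90
group = rng.integers(0, 3, size=n)           # 0 = L-AA, 1 = CGMP50, 2 = CGMP100
baseline = rng.normal(0.0, 1.0, size=n)
age = rng.uniform(5.0, 16.0, size=n)
male = rng.integers(0, 2, size=n).astype(float)
# Follow-up depends on baseline plus noise; no true group effect here.
followup = 0.8 * baseline + rng.normal(0.0, 0.3, size=n)

# Design matrix: intercept, two group dummies (L-AA as reference level),
# then the adjusting covariates -- this is ANCOVA as ordinary least squares.
X = np.column_stack([
    np.ones(n),
    (group == 1).astype(float),
    (group == 2).astype(float),
    baseline,
    age,
    male,
])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
# coef[3] estimates the baseline slope (~0.8); coef[1] and coef[2] are the
# baseline-adjusted group differences versus the reference group.
```

Because the synthetic data contain no true group effect, the fitted group-dummy coefficients should be near zero while the baseline slope is recovered near 0.8.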
Subjects
Fifty children (28 boys and 22 girls) with PKU were recruited. Forty-seven children were of European origin and three children were of Asian origin. Forty-eight children completed the study, 29 children in the CGMP-AA group and 19 children in the L-AA group. At enrolment, the median age (range) in the CGMP100 group was 9.2 years (5-16 years) (n = 13); in the CGMP50 group, the median age was 7.3 years (5-15 years) (n = 16), and in the L-AA group, the median age was 11.1 years (5-16 years) (n = 19). Only six children were able to tolerate >10 g/day of natural protein (CGMP100 n = 2, CGMP50 n = 1, and L-AA n = 3), all the others received <10 g/day of natural protein.
Subject Drop Out
One boy and one girl (both aged 12 years) in the CGMP-AA group were excluded from the study as both failed to comply with the study protocol. One failed to return blood phenylalanine samples and both had poor adherence to the low phenylalanine diet.
Median DXA Z Score Measurements for CGMP100, CGMP50, and L-AA Groups
Overall, there were no significant differences among the groups for any of the measured DXA parameters. Bone density was on the lower side of normal but within the normal reference range (Table 2).

Table 2. Median z scores (range) for L2-L4 bone mineral density (BMDa), lumbar spine bone mineral apparent density (L2-L4 BMAD), and total body less head BMDa (TBLH), together with median (range) L2-L4 bone mineral content and total bone mineral content, for the CGMP100, CGMP50, and L-AA groups at enrolment and 36 months.

Similar to the DXA z score measurements, overall there were no significant differences among the groups, but cortical density at the 66% site was statistically significantly different between the CGMP100 and L-AA groups (Table 3).

Table 3. Results from the pQCT scan measuring median z scores (range) for trabecular, cortical, and total densities at the 4% site; bone, muscle, and fat areas; strength strain index; and bone area/muscle area at 36 months in the CGMP100, CGMP50, and L-AA groups.
Nutritional Bone Biochemistry Markers
Median concentrations for all the biochemistry markers (calcium, phosphate, magnesium, vitamin D, and parathyroid hormone) were within normal reference ranges for all the groups over the 36-month study period (Table 4). There were no statistically significant differences within or among the groups.
Measurement for Bone Formation Markers and Urine Calcium
The urine calcium/creatinine ratio (Ur Ca/Cr), a measure of renal acid excretion, was normal, with no indication of excess calcium excretion (Table 5). Similarly, serum and urine BTM showed a physiological decrease with age and no evidence of a disturbance between formation and resorption.

Legend: M, males; F, females; CGMP, casein glycomacropeptide; CGMP100, children taking all their protein substitute as casein glycomacropeptide; CGMP50, children taking a combination of casein glycomacropeptide and amino acids; L-AA, amino acids; β-CTX, type 1 collagen β crosslinked C-telopeptide; bone ALP, bone alkaline phosphatase; P1NP, procollagen type 1 N-terminal propeptide; fDPD, urine free deoxypyridinoline; fDPD/Ur Cr, deoxypyridinoline (free)/creatinine ratio; fPYD, urine free pyridinoline; fPYD/Ur Cr, pyridinoline (free)/creatinine ratio; Ur Ca/Cr, urine calcium/creatinine ratio; Ur Cr, urine creatinine. Standard references for children are not available.
A strong positive correlation was observed between P1NP and β-CTX at 36 months (r = 0.82) (Figure 2). The ANCOVA analysis performed on P1NP indicated that the level of P1NP was somewhat dependent on age, with older subjects having lower P1NP levels. Furthermore, there was evidence of an increase in P1NP at 36 months associated with CGMP100 as compared with L-AA (p = 0.041) (Figure 3). There was no difference between the CGMP50 and L-AA groups (p = 0.80).
Anthropometry
We have previously reported height, weight, and body mass index in this group of children [23]. At 36 months, all groups had a median positive height z score: L-AA, 0.2 (range 0 to 0.5); CGMP50, 0.3 (range −0.1 to 0.7); and CGMP100, 0.6 (range 0.1 to 0.7). Median weight-for-height z scores and BMI z scores were above the ideal reference mean, indicating an overweight group of children (Table 6).

Table 6. Median z scores (range) for height, weight, and BMI, measured annually from enrolment to 36 months, in children with PKU taking L-AA, CGMP50, or CGMP100.
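The growth z scores reported above are conventionally derived from age- and sex-specific reference centiles via Cole's LMS method. The paper does not state its exact calculation, so the following is a generic sketch with made-up reference parameters:

```python
import math


def lms_z_score(x: float, L: float, M: float, S: float) -> float:
    """Cole's LMS transformation of a measurement x against reference
    parameters: skewness (L), median (M), coefficient of variation (S).
    z = ((x/M)**L - 1) / (L*S) for L != 0; z = ln(x/M)/S when L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)
```

With illustrative parameters L = 1, M = 100 cm, S = 0.05, a height of 105 cm maps to z = +1 and 90 cm to z = −2.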
Blood Phenylalanine Concentrations
The median phenylalanine concentrations for this study have been previously reported. Median phenylalanine concentrations were within recommended target reference ranges for children aged ≤11 and ≥12 years old [23].
Discussion
In this 36-month longitudinal study in children with PKU, bone mass, density, and geometry were comprehensively examined by DXA and pQCT, in addition to serum BTM and blood biochemistry. With the exception of cortical density at the 66% site, none of the other bone measurements showed any benefit of CGMP100 over L-AA or CGMP50, suggesting that CGMP-AA had no advantage over L-AA for bone development. Similarly, there was no evidence to suggest any differences in bone mass, density, or geometry by gender, age, or puberty (Supplementary Tables S1 and S2).
A strong positive correlation between β-CTX and P1NP was observed in all three study groups, with P1NP being lower in the older subjects and an increased P1NP being evident in the CGMP100 group. This synergy between bone formation and resorption shows active bone turnover and reflects appropriate bone growth, since these markers derive from physiological processes. Our results contrast with those reported by Casto et al. [24], which suggested a trend towards increased bone resorption in subjects with PKU. This controlled study was the first to monitor bone mass and density using two separate imaging technologies (DXA and pQCT) and to holistically assess serum bone, urine, and blood biochemistry parameters in PKU. Similar to findings from two systematic reviews [24,25], the overall bone density values for the groups were below the population mean but within the normal reference range. Imaging results met the International Society for Clinical Densitometry (ISCD) recommendations (ISCD 2013) [26]. There were no differences in biochemical markers or BTM among the groups, suggesting no changes in bone metabolism attributable to the type of protein substitute. As expected, BTM concentrations decreased in older adolescents towards lower adult levels, a physiological phenomenon seen in healthy populations [27].
Unlike the findings of Schwahn et al., McMurry et al., and Fernandez et al. [28][29][30], we found no evidence to suggest that mineralization defects began in childhood and then became more evident in adolescence. In this study, the groups of children were overweight. The relationship between overweight, obesity, and bone health is contentious.
Evidence [31] suggests that in early childhood, obesity confers a structural advantage on the developing skeleton, but with age this relationship is reversed and becomes detrimental to skeletal development. Clarke et al. [32] reported a positive relationship between adiposity and bone mass accrual in 3082 healthy children, while others [33,34] have reported opposite findings. Lean body mass has been shown to be the strongest predictor of bone mineral content [35,36] and relates to bone mass and skeletal development in children. Our previous study [37] indicated a trend towards improved lean body mass in the CGMP100 group; however, there was no evidence to suggest a similar beneficial effect on bone density in this group.
In PKU mouse models, CGMP, as compared with L-AA, has been shown to increase bone strength measured by biomechanical testing. Solverson [19] gave PKU and wild-type mice different dietary regimens, i.e., a normal diet or a low phenylalanine diet supplemented with L-AA or CGMP protein substitutes. The PKU mice, regardless of protein substitute type, had lower bone density than wild-type mice, and those taking L-AA had inferior bone strength compared with the CGMP protein substitute group. The authors proposed that the peptide structure of CGMP could account for the positive influence on radial bone size, improving biomechanical performance. Alternatively, the high acid load due to L-AA could decrease bone strength via excretion of higher amounts of calcium. However, both these suggestions were conjecture, as the authors did not measure net acid excretion, bone collagen, or markers of bone biomechanical performance. The results from our cohort of children suggest that neither of these mechanisms is active: the BTM monitoring collagen were physiologically normal, and there was no evidence of excess net acid excretion, with a normal calcium/creatinine ratio.
Although many studies have identified lower BMD in PKU [38][39][40][41], not all of these studies included a size correction for DXA output, and there has been little agreement about the pathophysiology of the lower BMD. Dobrowolski et al. [42] studied bone mineralization in PKU mice and showed that phenylalanine toxicity inhibited bone mineralization. However, in human studies, there is discord regarding the link between hyperphenylalaninemia and bone mass, with some studies showing a correlation and others not [38,40,43,44].
Within the three groups (CGMP100, CGMP50, and L-AA) there were expected physiological changes in the concentrations of BTM. In adults, BTM mainly represent bone remodelling; in children, BTM are released during bone remodelling, modelling, and perpendicular growth. Millet et al. [44] measured urine DPD and bone ALP in patients with PKU and compared these with a healthy paediatric group; bone remodelling was active in children with PKU aged 7-14 years, and bone ALP, as expected, was found to be significantly lower in the oldest group of patients (aged >18 years), although significantly higher DPD concentrations independent of age were reported. In our study, bone resorption and formation markers were consistently lowest in the L-AA group, particularly noticeable in the L-AA girls who had reached late puberty with a median age of 17 (8-18 years) at 36 months [27,43,45,46]. In contrast, the youngest group of CGMP50 boys showed an increase in BTM over the 36 months.
The interpretation of BTM is difficult, and their concentrations vary widely in children, affected by a multitude of factors including age, gender, puberty, growth velocity, the rate of mineral accrual, hormonal regulation, nutritional status, circadian rhythm, and even day-to-day changes [47]. Paediatric reference data are available for some BTM [48][49][50][51], although UK-specific data are lacking, which hampers appropriate interpretation. Variation in specificity for bone tissue, as well as in the sensitivity and specificity of the measurement assays, renders comparisons among study groups difficult [50,52]. Despite these challenges, in our study, in which children were followed for 36 months, BTM followed the expected variations for age, with no differences between the groups. These children had an active bone turnover profile, supportive of a normal bone mineral density. The reason why their bone mineral densities were below the population median was unclear, but these groups were not at any increased clinical risk of fracture.
There are limitations to this study. Patient numbers in each group were small, which reduced the power of the study. An extended follow-up period of more than 3 years may be needed for any differences between protein substitute sources to emerge; as noted, P1NP was increased in the CGMP100 group. We also did not have a healthy control group, which would have been useful for comparison with the children with PKU. The ages of the children differed significantly among the three groups, and CGMP was given at two different doses, making absolute differences difficult to recognize, although statistical modelling was used to account for these variables. Age influences bone changes, and children entered puberty over the study period. In children, no bone marker is specific for any of the three different biological processes of modelling, remodelling, and changes in endochondral ossification. However, our findings were consistent: all measurements, whether taken by DXA or pQCT, showed a below-average bone density, with no significant differences among the groups taking CGMP-AA or L-AA. Bone markers appeared to follow a similar pattern to that in healthy children. We did not measure exercise activity in these groups of children, but a high proportion (60%) participated in regular activities such as football, dancing, and gymnastics.
Conclusions
In this detailed and comprehensive study measuring global bone development, using both two-and three-dimensional imaging in addition to serum BTM and blood biochemistry, a complete assessment of bone mass, density, geometry, and bone turnover was conducted. There were no statistical differences in the groups of children, who had good metabolic control when taking either L-AA or CGMP-AA protein substitutes. Bone density was normal and similar to the findings from systematic reviews, which suggests it was lower than the population norm but carried no increased osteoporotic risk. Bone remodelling processes appear to be active in children with PKU, with both L-AA and CGMP-AA protein substitutes supporting normal bone growth.
NpPP2-B10, an F-Box-Nictaba Gene, Promotes Plant Growth and Resistance to Black Shank Disease Incited by Phytophthora nicotianae in Nicotiana tabacum
Black shank, a devastating disease affecting tobacco production worldwide, is caused by Phytophthora nicotianae. However, few genes related to Phytophthora resistance have been reported in tobacco. Here, we identified NpPP2-B10, a gene strongly induced by P. nicotianae race 0, with a conserved F-box motif and Nictaba (tobacco lectin) domain, in the highly resistant tobacco species Nicotiana plumbaginifolia. NpPP2-B10 is a typical F-box-Nictaba gene. When it was transferred into the black shank-susceptible tobacco cultivar ‘Honghua Dajinyuan’, it was found to promote resistance to black shank disease. NpPP2-B10 was induced by salicylic acid, and some resistance-related genes (NtPR1, NtPR2, NtCHN50, and NtPAL) and resistance-related enzymes (catalase and peroxidase) were significantly upregulated in the overexpression lines after infection with P. nicotianae. Furthermore, we showed that NpPP2-B10 actively regulated the tobacco seed germination rate, growth rate, and plant height. The erythrocyte coagulation test of purified NpPP2-B10 protein showed that NpPP2-B10 had plant lectin activity, and the lectin content in the overexpression lines was significantly higher than that in the WT, which could lead to accelerated growth and improved resistance of tobacco. SKP1 is an adaptor protein of the E3 ubiquitin ligase SKP1, Cullin, F-box (SCF) complex. We demonstrated that NpPP2-B10 could interact with the NpSKP1-1A gene in vivo and in vitro through yeast two-hybrid (Y2H) and bimolecular fluorescence complementation (BiFC), indicating that NpPP2-B10 likely participates in the plant immune response by mediating the ubiquitin protease pathway. In conclusion, our study provides some important insights concerning NpPP2-B10-mediated regulation of tobacco growth and resistance.
Introduction
Tobacco black shank is a devastating disease caused by Phytophthora nicotianae, which harms the roots, stems, and leaves of tobacco at various growth stages, causing water-like disease lesions, yellowing, and even death of the plants [1]. Black shank-resistant cultivated tobacco germplasm sources are scarce, and only the resistant cultivars 'Florida 301' [2] and 'Beinhart 1000' [3] are commonly mentioned. Nevertheless, some wild species in the genus Nicotiana, such as N. rustica, N. longiflora, and N. plumbaginifolia, are highly resistant to black shank [4,5]. Their resistance has been successfully transferred to cultivated tobacco by distant hybridization [6]. A study by Goins and Apple [7] showed that the resistance of genes are most likely to play a role in the innate immune response of plants. However, the F-box-Nictaba proteins in tobacco have not been implicated in the molecular function of plant disease resistance. Tobacco has always been severely affected by black shank disease, and there is no evidence related to Phytophthora resistance in the study of F-box-Nictaba proteins in other species. Therefore, it is particularly important to further elucidate the role of F-box-Nictaba genes in tobacco disease resistance.
In this study, we identified the F-box-Nictaba gene NpPP2-B10 in N. plumbaginifolia and overexpressed it in the black shank-susceptible tobacco cultivar 'Honghua Dajinyuan'. We found that NpPP2-B10 can improve the resistance of tobacco to P. nicotianae race 0 and could positively regulate the growth and development of tobacco. This is of great significance to the agricultural production of tobacco. Furthermore, our study revealed that NpPP2-B10 possesses both F-box protein properties and plant lectin activity, suggesting that NpPP2-B10 has the ability to bind polysaccharides and participate in protein degradation. In conclusion, our results provide an important reference for further understanding the nature of black shank resistance in N. plumbaginifolia.
Characterization of NpPP2-B10
In previous studies, we analyzed the differentially expressed genes of the highly resistant tobacco species N. plumbaginifolia and of the moderately resistant cultivar 'Yunyan 87' identified by transcriptome sequencing [9]. One unigene, c62451.graph_c0, was specifically expressed in N. plumbaginifolia. Further analysis by real-time quantitative polymerase chain reaction (RT-qPCR) confirmed that c62451.graph_c0 was rapidly induced 6-72 h after infection with P. nicotianae race 0. Its expression was highest at 72 h after infection, at which point it was 15 times that of the control (Figure 1a), suggesting that the gene responds to infection by P. nicotianae race 0.
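Fold-change values like the 15-fold induction above are commonly computed from RT-qPCR Ct values with the 2^(−ΔΔCt) (Livak) method. The paper does not state its exact calculation, so the following is a generic sketch; all numbers and names are illustrative.

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Livak 2**(-ddCt): fold change of a target gene, normalized to a
    reference gene, in a treated sample relative to a control sample."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)
```

For example, illustrative Ct values of 20/18 (treated) versus 24/18 (control) give ΔΔCt = −4 and a 16-fold induction, a magnitude comparable to the 15-fold change reported above.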
Plant hormones play an important role in the expression and regulation of plant defense genes and in responses to biotic and abiotic stresses. Salicylic acid (SA), methyl jasmonate (MeJA), and ethylene (ETH) are important signaling molecules in plants that can activate downstream defense-related genes [29]. The expression of c62451.graph_c0 in N. plumbaginifolia was significantly higher than that in the control group 6-72 h after SA treatment, and the expression level was 8.5 times higher than that in the control group after 48 h (Figure 1b). In addition, c62451.graph_c0 was also induced by MeJA and inhibited by ETH (Figure S1), but c62451.graph_c0 was more responsive to SA than to these two hormones. These results suggest that c62451.graph_c0 may be regulated by the SA signaling pathway.
To further investigate the function of this gene, we conducted a BLAST search in NCBI using the partial sequence. Based on the BLAST results, we cloned the gene and named it NpPP2-B10 (GenBank: OM264753). The total length of the coding sequence (CDS) of the gene was 786 bp (Figure S2), the open reading frame (ORF) encoded 261 amino acids, and the calculated molecular weight (MW) was 29,643.71. Conserved domain analysis revealed that the N-terminus of the gene contained an F-box protein motif, and that the C-terminus contained a PP2 structure (Nictaba-related lectin domain), which had an additional F-box motif that common Nictaba lacks (Figure 1c). Therefore, NpPP2-B10 can be classified as an F-box-Nictaba class protein. In previous studies, two tryptophan residues in the Nictaba domain (Trp 15 and Trp 22) were considered key to the binding of tobacco lectin to carbohydrates [30]. Similarly, these two Trp residues are also highly conserved in NpPP2-B10. This suggests that NpPP2-B10 has potential lectin activity.
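The numbers above are internally consistent: a 786 bp CDS comprises 262 codons, the last of which is the stop codon, leaving 261 encoded amino acids. A minimal check (function name is illustrative):

```python
def protein_length_from_cds(cds_bp: int) -> int:
    """Number of encoded amino acids: codons minus the stop codon."""
    if cds_bp % 3 != 0:
        raise ValueError("CDS length must be a multiple of 3")
    return cds_bp // 3 - 1
```

Applied to the 786 bp NpPP2-B10 CDS, this yields the 261 residues reported above.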
To examine the subcellular localization of NpPP2-B10, we constructed a 35S::NpPP2-B10-eGFP fusion expression vector controlled by the CaMV35S promoter and used Agrobacterium tumefaciens for transient expression in the leaves of N. benthamiana. After 48-72 h of dark culture, the transgenic leaves were stained with 4,6-diamidino-2-phenylindole (DAPI) and observed under a fluorescence microscope. As shown in Figure 1d, green fluorescence was observed in the nucleus and cytoplasm of tobacco cells transformed with the 35S::NpPP2-B10-eGFP recombinant plasmid. These results indicate that NpPP2-B10 localizes to both the nucleus and the cytoplasm, consistent with the subcellular localization of other Nictaba-related proteins [26].

Figure 1. Characterization of NpPP2-B10. (a,b) Expression pattern of the c62451.graph_c0 gene in tobacco treated with P. nicotianae race 0 and SA. Asterisks denote significant differences (compared with the control group): ** p < 0.01, by Student's t-test; ns, no significant difference. (c) Sequence alignment of NpPP2-B10 with the F-box-Nictaba protein of Arabidopsis thaliana and the Nictaba protein of N. tabacum. Alignment symbols: *, identical residues; :, residues that are particularly similar in nature; ., residues that are slightly similar in nature. (d) Subcellular localization of NpPP2-B10 in N. benthamiana epidermal cells. GFP: green fluorescent protein. DAPI: nuclear marker. Merged: merged image of the bright field, GFP, and DAPI results. The image is a random field of view taken from a fluorescent region of the tobacco epidermis.
Overexpression of NpPP2-B10 Promoted Seed Germination and Plant Growth in Tobacco
To investigate the function of NpPP2-B10, we constructed a 35S::NpPP2-B10 expression vector and transformed it into the black shank-susceptible variety 'Honghua Dajinyuan'. Nine 35S::NpPP2-B10 transgenic lines were obtained by kanamycin screening and PCR identification (Figure S3). The self-crossed T1 seedlings of the three lines with the highest expression levels were used to study the phenotypes of the transgenic plants. We grew the self-crossed progenies of the three transgenic tobacco lines and wild-type (WT) seeds on Murashige and Skoog (MS) medium and observed that the germination of seeds from the transgenic lines was better than that of the WT seeds one week after seeding (Figure 2a). We also sowed the NpPP2-B10-OE lines and the WT on wet filter paper and recorded their germination rates over the course of one week. The statistics showed that the seed germination rate of the NpPP2-B10-overexpressing lines was significantly higher than that of the WT at 4-7 days after seeding (Figure 2b). We then transferred the germinated seedlings to sterilized soil and observed that the seedlings of the overexpression lines were larger than the WT seedlings (Figure S4). At the bolting stage, the NpPP2-B10-OE plants were taller than the WT plants (Figure 2c). We measured the heights of T1 plants from the NpPP2-B10-OE lines and of the WT offspring, and the results showed that the NpPP2-B10-OE plants were significantly taller than the WT plants (Figure 2d). These results suggest that the expression of NpPP2-B10 in tobacco promotes plant growth and development. Lectins are abundant proteins in seeds and vegetative tissues and are considered storage proteins that provide nitrogen sources for plants [18]. We hypothesize that the increased lectin content in the NpPP2-B10-OE lines caused this phenotype.
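The height comparison above rests on a two-sample t-test (as in the Figure 2d statistics). A minimal sketch of the underlying t statistic, using invented example heights rather than measurements from this study:

```python
# Sketch of the two-sample comparison behind height statistics such as
# Figure 2d. The heights below are hypothetical example values (cm),
# not data from this study.
import math
import statistics as st

def welch_t(sample_a, sample_b):
    """Welch's t statistic (does not assume equal variances)."""
    ma, mb = st.mean(sample_a), st.mean(sample_b)
    va, vb = st.variance(sample_a), st.variance(sample_b)  # sample variance
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (ma - mb) / se

oe_heights = [98.0, 102.0, 100.0, 104.0, 101.0]  # hypothetical OE line
wt_heights = [88.0, 91.0, 90.0, 87.0, 89.0]      # hypothetical WT
t = welch_t(oe_heights, wt_heights)
print(t > 0)  # True: the OE mean exceeds the WT mean
```

The statistic would then be compared against a t distribution to obtain a p-value; here only the statistic itself is computed.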
Overexpression of NpPP2-B10 Enhanced Tobacco Resistance to Black Shank Disease
Based on previous studies on plant F-box-Nictaba proteins, we speculated that NpPP2-B10 might play a role in plant disease resistance. We selected five-week-old seedlings of the NpPP2-B10-OE lines and WT at the same growth stage and treated them with the P. nicotianae race 0 spore suspension by means of root irrigation. Three days after root irrigation, tissue necrosis and decay appeared in the roots and stems of the WT seedlings, while no corresponding symptoms appeared in the stems and roots of the NpPP2-B10-OE lines (Figure 3a). At four to nine days after root irrigation, we observed that both the WT and NpPP2-B10-OE lines showed symptoms of black shank disease. As shown in Figure 3b, the incidence of infection in the NpPP2-B10-OE lines was lower than that in the WT. The statistical results showed that the OE-3 line seedlings and OE-12 line seedlings had a significantly lower incidence of infection than the WT seedlings during the period from four to nine days after root irrigation, and that the OE-44 line seedlings (the line of seedlings with the lowest NpPP2-B10-OE expression) had a significantly lower incidence of infection than the WT seedlings only at four dpi ( Figure 3c). These results indicated that NpPP2-B10 positively regulates tobacco resistance at the seedling stage.
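The incidence figures above are simple proportions of symptomatic seedlings. A minimal sketch of that calculation, with hypothetical per-line counts (not data from this study):

```python
# Sketch of the disease-incidence calculation behind Figure 3c.
# The seedling counts below are invented for illustration.

def incidence_percent(diseased, total):
    """Percentage of seedlings showing black shank symptoms."""
    return 100.0 * diseased / total

# e.g. per-line counts at one time point after root irrigation:
counts = {"WT": (18, 30), "OE-3": (6, 30), "OE-12": (7, 30), "OE-44": (12, 30)}
rates = {line: incidence_percent(d, n) for line, (d, n) in counts.items()}
print(rates["WT"])    # 60.0
print(rates["OE-3"])  # 20.0
```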
In the field, tobacco black shank mainly harms the stems and roots of adult plants. Therefore, we further observed the root phenotype of tobacco infected with P. nicotianae race 0 in mature plants. Due to the higher lignin content of adult tobacco, both the NpPP2-B10-OE lines and the WT had enhanced resistance to black shank, so the phenotype of the NpPP2-B10-OE lines at this stage was not as obvious as at the seedling stage. Two days after pathogen infection, we observed the development of lesions on the stems of the tobacco plants, and the lesion area on the NpPP2-B10-OE plants was significantly smaller than that on the WT plants (Figure 3d).
The resistance of the OE-44 plants was still lower than that of OE-3 and OE-12 plants, which was consistent with the results from the root irrigation experiment. These results indicate that the NpPP2-B10 gene positively regulates tobacco resistance to black shank disease at the mature stage.
Validation of Lectin Activity of the NpPP2-B10 Protein
Based on the existence of conserved carbohydrate-binding sites (Trp 15 and Trp 22) in NpPP2-B10, we speculated that NpPP2-B10 would have potential lectin activity. To verify this conjecture, we constructed the prokaryotic expression vector pET-SUMO-NpPP2-B10. The purified NpPP2-B10 protein was obtained by prokaryotic expression (Figure 4b). One percent mouse red blood cells and two percent rabbit red blood cells were used to validate the lectin activity of the protein, and the red blood cells, the target protein, and physiological saline were proportionally mixed and allowed to sit for half an hour. If the target protein had lectin activity, it would combine with polysaccharides on the surface of red blood cells, thereby causing agglutination between cells, and erythrocyte sedimentation would not occur. Instead, red blood cells from the lectin-free wells would settle to the bottom of the V-shaped well [31]. The results showed that the red blood cells agglutinated and did not precipitate in the wells added with NpPP2-B10 protein and in the positive control (ConA) (Figure 4a). The results of gradient dilution showed that at least 31 µg/mL of NpPP2-B10 could agglutinate 1% of mouse erythrocytes, and at least 15.5 µg/mL of NpPP2-B10 could agglutinate 2% of rabbit erythrocytes. This indicates that the specificity of NpPP2-B10 for rabbit erythrocytes was higher than that for mouse erythrocytes. Taken together, these results suggest that the NpPP2-B10 protein possesses lectin activity.
During the tobacco seedling stage, we selected leaves from the NpPP2-B10-OE plants and the WT plants to measure the lectin content. The results showed that the lectin content of the NpPP2-B10-OE lines was significantly higher than that of the WT (Figure 4c). The OE-3 line had the highest lectin content, 1.6 times that of the WT. These results indicate that the NpPP2-B10 protein has lectin activity and was successfully overexpressed in the NpPP2-B10-OE lines.
Interaction between NpPP2-B10 and NpSKP1-1A
In Arabidopsis, wheat, rice, and other crops, the SKP1 protein has been shown to be the joint protein of the SCF complex in E3 ubiquitin ligase. The interaction with SKP1 in yeast has been used to determine that F-box proteins are a component of the SCF complex [32].
To determine whether NpPP2-B10 plays a role as part of the SCF complex, we screened two genes annotated as SKP1 family genes, C74462.graph_c0 (NpSKP1-1A) and C85975.graph_c0 (NpSKP1-21), from N. plumbaginifolia transcriptomes by using qPCR. These two genes responded to black shank disease infection: the expression level of C74462.graph_c0 in the infected plants was 73 times higher than that in the uninfected controls at 6 h after infection, and the expression level of C85975.graph_c0 was 9 times higher (Figure 5a,b). Then, we cloned these two genes (Figure S5) and named them NpSKP1-1A (OM264752) and NpSKP1-21 (OM264751), respectively. Analysis of the conserved domains of both showed that NpSKP1-1A had conserved Cullin-binding sites and F-box protein-binding sites (Figure 5c), while NpSKP1-21 did not (Figure S6). We fused NpPP2-B10 into the GAL4 DNA-binding domain and NpSKP1-1A and NpSKP1-21 into the GAL4 activation domain for the yeast two-hybrid experiment, and the results showed that NpPP2-B10 interacts with NpSKP1-1A but not with NpSKP1-21 (Figure 5d). We linked NpPP2-B10 and NpSKP1-1A to the vectors SPYNE173 and pSPYCE, respectively, and successfully constructed the expression vectors SPYNE173-NpPP2-B10 and pSPYCE-NpSKP1-1A. Then, A. tumefaciens carrying the two recombinant vectors was mixed and injected into tobacco plants and cultured for 48-72 h for fluorescence observation. The results showed that there was no fluorescence in the negative control, but fluorescence could be seen in the nucleus and cytoplasm in the experimental group (Figure 5e). These results indicate that NpSKP1-1A interacts with the NpPP2-B10 protein in vitro and in vivo, suggesting that NpPP2-B10 may mediate the ubiquitin protease pathway to regulate tobacco resistance to black shank disease.
NpPP2-B10 Enhanced the Expression of Disease Resistance-Related Genes and the Activity of Disease Resistance-Related Enzymes
NtPR1, NtPR2, and NtCHN50 are genes induced by SA and are related to disease resistance [33]. The enzymes catalase (CAT), peroxidase (POD), and phenylalanine ammonium lyase (PAL) have been reported to be related to plant disease resistance [34,35]. To further study the downstream disease resistance-related pathways triggered by NpPP2-B10, we detected the expression levels of disease resistance-related genes and the activities of disease resistance-related enzymes in tobacco after pathogen irrigation. The RT-qPCR results showed that the SA induction-related genes NtPR1, NtPR2, NtCHN50, and NtPAL were upregulated in the tobacco plants after infection and that the expression levels of the above four genes in the NpPP2-B10-OE lines were significantly higher than those in the WT at 3 dpi and 7 dpi (Figure 6a, Figure S7). Of the four genes, NtPR1 had the strongest response to pathogen infection, with the response in the OE-12 line being 43 times higher than that in the WT at 7 dpi (Figure 6a). The enzyme activity assays showed that CAT activity in tobacco first increased and then decreased from 0 to 96 hpi, that CAT activity in the NpPP2-B10-OE lines was significantly higher than that in the WT at 12, 24, and 48 hpi, and that CAT activity in the OE-3 line was 6 times higher than that in the WT at 12 hpi (Figure 6b). The POD and PAL activity patterns differed somewhat from that of CAT: POD activity in the NpPP2-B10-OE lines was significantly higher than that in the WT at 6, 12, 24, and 48 hpi, while PAL activity was significantly higher than that in the WT only at 6 and 72 hpi (Figure S8).
Discussion
Unlike most Nictaba-related lectin genes, NpPP2-B10 carries an F-box motif at its N-terminus, which may result in functional differences [27]. Nictaba lectin was first discovered in N. tabacum and is normally unexpressed in leaves, but it is induced by MeJA treatment [20]. However, when N. plumbaginifolia was treated with MeJA, the expression of NpPP2-B10 first decreased and then increased, which differs from the expression pattern of Nictaba lectin in N. tabacum. Furthermore, NpPP2-B10 was strongly induced from 0 h to 72 h after SA treatment in N. plumbaginifolia, and the expression level was highest at 72 h. This is consistent with the expression pattern of other F-box family genes [36]. For example, the TaPP2-A13 gene of wheat was induced by SA treatment, and its expression level was highest at 24 h after SA treatment [37]. These results reflect that the expression of F-box family genes is closely related to salicylic acid.
The subcellular localization of NpPP2-B10 was in the nucleus and cytoplasm, which may be related to the specific binding between the lectin translated by the NpPP2-B10 gene and the glycoproteins in the cytoplasm and nucleus. For example, Nictaba binds to the O-GlcNAc unit of histone in N. tabacum, thus indirectly changing the transcription of some genes [38]. This may also be related to the F-box characteristics of the NpPP2-B10 gene. The F-box protein of maize mainly exists in the cytoplasm (37%), chloroplast (28%), and nucleus (25%) [39]. In wheat, 803 (79.3%) F-box proteins were only located in the nucleus, while 132 (13.0%) F-box proteins seemed to have dual or multiple loci [40]. This may be related to the fact that the F-box protein-mediated ubiquitin protease pathway is involved in the degradation of various proteins in plant cells.
We induced the overexpression of the NpPP2-B10 gene in the susceptible variety 'Honghua Dajinyuan' and found that it could significantly improve the resistance of tobacco to black shank disease. In studies on the pathogenesis of citrus Huanglong disease, phloem protein (PP2) has been shown to bind with filamentous protein to exacerbate duct blockage, resulting in blockage of the vessel [41], which leads to the onset of citrus Huanglong disease. However, in this study, the overexpression of the NpPP2-B10 gene promoted tobacco resistance to black shank. This is likely related to the F-box protein properties (mediated ubiquitin protease pathway) of NpPP2-B10. To investigate this further, we assessed the lectin activity of NpPP2-B10 by using an erythrocyte coagulation assay, and the interaction between NpPP2-B10 and the hypothesized SKP1 family protein NpSKP1-1A in vitro and in vivo was assessed by using a yeast two-hybrid assay and a BiFC assay. These results indicate that NpPP2-B10 possesses both F-box protein and lectin properties. It is speculated that this plant carbohydrate-binding F-box protein may have a similar glycoprotein degradation function as the mammalian FBS protein [27]. Tobacco black shank is a disease caused by the oomycete Phytophthora that mainly harms the roots and stems of tobacco in the field. In previous studies, the pathogenic factor has been speculated to be the toxin produced by P. parasitica, and researchers currently believe that the toxin is a glycoprotein [14]. We hypothesized that glycoproteins secreted by pathogens were degraded by the glycoprotein degradation function of the NpPP2-B10 protein, which led to the enhancement of tobacco resistance, and we will carry out further studies to test this hypothesis in the future.
Due to the anti-insect/bacterial/fungal/virus functions of plant lectins in nature, it has been speculated that plant lectins may be part of plant immunity through their action of binding to glycoproteins [42], but the specific regulatory mechanism has not been explained. In our study, NpPP2-B10 was strongly induced by SA, and the expression levels of the SA-dependent defense pathway-related genes NtPR1 and NtCHN50 in the NpPP2-B10-OE lines were significantly higher than those in the WT after infection with P. nicotianae race 0. Similar results were found in a study of the F-box Nictaba protein in A. thaliana, wherein overexpression of this gene led to increased resistance and the expression of the PR1 gene significantly increased after pathogen infection [43]. Therefore, we hypothesized that the NpPP2-B10 gene may affect plant immune function by participating in the regulation of the SA pathway.
Interestingly, in contrast to most plant defense-related genes, the NpPP2-B10 gene also promoted the growth and development of tobacco, mainly showing that NpPP2-B10-OE lines were superior to the WT in terms of the seed germination rate, growth rate, and plant height. Previously, researchers overexpressed the fungal lectin CCL2 in A. thaliana and found that it positively affected plant growth, mainly showing that the fresh and dry weights of CCL2-overexpressing plants were higher than that of WT plants [44]. However, the researchers did not explain the reasons for the positive effects of CCL2 on plant growth and development in greater detail. Plant lectins, as storage proteins, could provide nutrients for seeds. We speculated that the lectin activity of NpPP2-B10 promotes plant growth, because we detected a higher lectin content in the NpPP2-B10-OE lines than in the WT.
In our study, we demonstrated that NpPP2-B10 promoted tobacco resistance to black shank disease and enhanced tobacco growth and development. Meanwhile, we also demonstrated that the NpPP2-B10 protein has F-box protein characteristics and lectin activity. We speculated that this is related to tobacco resistance, but the specific regulatory mechanism remains to be studied. Future studies can be carried out in two areas. One is to verify whether NpPP2-B10 protein can degrade exogenous glycoproteins of P. nicotianae race 0 to explain the direct cause of the reduction in the tobacco black shank disease spot area. The second area of focus is to investigate the substrate targeted by NpPP2-B10 to further promote the specific mechanism of the NpPP2-B10-mediated ubiquitin protease pathway in regulating resistance.
Based on our findings, we hypothesized a putative model for NpPP2-B10 regulation of tobacco resistance (Figure 7). In the normal environment, NpPP2-B10 interacts with SKP1 proteins to form the SCF complex, which degrades an unidentified negative regulator through the ubiquitin-proteasome system (UPS) pathway and then maintains the normal expression of some plant disease resistance-related proteins and enzymes (such as PR1, PR2, PAL, and CAT), which maintains tobacco resistance. At the same time, due to the tobacco lectin characteristics of NpPP2-B10, it may regulate plant resistance by recognizing glycoproteins of exogenous fungi.
Plant Materials and Growth Conditions
The following materials were selected for this study: the resistant tobacco species N. plumbaginifolia, the tobacco variety 'Honghua Dajinyuan', N. benthamiana, and three independent T1 lines of transgenic tobacco.
The seeds of N. plumbaginifolia were placed on wet filter paper and treated with 100 µM of GA3 to break seed dormancy [45]. After germination, the seeds were transferred to sterilized soil. The seeds of the other tobacco types were directly sown in sterilized soil. All the above tobacco types were cultured in a humidity-controlled environment (16 h light/8 h dark cycles, 24 °C). N. plumbaginifolia was used for gene cloning and expression analysis under various treatments, 'Honghua Dajinyuan' was used for the transgenic test, N. benthamiana was used for transient expression, and the transgenic tobacco lines were used for phenotypic analysis.
Pathogen Infection and Hormone Treatment
A P. nicotianae race 0 spore suspension with a concentration of 1.4 × 10³ spores/mL was prepared according to previously described methods [46]. The spores, SA (2 mM), MeJA (1 mM), and ETH (50 mM) were applied to 2-month-old tobacco seedlings, and a mixture of 0.1% KNO3, 0.1% ethanol, and H2O was applied to controls. Samples were taken at 0, 6, 12, 18, 24, 36, 48, and 72 h after treatment, and the relative expression level of the NpPP2-B10 gene was measured by qRT-PCR.
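Reaching the working titer of 1.4 × 10³ spores/mL from a counted stock is a C1·V1 = C2·V2 dilution. A minimal sketch of that arithmetic; the stock count is a hypothetical hemocytometer result, not a value from this study:

```python
# Sketch of the C1*V1 = C2*V2 dilution arithmetic for preparing a
# spore suspension at the working concentration used in the text
# (1.4e3 spores/mL). The stock titer is a hypothetical example.

def stock_volume_needed(stock_per_ml, target_per_ml, final_ml):
    """Volume of stock (mL) to dilute to `final_ml` at the target titer."""
    if target_per_ml > stock_per_ml:
        raise ValueError("stock is more dilute than the target")
    return target_per_ml * final_ml / stock_per_ml

# e.g. a 7.0e4 spores/mL stock, 100 mL of working suspension:
v = stock_volume_needed(7.0e4, 1.4e3, 100.0)
print(v)  # 2.0 mL of stock, topped up to 100 mL
```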
RNA Extraction, Reverse Transcription, and qPCR
Total RNA was extracted from the tobacco tissues using an RNA extraction kit (DP441, TIANGEN, Beijing, China). First-strand cDNA was synthesized by using a PrimeScriptTM RT reagent Kit with gDNA Eraser (RR047A, TAKARA, Shiga, Japan).
qRT-PCR was conducted to detect changes in gene expression by using a qTOWER3 G instrument (Analytik Jena, Jena, Germany). The PCR amplification procedure was as follows: 1 cycle of 95 °C for 30 s, followed by 40 cycles of 95 °C for 20 s and 60 °C for 1 min. The reaction used a 20 µL NovoStart SYBR qPCR SuperMix Plus system (E096-01A, Novoprotein, Shanghai, China). The 2^(−ΔΔCt) method was used to calculate the relative expression values [47], and NpEF-1a and NtEF-1a were used as internal controls for N. plumbaginifolia and cultivated tobacco, respectively. All the primers for qRT-PCR are shown in Table S1.
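The 2^(−ΔΔCt) calculation referenced here [47] can be sketched in a few lines. The Ct values below are hypothetical examples for illustration, not measurements from this study:

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-quantification calculation.
# All Ct values below are invented examples.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene vs. the control condition,
    normalized to a reference gene (here, an EF-1a control)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: the target amplifies 3 cycles earlier (relative to EF-1a)
# in treated tissue than in the control -> 8-fold upregulation.
fold = relative_expression(22.0, 18.0, 25.0, 18.0)
print(fold)  # 8.0
```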
Overexpression of NpPP2-B10 in Cultivated Tobacco
Using a seamless cloning technique, the NpPP2-B10 gene was linked to the vector pCAMBIA2300 under the CaMV 35S promoter, yielding the recombinant plasmid pCAMBIA2300-NpPP2-B10-OCS. Following a previous method [46], the constructed recombinant plasmid was transferred into the A. tumefaciens LBA4404 strain and used to infect 'Honghua Dajinyuan' tobacco plants by means of the leaf-disc method. After callus differentiation and tissue culture, regenerated plants with kanamycin resistance were ultimately obtained. Total tobacco DNA was extracted by using the improved CTAB method [48], and positive plants were verified by means of specific PCR amplification of the NpPP2-B10 gene. The relative expression level of the NpPP2-B10 gene was measured by qRT-PCR. The same method was used to detect positive seedlings from the self-crossed T1 generation of the NpPP2-B10 transgenic lines. The primers are shown in Table S2.
E. coli Expression and Purification of the Recombinant Protein
The NpPP2-B10 gene was linked to the bacterial expression vector PET-SUMO by using seamless cloning technology, and the recombinant plasmid was transferred into Escherichia coli (DE3). After colony PCR and sequence verification, the purified plasmid was transformed into E. coli BL 21(DE3) for protein production. The primers are shown in Table S2.
Protein extraction and purification were conducted as previously described [49]. Isopropyl β-D-thiogalactoside (IPTG) was added to a final concentration of 0.1 mM to induce protein expression, and the bacteria were cultured for 8 h at 30 °C with shaking. After the bacteria were collected, lysozyme was added to a final concentration of 0.1 mg/mL, and the bacteria were lysed by ultrasonication. Then, 10 µL each of the supernatant and precipitate were analyzed by SDS-PAGE. The protein solution was placed in a dialysis bag and dialyzed against NTA buffer for 48 h (4 °C) at a flow rate of 1 mL/min. The column was washed with NTA-0 buffer (pH 8.0) until no protein was detected in the effluent. Imidazole elution was then performed, and the eluent was collected in stages. The column was washed with three column volumes of deionized water and sealed with 20% ethanol. The collected eluent was concentrated by dialysis, and 12 µL was taken for SDS-PAGE analysis.
Detection of Lectin Activity and Content
First, 50 µL of 0.9% NaCl normal saline was added to each well of a V-shaped 96-well plate from left to right, and 50 µL of lectin protein solution (500 µg/mL) was added to the first well and mixed evenly. The protein concentration was then serially diluted two-fold across the remaining wells. The last well, containing only 0.9% NaCl normal saline, served as a negative control. Then, 50 µL of the corresponding red blood cells was added to each well, thoroughly mixed, and allowed to stand for half an hour to enable agglutination of the red blood cells to be observed. The rationale for this technique is that when the protein has lectin activity, it causes red blood cells to agglutinate into a network that spreads evenly throughout the well, so that the red blood cells do not precipitate. In contrast, the red blood cells in the negative control wells all precipitate to the bottom of the wells, which is observed as red spots at the bottom of the wells.
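The two-fold serial dilution above fixes the protein concentration tested in each well, which is how minimum agglutinating concentrations such as the ~31 µg/mL reported in the Results arise (500/2⁴ = 31.25). A minimal sketch of that arithmetic; which wells still agglutinate is assumed for illustration:

```python
# Sketch of the two-fold serial-dilution arithmetic in the
# hemagglutination assay: 50 uL saline per well, 50 uL of
# 500 ug/mL protein into well 1, then 50 uL carried forward.

def well_concentrations(stock_ug_ml=500.0, n_wells=8):
    """Protein concentration (ug/mL) in each well after mixing:
    well 1 = stock/2, and each later well halves the previous one."""
    conc = []
    c = stock_ug_ml / 2.0   # 50 uL stock + 50 uL saline
    for _ in range(n_wells):
        conc.append(c)
        c /= 2.0            # 50 uL carried into the next 50 uL saline
    return conc

concs = well_concentrations()
# Suppose agglutination is seen in wells 1-4 (hypothetical readout):
last_positive = 4
print(round(concs[last_positive - 1], 2))  # 31.25 -> reported as ~31 ug/mL
```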
Fresh tobacco leaf tissue was thoroughly ground in liquid nitrogen, and extracts were prepared with 9 volumes of buffer (pH 7.4, PBS) and centrifuged for 30 min (8000 rpm, 4 °C); the supernatant was collected and temporarily stored at 4 °C until use. A plant lectin quantitative detection kit (RX1400205PL, Ruixin, Quanzhou, China) was used to detect the tobacco lectin content. The absorbance (OD value) was measured at a wavelength of 450 nm with a microplate reader. The standard concentration was taken as the ordinate (6 standard wells plus 1 zero well, for a total of 7 concentration points), and the corresponding absorbance (OD value) was taken as the abscissa. Computer software was used to fit the absorbance with a four-parameter logistic curve (4-PL). The standard curve equation was established, and the OD value was used to calculate the concentration of each sample.
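The 4-PL standard curve mentioned above maps concentration to OD, and sample concentrations are read off by inverting it. A minimal sketch with invented fit parameters (in practice they come from fitting the 7 standards):

```python
# Sketch of the four-parameter logistic (4-PL) standard curve used to
# convert OD450 readings into lectin concentrations. The parameters
# below are hypothetical, not a fit from this study.

def four_pl(x, a, b, c, d):
    """4-PL response at concentration x.
    a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    """Solve the 4-PL equation for concentration given a response y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

a, b, c, d = 0.05, 1.2, 40.0, 2.5   # hypothetical fitted parameters
od = four_pl(10.0, a, b, c, d)
# Round-trip: recover the concentration from the OD value.
print(round(inverse_four_pl(od, a, b, c, d), 6))  # 10.0
```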
Yeast Two-Hybrid Assay and BiFC
The NpPP2-B10 gene was linked to the vector pGBKT7, and NpSKP1-1A and NpSKP1-21 were linked to the plasmid pGADT7 by using seamless cloning technology. Then, 50 µL of Y2HGold cells was placed in an ice-water mixture, and 1-3 µg of precooled recombinant plasmid, 5 µL of carrier DNA (heated at 95-100 °C for 5 min followed by a rapid ice bath, with this process repeated once), and 250 µL of PEG/LiAc were successively added, and the sample was mixed by pipetting several times. The mixture was placed in a water bath at 30 °C for 30 min and then in a 42 °C water bath for 20 min. After centrifugation for 40 s (5000 rpm), the supernatant was removed. The bacteria were resuspended in 400 µL of ddH2O, and the supernatant was removed after centrifugation for 1 min. The bacteria were then suspended in 50 µL of ddH2O, spread onto double-dropout screening medium (SD/-Trp/-Leu), and cultured upside down in a 28 °C incubator for 48-72 h after the bacterial solution had completely dried. Several single colonies were randomly selected on multi-dropout screening medium (SD/-Trp/-Leu/-His/-Ade), and the dropout plates were divided into two conditions (with or without X-α-Gal) and cultured inverted in the dark at 28 °C for 3-5 days. Colony growth and color were observed, recorded, and photographed. The primers are shown in Table S2.
NpPP2-B10 and NpSKP1-1A were cloned into the pSPYCE and pSPYNE vectors by seamless cloning, and the two recombinant plasmids were then transformed into the A. tumefaciens (GV3101) strain. Bacterial solutions (OD = 0.1) containing the two recombinant plasmids were mixed in equal volumes and injected into the lower epidermis of flat leaves of N. benthamiana. After 48-72 h of dark culture, the epidermis was peeled off, and the fluorescence signal was captured with an Observer DP80 fluorescence microscope (Olympus, Tokyo, Japan). The primers are shown in Table S2.
Detection of Disease Resistance-Related Enzyme Activities
Tobacco POD activity was determined using the guaiacol method [50], CAT activity was determined according to another previously described method [51], and PAL activity was determined using a PAL activity detection kit (BC0210, Solarbio, Beijing, China).
|
2023-04-19T15:39:29.899Z
|
2023-04-01T00:00:00.000
|
{
"year": 2023,
"sha1": "dc2bf509b43c4d8af9bbb1fc573ebadf8521a909",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7bef00f940ef223bc977b5d33b4e9ca09d1bab92",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119184643
|
pes2o/s2orc
|
v3-fos-license
|
Small-angle scattering from fat fractals
A number of experimental small-angle scattering (SAS) data are characterized by a succession of power-law decays with arbitrarily decreasing values of the scattering exponents. To describe such data, here we develop a new theoretical model based on 3D fat fractals (sets with fractal structure, but nonzero volume) and show how one can extract structural information about the underlying fractal structure. We calculate analytically the monodisperse and polydisperse SAS intensity (fractal form factor and structure factor) of a newly introduced model of fat fractals and study its properties in momentum space. The system is a 3D deterministic mass fractal built on an extension of the well-known Cantor fractal. The model allows us to explain a succession of power-law decays and, respectively, of generalized power-law decays (a superposition of maxima and minima on a power-law decay) with arbitrarily decreasing scattering exponents in the range from zero to three. We show that, within the model, the present analysis allows us to obtain the edges of all the fractal regions in momentum space, the number of fractal iterations, and the fractal dimensions and scaling factors at each structural level in the fractal. We apply our model to calculate an analytical expression for the radius of gyration of the fractal. The obtained quantities characterizing the fat fractal are correlated with the variation of the scaling factor with the iteration number.
I. INTRODUCTION
Small-angle scattering (SAS; X-rays, neutrons, light) [1,2] has been established as a powerful experimental technique for structural investigations of various types of disordered systems (biological, polymeric) at nano- and microscales. The technique yields the differential elastic cross section per unit solid angle as a function of the momentum transfer, which describes, through a Fourier transform, the spatial density-density correlations of the system. Since a large class of systems show the property of self-similarity across scales, the concept of fractal geometry [3,4] is very useful in modeling their structure and in describing the correlations between microscopic and macroscopic properties. The effectiveness of the SAS method in investigating fractal microstructure arises from its ability to differentiate between surface and mass fractals [5,6]. The difference is accounted for through the value of the scattering exponent of the power-law decay of the SAS intensity in the fractal region, with I(q) ∝ q^{-τ}, where τ = D_m for mass fractals and τ = 6 − D_s for surface fractals. Here D_m and D_s are the mass and, respectively, surface fractal dimensions [3], and lie within 0 < D_m < 3 for a mass fractal [7] and within 2 < D_s < 3 for a surface fractal [5,6].
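The exponent-to-dimension mapping stated above can be captured in a small helper; this is a direct transcription of the τ ranges given in the text:

```python
def classify_sas_exponent(tau):
    """Map a measured SAS power-law exponent tau, with I(q) ~ q**(-tau),
    to the fractal type and dimension: D_m = tau for a mass fractal
    (0 < tau < 3), D_s = 6 - tau for a surface fractal (3 < tau < 4)."""
    if 0.0 < tau < 3.0:
        return ("mass fractal", tau)
    if 3.0 < tau < 4.0:
        return ("surface fractal", 6.0 - tau)
    raise ValueError("tau outside the mass/surface fractal ranges")
```

For example, an exponent of 2.5 indicates a mass fractal with D_m = 2.5, while 3.5 indicates a surface fractal with D_s = 2.5.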
Experimental SAS data can show a succession of mass and/or surface fractal power-law regions whose scattering exponents take arbitrarily decreasing values [8][9][10], and the existing theoretical models either provide an insufficient microstructural description from this type of SAS data or cannot describe the full spectrum (note that the case of increasing values of the scattering exponents can be explained in the framework of multiphase systems [11]). SAS data modeled by the classical Beaucage model [12] can explain only a succession of power-law decays and involve a generic hierarchical structure linked to the fractal power-law regime. It gives the fractal dimensions and the size of each structural level, together with the specific surface when the Porod region (I(q) ∝ q^{-4}) is present. These parameters provide important information about the spatial organization of the fractals, but a more complete characterization is needed, since a given set of fractal dimensions may correspond to a large number of structures. In addition, recent technological progress has allowed the development of deterministic fractal structures at nano/micro scales [13][14][15]. These types of structures are characterized by a generalized power-law decay (a superposition of maxima and minima on a power-law decay, with the scattering exponent equal to the fractal dimension of the fractal) in momentum space, and therefore, due to log-oscillations, additional information can be obtained about the fractals describing the hierarchical structures, such as the fractal iteration number, the scaling factor, and the number of structural units of which the fractal is composed, thus greatly improving our understanding of their structural properties [7,16].
To explain such a succession of (generalized) power-law decays and illustrate the SAS properties, we have calculated analytically the fractal form and structure factors for a system of randomly oriented, non-interacting, monodisperse and polydisperse deterministic 3D fat fractals (known in mathematics as ǫ-Cantor sets [17]), which are sets with fractal structure but nonzero volume (positive Lebesgue measure). The method is based on the theoretical approach which was successfully employed to describe SAS from thin fractals (known in the literature simply as fractals) [7,16,18,19]. In the case of monodisperse fractals it gives a generalized power-law decay. Polydispersity smooths the scattering intensity and leads to the simple power-law behavior observed in experimental data. The fat fractal system suggested here is built by a set of iterative rules, with the scaling factor increasing as a function of the iteration number, which in turn gives rise to various fractal regions with different lengths and scattering exponents.
In this paper, it is shown that the scattering intensity from deterministic fat fractals includes successive fractal regions with arbitrarily decreasing values of the scattering exponents and allows us to take full advantage of the properties of deterministic thin fractals [7,16]. We derive analytically the main properties in momentum space, the fractal form and structure factors, and explain how to extract the main structural characteristics of mono- and polydisperse fat fractals from SAS data. In particular, we focus on determining the edges of the fractal regions, the fractal dimensions and scaling factors at each structural level, and the fractal radius of gyration.
A. Fat fractals
Fat fractals are characterized by the dependence of their apparent size on the scale of resolution, and they are quite different from the familiar thin fractals. To make the distinction between fat and thin fractals clearer, we consider the well-known 3D Cantor set [7]. In the latter case, the initial cube (m = 0; m being the fractal iteration number) is divided into 27 parts, the eight cubes in the corners are kept (m = 1), with side length 1/3 of the initial cube, and the 19 parallelepipeds are removed. The same operation is then repeated on each of the remaining eight cubes, leaving 64 cubes of side length 1/3^2 (m = 2), and so on. The thin Cantor fractal is obtained in the limit m → ∞ and has zero volume (Lebesgue measure) and fractal dimension log 8/log 3. The "fattened" version of this thin fractal is obtained by removing middle regions of relative width 1/3 (m = 1), then 1/3^2 (m = 2), then 1/3^3 (m = 3), etc. The resulting fractal is topologically equivalent to the thin Cantor fractal, but the holes decrease in size sufficiently fast so that, when m → ∞, the fractal has nonzero and finite volume, and fractal dimension 3 (see below). The resolution-dependent volume V(ǫ) can be calculated by covering the fractal with balls of size ǫ. Then the volume can be written as [20]

V(ǫ) = V(0) + A ǫ^η, (1)

where A is a constant which depends on the units used and V(0) is the volume in the limit ǫ → 0. Using Eq. (1) one can define the scaling exponent η in the following way [20]:

η = lim_{ǫ→0} ln[V(ǫ) − V(0)] / ln ǫ, (2)

where, by definition, 0 ≤ η ≤ ∞ (η is infinite for non-fractal sets and finite for fractal sets). It provides a useful way to quantify the fractal properties, as opposed to the fractal dimension, since the fat fractal definition implies an integer fractal dimension.
Although η is an essential parameter that distinguishes fat fractals (η independent of d) from thin fractals (η = 3 − d) [20,21], where d is the fractal dimension, the connection between this scaling exponent and small-angle scattering is beyond the scope of this paper.
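A quick way to see that a set can have fractal structure yet nonzero measure is the classical 1D Smith-Volterra-Cantor construction (a standard textbook example, not the paper's 3D model). Exact rational arithmetic shows the kept length converging to 1/2:

```python
from fractions import Fraction

def svc_remaining_length(iterations):
    """Length kept by the Smith-Volterra-Cantor ("fat Cantor") set after
    a given number of steps: at step n, an open middle interval of
    absolute length 1/4**n is removed from each of the 2**(n-1)
    intervals present, so step n removes 2**(n-1) / 4**n in total."""
    removed = Fraction(0)
    for n in range(1, iterations + 1):
        removed += Fraction(2 ** (n - 1), 4 ** n)
    return Fraction(1) - removed
```

The removed total converges to 1/2, so the limit set keeps Lebesgue measure 1/2: a nowhere-dense set with nonzero volume, the defining trait of a fat fractal.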
B. Small-angle scattering
We review in this section the theoretical formalism of SAS (neutron, X-ray, light, or electron diffraction) from a two-phase sample consisting of microscopic objects with scattering lengths b_j and scattering length density (SLD) ρ_m immersed in a solid matrix of SLD ρ_p; multiple scattering is neglected. The total cross section is then given by [2]

dΣ/dΩ (q) = |A(q)|^2 / V',

where A(q) = ∫ ρ_s(r) e^{−iq·r} dr is the total scattering amplitude and V' is the total volume irradiated by the incident beam. The SLD can be defined with the help of Dirac's δ-function as ρ_s(r) = Σ_j b_j δ(r − r_j), where r_j are the microscopic object positions.
In practice, it is convenient to represent the total scattering amplitude as a sum of amplitudes of rigid objects. For instance, considering the scattering from stiff fractals whose spatial positions and orientations are uncorrelated, one can choose the fractals themselves as the objects. Then the scattering intensity (that is, the cross section per unit volume of the sample) is given by

I(q) = n |Δρ|^2 V^2 ⟨|F(q)|^2⟩, (3)

where n is the fractal concentration, V is the volume of each fractal, Δρ = ρ_m − ρ_p is the scattering contrast, and F(q) is the normalized form factor obeying the condition F(0) ≡ 1. The brackets ⟨· · ·⟩ stand for the ensemble averaging over all orientations of the fractals. If the probability of any orientation is the same, then it can be calculated by averaging over all directions n of the momentum transfer q = qn, that is, by integrating over the solid angle in the spherical coordinates q_x = q cos φ sin ϑ, q_y = q sin φ sin ϑ and q_z = q cos ϑ:

⟨f(q_x, q_y, q_z)⟩ = (1/4π) ∫_0^π dϑ sin ϑ ∫_0^{2π} dφ f(q, ϑ, φ). (5)

Once a deterministic fractal is composed of N_m objects (e.g. of the same radius R), the form factor can be written as

F(q) = ρ_q(q) F_0(qR) / N_m, (6)

where ρ_q = Σ_j e^{−iq·r_j} is the Fourier transform of the density of ball centers, r_j are the center-of-mass positions of the balls, and m is the iteration number. Then, by using Eq. (3), the scattering intensity becomes [16]

I(q) = I(0) S(q) ⟨|F_0(qR)|^2⟩ / N_m, (7)

where I(0) = n|Δρ|^2 V^2 is the intensity at zero angle, F_0(qR) is the subunit form factor, and S(q) is the fractal structure factor defined by

S(q) ≡ ⟨|ρ_q(q)|^2⟩ / N_m. (8)

The choice of the subunit form factor F_0(qR) and of the fractal structure factor S(q) is rather arbitrary and depends on the shape of the scattering units and, respectively, on their relative positioning. In a physical system the scatterers almost always have different sizes; therefore, a more realistic description should involve size polydispersity. Here we consider an ensemble of fractals with various sizes and forms.
The distribution function D_N(l) of the scatterer sizes is defined in such a way that D_N(l)dl gives the probability of finding a fractal whose size falls within the interval (l, l + dl). Specifically, we choose the log-normal distribution (see Ref. [16] for details)

D_N(l) = 1/(σ l (2π)^{1/2}) exp(−[ln(l/l_0) + σ^2/2]^2 / (2σ^2)),

where σ = [ln(1 + σ_r^2)]^{1/2}, and the mean length l_0 and its relative variance σ_r are given by l_0 = ⟨l⟩_D and σ_r = (⟨l^2⟩_D − l_0^2)^{1/2}/l_0. Thus, the average in Eq. (3) is taken both over angles and over sizes. Polydispersity obviously smears the intensity curves, and the oscillations become smoother [16].
III. CONSTRUCTION OF THE FAT FRACTAL
The scattering exponents τ in SAS intensities, as already mentioned, are related to the fractal dimensions of the system. Therefore, in order to describe successive power-law regimes with decreasing values of the scattering exponents, the model based on deterministic fractals shall be specified by considering values of the scaling factors that increase with the iteration number, not after each iteration, but after a given number of iterations (every second iteration, every third iteration, etc.).
The construction process of the fat fractal, embedded in 3D space, is very similar to that of the mass generalized thin Cantor fractals [7] and of the mass generalized thin Vicsek fractals [16]. One follows a top-down approach in which an initial structure is repeatedly divided into a set of smaller structures of the same type, according to a given rule [22]. One and the same rule is kept from one iteration to another, but the scaling factor is increased after every second iteration.
We start with a cube of edge l_0 (called the zero-order iteration, or initiator) and specify it in Cartesian coordinates as the set of points satisfying the conditions −l_0/2 ≤ x ≤ l_0/2, −l_0/2 ≤ y ≤ l_0/2, −l_0/2 ≤ z ≤ l_0/2. The origin lies at the cube center, and the axes are parallel to the cube edges. The iteration rule (generator) is to replace the cube by eight smaller cubes of edge β_s^{(m)} times the current edge. The dimensionless parameter γ_m entering the scaling factor is governed by a parameter α, with 0 < α < 1, and by an exponent p_m defined through the floor function ⌊ ⌋ for m = 1, 2, · · · (Eqs. (10) and (11)); the scaling factor at the mth iteration then follows (Eq. (12)). The characteristics of the model, together with Eq. (10), show that at the mth iteration the number of cubes is N_m = 8^m (Eq. (13)), and the side length of each cube is l_m = l_0 ∏_{i=1}^{m} β_s^{(i)} (Eq. (14)). Therefore, the components of the a_j vectors, for arbitrary m, can be written accordingly (Eq. (15)). The fractal dimension of the set can be determined, in the limit of a large number of iterations, from the relation D = lim_{m→∞} ln N_m / ln(l_0/l_m) (using Eqs. (13) and (14)). In addition, denoting by v_1 the volume removed at m = 1, by v_2 the relative volume removed at m = 2, and so on, one finds the total volume remaining after the mth iteration, and therefore the model fulfills the defining properties of fat fractals [20,21]. Fig. 1 shows the construction process for the first five iterations of the 1D projection of the fat fractal at α = 1/3. According to Eq. (11), the dimensionless parameter γ_m and the scaling factor are, respectively, decreased and increased at every second iteration. Since the construction assumes equal values of the scaling factors for two consecutive iterations, the structure is in fact a deterministic thin fractal structure within this "range" of iterations, each range having a different fractal dimension, given by D_m = ln 8 / ln(1/β_s^{(m)}) [7].
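The construction can be sketched numerically in its 1D projection. The exact forms of γ_m, p_m, and the scaling factor are given by the paper's Eqs. (10)-(12); the sketch below assumes γ_m = α^⌊(m+1)/2⌋ and β_m = (1 − γ_m)/2, which reproduces the thin Cantor scaling 1/3 at m = 1 for α = 1/3 and increases the scaling factor every second iteration:

```python
def fat_cantor_intervals(m, alpha=1/3.0):
    """1D analogue of the fat Cantor construction: at iteration k each
    interval keeps its two end pieces and loses a middle fraction
    gamma_k = alpha**p_k with p_k = floor((k+1)/2), so the scaling
    factor beta_k = (1 - gamma_k)/2 grows every second iteration.
    (The exact formulas are an assumption; see Eqs. (10)-(12).)"""
    intervals = [(0.0, 1.0)]
    for k in range(1, m + 1):
        gamma = alpha ** ((k + 1) // 2)
        beta = (1.0 - gamma) / 2.0
        nxt = []
        for a, b in intervals:
            length = b - a
            nxt.append((a, a + beta * length))   # left kept piece
            nxt.append((b - beta * length, b))   # right kept piece
        intervals = nxt
    return intervals

def total_length(intervals):
    return sum(b - a for a, b in intervals)
```

With these assumed formulas the total kept length stays bounded well away from zero (about 0.315 at m = 10 for α = 1/3), while the thin Cantor analogue, with a constant scaling factor 1/3, would decay as (2/3)^m toward zero.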
IV. FRACTAL FORM AND STRUCTURE FACTOR
We consider cubes with initial edge length l_0 as the basic subunits of the fractal. Therefore, the subunit form factor can be written as [2]

F_0(ql_0) = [sin(q_x l_0/2)/(q_x l_0/2)] [sin(q_y l_0/2)/(q_y l_0/2)] [sin(q_z l_0/2)/(q_z l_0/2)].
In order to calculate the fractal form factor, we apply here the analytical method developed in [7,16] for the fractal form factor of thin fractals. We could, in principle, use the standard Debye formula [2], but its application to deterministic fractals can be cumbersome even for iterations as low as m = 3, since the number of subunits in the fractal increases exponentially with the iteration number.
The fractal form factor at the ith generation is calculated analytically by means of the generative function, which is determined by the positions of the centers of the cubes inside the fractal at each iteration. We consider that the positions correspond to a GCF [7] structure, and therefore the generative function reads accordingly (Eq. (21)), with G_0(q) ≡ 1 and with coefficients u_i that properly take into account both the sizes of the subunits, through β_s^{(i)}, and the distances between them (Eqs. (19) and (20)). Finally, by introducing Eq. (21) into Eq. (3) and averaging according to Eq. (5), the normalized scattering intensity is obtained (Eq. (22)). The Fourier component of the density of cube centers is obtained from Eq. (6) and Eq. (21) (Eq. (23)). Then, the structure factor results from introducing Eq. (23) into Eq. (8) (Eq. (24)).
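As a brute-force cross-check of the analytical results, the orientationally averaged structure factor of any deterministic point set can be computed directly with the Debye formula (practical only for low iterations, as noted above). The eight cube-corner positions below, an illustrative first-iteration configuration, recover the limits S(q → 0) = N and S(q → ∞) → 1:

```python
import numpy as np
from itertools import product

def debye_structure_factor(points, q):
    """Orientationally averaged structure factor of N point scatterers,
    S(q) = 1 + (2/N) * sum_{i<j} sin(q r_ij)/(q r_ij)  (Debye formula)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[:, None, :] - pts[None, :, :]
    r = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]
    x = np.outer(np.atleast_1d(q), r)
    s = 1.0 + (2.0 / n) * np.sinc(x / np.pi).sum(axis=-1)  # np.sinc is sin(pi t)/(pi t)
    return s if np.ndim(q) else float(s[0])

# First-iteration cube centers of a Cantor-like fractal: eight points at
# the corners of a cube (the positions are illustrative, not Eq. (21)).
centers = np.array(list(product((-1.0, 1.0), repeat=3)))
```

The cost grows as the square of the number of subunits, i.e. as 8^(2m) here, which is why the generative-function approach of Eq. (21) is preferred.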
V. RESULTS AND DISCUSSION
The numerical results for several iterations for both the monodisperse and polydisperse fractal form and structure factors, at fixed γ, are shown in Fig. (2) and Fig. (3), respectively. At low and intermediate values of q, the common feature of the scattering intensities is the appearance of all the regions seen in experimental SAS data: a Guinier region at low q and a succession of power-law regimes with decreasing values of the scattering exponents at intermediate q. However, at high q, the scattering intensity (Eq. (22)) shows a Porod region, while the fractal structure factor (Eq. (24)) shows an asymptotic region.
The Guinier region (a plateau on a double logarithmic scale) is determined by the overall fractal size and can be seen in the region where ql_0 ≲ 1. In this region the scattering intensity can be approximated by [2]

I(q) ≃ I(0) [1 − q^2 (R_g^{(m)})^2 / 3].

The fractal radius of gyration R_g^{(m)} at the mth iteration can be determined by expanding the form factor given by Eq. (21) in a power series in ql_0 and substituting the result into Eq. (22); we thus obtain Eq. (26), in which R_{g0} = l_0/2 for a uniform cube. When all the scaling factors β_s^{(i)} and the coefficients β_t^{(i)} are equal, Eq. (26) reduces to the well-known expression for the radius of gyration of thin Cantor fractals [7].
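In practice, R_g is read off the Guinier regime by a linear fit of ln I against q². A minimal sketch on synthetic Guinier-law data (the prefactor, R_g value, and q range are arbitrary choices for illustration):

```python
import numpy as np

def guinier_rg(q, intensity):
    """Estimate the radius of gyration from the Guinier regime,
    ln I(q) = ln I(0) - (q * Rg)**2 / 3, by a linear fit of ln I
    versus q**2 (valid only for q * Rg <~ 1)."""
    slope, _intercept = np.polyfit(np.asarray(q) ** 2, np.log(intensity), 1)
    return float(np.sqrt(-3.0 * slope))

# Synthetic Guinier-law data for a scatterer with Rg = 5 (arbitrary units).
rg_true = 5.0
q = np.linspace(0.01, 0.18, 30)            # keeps q * Rg below ~0.9
intensity = 100.0 * np.exp(-(q * rg_true) ** 2 / 3.0)
```

On real data the fit window must be restricted to qR_g ≲ 1, since beyond that the fractal power-law regimes take over.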
The succession of power-law regimes (the fractal region of the fat fractal) is determined by the maximal and minimal distances between the cube centers. Since the smallest distances are of the order of u_m (Eq. (20)), the beginning of the first power-law regime and the end of the last power-law regime are found at q values of the order of 1/l_0 and 1/u_m, respectively. In particular, for the monodisperse case (Fig. (2a) and Fig. (3a)) we have a succession of generalized power-law decays, while for the polydisperse case (Fig. (2b) and Fig. (3b)) one obtains a succession of simple power-law decays common to experimental SAS data, where the minima and maxima are smeared out [7,16]. The scaling factor at each structural level (iterations with constant scaling factor) can then be determined from the periodicity, on a double logarithmic scale, of the quantity I(q)q^{D_m} vs. q, while the number of fractal iterations can be obtained from the number of periods of the function I(q)q^{D_m} [16].
In the fractal region of the fat fractal, the structure factor (Eq. (8)) approximates the scattering intensity (Eq. (7)) very well, since in this region the subunit form factor obeys F_0(qR) ≃ 1. The positions of the minima are obtained when the cubes inside the fractal interfere out of phase and, since the most common distances between the centers of mass of the cubes are given by 2u_m, we have the condition 2u_m = π/q, which gives the positions of the minima (vertical lines in Fig. (2a) and Fig. (3a)) at q_k ≃ π/(2u_k), with k = 1, · · · , m. The scattering intensities in Fig. (2) and Fig. (3) have three main characteristics specific to fat fractal structures, which result from the property that the values of the scaling factors increase with the iteration number, according to Eq. (12). First, the length of each subsequent power-law regime, in momentum space, decreases. This behavior can be clearly seen in Fig. (2b) and Fig. (3b). Second, the transition between consecutive power-law regimes occurs through a "knee". This is due to the fact that the values of the coefficients β_t in Eq. (15), which are responsible for the distances between the cubes (see Eq. (19)), also decrease. This is in contrast with scattering from multiphase systems, where the "knee" position depends on the scattering length density of each component phase [11,23]. Third, since the values of β_s^{(m)} depend on the initial value β_s^{(1)}, the fractal dimension of each range can be determined only by specifying the fractal dimensions at m = 1, 2 and the value of γ_1 in Eq. (10).
Beyond the last power-law regime (or the last GPLD in the monodisperse case) we have q ≳ 1/u_m. In this region the fractal structure factor in Eq. (24) is S(q) ≃ 1 [16], and therefore we find that the asymptotic values tend to 1/N_m (Fig. (3a) and Fig. (3b); the horizontal dotted lines represent the asymptotes of the structure factor), as for the case of thin fractals [7,16]. On the other hand, in this region the scattering intensity (Eq. (22)) follows the Porod law (Fig. (2a) and Fig. (2b)), since the size of the subunits is of the same order as l_0. The beginning of the Porod region allows us to obtain the size of the smallest unit (here, a cube) constituting the fractal.
VI. CONCLUSIONS
We developed a fat fractal model based on an extension of the generalized Cantor fractal [7], which allows us to explain experimental SAS data showing a succession of power-law regimes (or GPLD regimes in the monodisperse case) with arbitrarily decreasing values of the scattering exponents in the range from zero to three. The main feature of the model that allows it to explain this type of data is the increase of the scaling factor after a given number of iterations (here, every second iteration), which in turn implies arbitrarily decreasing values of the fractal dimensions. We derived analytical expressions for the form and structure factors, which describe scattering from non-interacting, mono- and polydisperse, randomly oriented 3D fat fractals. We also calculated analytically the radius of gyration of the fractal.
We have shown that the present analysis allows us to obtain three main structural characteristics of fat fractals. First, the edges of all the fractal regions (through the positions of the minima in Eq. (28)), which can be controlled independently by choosing various expressions for the floor function p_m defined in Eq. (11). Second, the fractal dimensions and the scaling factors corresponding to each structural level, which can be controlled by the parameter α in Eq. (10). Third, the number of particles composing the fractal, from the asymptote of the structure factor in Eq. (24), which can be controlled by a different definition of the iterative operation (generator). In addition, from the calculated radius of gyration (Eq. (26)) and from the scattering intensity (Eq. (22)) one can obtain information about the overall size of the fractal and, respectively, about the sizes of the smallest units composing the fractal.
The model could serve to describe and analyze growth phenomena of biological objects or clusters at nano- and microscales.
|
2014-07-07T11:28:51.000Z
|
2014-06-01T00:00:00.000
|
{
"year": 2014,
"sha1": "2d97488c01d92a3d4b7691fa5e8f4e4c6a4aacc3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1407.1673",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2d97488c01d92a3d4b7691fa5e8f4e4c6a4aacc3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
243047874
|
pes2o/s2orc
|
v3-fos-license
|
Stress Strain Behavior of Compressed Stabilised Earth Blocks Using Geogrids
This study is concerned with identifying soil that is suitable for compressed stabilised earth blocks. The behavior of compressed stabilised earth blocks (CSEB) under compressive load, their durability, water absorption, etc., are controlled by the type of fine material and the stabilising substance used as binder. The fine material is blended with an appropriate proportion of stabiliser, such as fly ash, cement, coconut fibres, or chemicals, and the mix is compacted manually or by machine. From the review it is identified that adding different stabilisers to the fine aggregate, up to a certain agreed proportion, improves the practical properties of a block. The specimens show higher strength than typical burnt clay bricks. The soil was tested in the laboratory and considered favourable for CSEB on the basis of its compactness, with montmorillonite soil used for the preparation. It can be concluded that the blocks have higher strength than ordinary bricks but do not satisfy the requirement for water absorption.
INTRODUCTION
Masonry is an assemblage of masonry units and mortar. Masonry properties and behavior are controlled by the characteristics of the masonry units and mortar as well as the bond between them. Burnt clay bricks and cement mortars are the most commonly used materials for construction. Significant studies have been made on the properties and behavior of brick masonry in cement mortar. Apart from bricks, several other types of masonry units and mortars are used. Nowadays, compressed stabilised earth blocks are abundantly used as a replacement for normal clay bricks in masonry construction. A large number of compressed stabilised earth block constructions have been performing satisfactorily for the past several years. The popularity of this new material can be mainly attributed to advantages such as low cost, low energy content, decentralized production, utilization of locally available soil, better shape, and pleasing appearance. These blocks are prepared by applying pressure to processed soil-cement mixes in a manually operated machine and can be produced at the construction site itself. Soil is one of the essential ingredients used in the production of compressed stabilised earth blocks. Soil is composed mainly of gravel, sand, silt, and clay. The percentages of these constituents, the block density, and the quantity of cement are the factors which influence the strength and durability characteristics of compressed stabilised earth blocks. The percentage of constituents in the soil may vary from place to place. Studies on the production and properties of compressed stabilised earth blocks have been carried out by several investigators. These investigations reveal that the properties of stabilised blocks are greatly influenced by the soil-cement composition percentage and the block density.
Mitra (1951), after examining 9 types of local soils, concluded that blocks made from sandy soils with five percent cement provide the required strength and adequate resistance to weathering. Sarangapani (1992) investigated in detail the suitability of soils in and around Kengeri for the production of compressed stabilised earth blocks. Reddy and Jagadeesh (1995) also found that sandy soils are suitable for stabilization, showing better strength and durability characteristics. There is hardly any information regarding the compositions of the soils available in and around Kengeri and their relation to the strength of compressed stabilised earth blocks. Hence, the present investigation is aimed at determining the stress-strain behaviour of compressed stabilised earth blocks produced using locally available soils, with geogrid as a reinforcing material.
II. PRELIMINARY INVESTIGATION OF MATERIALS AND METHODOLOGY USED FOR PREPARING MASONRY SPECIMENS
Cement
43 grade cement conforming to Indian Standards code IS 8112:1989 is used in this investigation.
Soil
The soil used for the preparation of stabilised blocks should be suitable. Commonly, it consists of clay minerals and inert substances such as silt and sand.
Water
The water used for the preparation of the blocks should be of a quality that does not impair properties such as strength and durability.
Geo-grid
It is used as a reinforcement.
III. METHODOLOGY
Soil is an old construction material, used since early ages. Soil is used widely, from compressed blocks to earthen dams. The first trials of compressed earth blocks were made in Europe in the nineteenth century. CSEB can be stabilized or not, but most often they are stabilized with cement or lime; therefore, today they are preferably called Compressed Stabilized Earth Blocks (CSEB). Soil stabilization allows building higher with thinner walls, which have a much better compressive strength and water resistance. With cement stabilization, the blocks must be cured for 28 days after manufacturing. After this, they can dry freely and be used like common bricks with a soil-cement stabilized mortar. The basic tests for cement mortar and soil are carried out according to the IS standards. Compressed stabilised earth blocks (CSEB) are prepared using 8% cement as stabilizer. Stack bonded prisms and masonry wallettes are prepared using CSEB. The stress-strain behavior of the prisms and of the wallettes is observed for different reinforcement conditions, and the stress-strain values of the prisms and wallettes are compared.
The collected soil is tested for its suitability for making bricks; it is air dried and crushed, then passed through a 4.75 mm IS sieve and stored for brick making. Cement and soil are weighed in the required proportion, and water is added to the soil to reach the optimum moisture content. The lumps in the mixture are then broken down thoroughly. The mould is oiled, and the base of the mould is covered with a glass sheet in order to obtain a level surface. The soil is added in 2 layers and is consolidated by a wooden hammer with sufficient tamping. Before de-moulding, the surfaces of the bricks are levelled. The mould produces 16 bricks at a time; around 800 bricks were made and cured for 28 days using gunny bags.
Construction of Stack Bonded Prisms and Wallettes:
Stack bonded prisms and wallettes are assemblages of the soil blocks and cement mortar. The height-to-width ratio is decided as per Indian Standards. The cement-to-sand ratio of the mortar is 1:6. The mortar joint thickness is kept at 10 mm, and a height-to-width ratio of 3.45 is adopted, which is more than two and less than five as prescribed in the IS masonry code. Geogrid is used as reinforcement; for the horizontal plane condition, it is cut to the size of the soil block and nailed to the top of the block. For perpendicular reinforcement, the geogrid is tied up along the greater side of the brick masonry. Three prisms and three wallettes are constructed for each of the different reinforcement combinations; in total, 12 prisms and 12 wallettes are cured for 28 days.
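The recorded stress-strain curves for the prisms and wallettes can be reduced to a single stiffness value. A common reduction is a secant modulus taken at a fraction of the peak stress; the sketch below uses a one-third-of-peak convention, which is an assumption, not a procedure specified in the paper:

```python
import numpy as np

def secant_modulus(strain, stress, fraction=1/3.0):
    """Secant modulus of a masonry specimen: slope of the line from the
    origin to the point at `fraction` of peak stress, with the strain at
    that stress found by linear interpolation along the ascending branch."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    peak = int(np.argmax(stress))
    target = fraction * stress.max()
    eps = np.interp(target, stress[:peak + 1], strain[:peak + 1])
    return target / eps

# Hypothetical readings from a prism test (strain dimensionless, stress in MPa-like units).
strain_readings = [0.0, 0.001, 0.002, 0.003]
stress_readings = [0.0, 10.0, 20.0, 30.0]
```

For a linear ascending branch like the hypothetical readings above, the secant modulus simply equals the initial slope.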
|
2019-10-31T09:05:33.447Z
|
2019-07-10T00:00:00.000
|
{
"year": 2019,
"sha1": "7c6379d4fdf65a72dcde72d547fddd57934ec99f",
"oa_license": null,
"oa_url": "https://doi.org/10.35940/ijitee.h7134.078919",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0737dba4c4a34800c8c2ef3770f2b506b72cd353",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
}
|
7780083
|
pes2o/s2orc
|
v3-fos-license
|
Laparoscopic Sacral Colpopexy: A Proposed Technique
This case report describes a laparoscopic sacral colpopexy using Mersilene mesh in a patient with complete vaginal vault prolapse. Mersilene mesh was placed as a hammock between the vaginal apex and the anterior surface of the sacrum, using intracorporeal needles and an extracorporeal knot tying technique. Minor modifications are made from the traditional abdominal approach, because the patient had previously undergone a pelvic lymphadenectomy and vaginal cuff radiation for a stage IB grade 1 adenocarcinoma of the endometrium.
INTRODUCTION
Vaginal surgeons will be faced with an increasing number of patients in whom surgical correction of vaginal vault prolapse is necessary. This is largely due to the continually enlarging geriatric population, compounded by the large number of women who have had a hysterectomy. Both aging and inadequate support of the vaginal vault at the time of hysterectomy contribute to vaginal vault prolapse (1). There are a variety of procedures available to correct this problem. In the patient who desires preservation of a coitally functional vagina, an abdominal sacral colpopexy with retroperitoneal interposition of a synthetic suspensory hammock between the vaginal apex and the anterior surface of the sacrum can be used. Sacral colpopexy as first described in 1973, and subsequent modifications, require a midline abdominal incision and the accompanying hospitalization and recovery time associated with a laparotomy (2). We report the use of operative laparoscopy for this procedure to circumvent these disadvantages of laparotomy.
Case Report
Our patient is a 69-year-old, gravida four, para three white female, with a history of stage IB grade 1 endometrial cancer. Her malignancy was managed with a laparoscopically assisted vaginal hysterectomy and pelvic lymphadenectomy, and anterior and posterior colporrhaphy for prolapse. This was followed by high-dose vaginal cuff radiation, with 1500 cGy administered in 3 fractions. Approximately two years post-operatively, the patient presented complaining of protrusion of the vaginal vault and pelvic pressure. An examination revealed complete vaginal vault prolapse without an accompanying cystocele or rectocele. She was initially treated with a donut pessary, with which she was not satisfied, so surgical correction was undertaken.
The procedure was performed with the patient in the dorsal lithotomy position and the bladder drained. A 10 mm umbilical port, two 5 mm lateral lower abdominal ports, and one 12 mm suprapubic port were used. The vagina was elevated by a sponge on a ring forceps in the vaginal vault. A Moschcowitz culdoplasty was performed to obliterate the deep cul-de-sac (Fig. 1). Three circumferential purse string sutures included the remnants of uterosacral ligaments, the posterior vagina, and peritoneum, and shallow bites of serosa over the sigmoid colon laterally. This was performed using 000 coated nonabsorbable braided Ethibond sutures (Ethicon, Sommerville, MA) on a CT needle. Knots were tied extracorporeally using the Clarke knot pusher. The Mersilene mesh (Ethicon, Sommerville, MA) was then placed through the 12 mm port. With the index and middle fingers in the vagina, the vaginal apex was elevated. Three Ethibond sutures were placed in a single row in the vaginal wall apex, and then through one end of the Mersilene mesh, and tied using an extracorporeal knot tying technique. By palpating the needle with the fingers in the vagina, good linear bites of the vagina could be obtained without entering the vagina with the suture (Fig. 2).
The parietal peritoneum overlying the promontory of the sacrum was then sharply entered on the right side of the mesentery of the rectosigmoid. The right ureter was identified, and the sacral promontory was cleaned free of its peritoneal and lymphovascular tissue. The periosteum over the sacral promontory was used as an anchoring point. Three sutures were placed anchoring the Mersilene gauze to the sacral promontory using 0 Ethibond suture. The redundant Mersilene gauze was cut and extracted through the suprapubic port. The gauze bridge was then peritonealized by closing the peritoneum of the sacral promontory, using the clip applier through the 12 mm port (Fig. 3). The vagina was then packed for 24 hours. The operative time was one hour and fifty-two minutes, with an estimated blood loss of 50 ml.
On post-operative day one, the Foley catheter was removed and the patient was discharged home with stool softeners to prevent constipation and associated straining. On examination six weeks postoperatively, the vaginal apex was well supported centrally. Twelve months postoperatively, the patient is asymptomatic and sexually active without problems.
DISCUSSION
Abdominal colpopexy by suspending a mesh hammock between the prolapsed vaginal vault and sacrum is an accepted surgical technique with good results in the repair of vaginal vault prolapse. This method has been chosen for patients in whom a coitally functional vagina is paramount. Addison described 54 of 56 patients, followed on average 3 years after abdominal sacral colpopexy, to have exhibited good vault suspension and no difficulty with coitus (3). Timmons had good support at 9-18 months follow-up with only 2% complicated by recurrent enteroceles (4).
Variations in the surgical procedure have been reported by numerous authors. The Halban technique to vertically close the cul-de-sac has been used in place of the Moschcowitz culdoplasty (5). Different bridge materials and techniques to attach the bridge to the vagina have been reported (3,4). Whether the hollow of the sacrum or the sacral promontory should be used as the fixation point and whether simultaneous prophylactic urethropexy should be performed are debated issues.
In our patient, the bladder and rectum were not near the previously radiated vaginal apex; therefore dissection of the peritoneum off the vaginal mucosa was not necessary. We attached the graft to the anterior sacrum, not the hollow of the sacrum, because of its easier accessibility, minimal venous plexus, and better periosteum for anchoring sutures. Short-term follow-up of this patient has demonstrated this angle to be without consequence. The lateral angle created by sacrospinous ligament fixation, while not entirely natural, is without problems as well (6).
There are multiple benefits of a minimally invasive laparoscopic approach. The obvious benefits include a smaller incision, decreased blood loss and decreased hospitalization and recovery time. Theoretically, there should be an overall decrease in the complications associated with a laparotomy, such as wound infection and dehiscence, deep venous thrombophlebitis, adynamic ileus, and prolonged recovery.
We have described a surgical technique for laparoscopic sacral colpopexy using Mersilene mesh and its relative advantages. However, a larger series of patients is required before conclusions can be reached concerning its feasibility, safety, limitations and success rate. Nezhat has reported 5 cases of laparoscopic sacral colpopexy, but has also been unable to present the long-term benefits (7). Initial experience by ourselves and others indicates that in properly selected patients this technique is worthy of further investigation.
Figure 3: The Mersilene mesh has been "reperitonealized" to avoid its exposure to intraperitoneal structures.
|
2016-05-08T05:06:31.605Z
|
1995-01-01T00:00:00.000
|
{
"year": 1995,
"sha1": "da7e7160be120f4d0f8d0eebf16585c7b9c42c38",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/archive/1995/136498.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da7e7160be120f4d0f8d0eebf16585c7b9c42c38",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
249283997
|
pes2o/s2orc
|
v3-fos-license
|
Honey Bee Genetic Stock Determines Deformed Wing Virus Symptom Severity but not Viral Load or Dissemination Following Pupal Exposure
Honey bees exposed to Varroa mites incur substantial physical damage in addition to potential exposure to vectored viruses such as Deformed wing virus (DWV) that exists as three master variants (DWV-A, DWV-B, and DWV-C) and recombinants. Although mite-resistant bees have been primarily bred to mitigate the impacts of Varroa mites, mite resistance may be associated with increased tolerance or resistance to the vectored viruses. The goal of our study is to determine if five honey bee stocks (Carniolan, Italian, Pol-Line, Russian, and Saskatraz) differ in their resistance or tolerance to DWV based on prior breeding for mite resistance. We injected white-eyed pupae with a sublethal dose (10^5) of DWV or exposed them to mites and then evaluated DWV levels and dissemination and morphological symptoms upon adult emergence. While we found no evidence of DWV resistance across stocks (i.e., similar rates of viral replication and dissemination), we observed that some stocks exhibited reduced symptom severity suggestive of differential tolerance. However, DWV tolerance was not consistent across mite-resistant stocks as Russian bees were most tolerant, while Pol-Line exhibited the most severe symptoms. DWV variants A and B exhibited differential dissemination patterns that interacted significantly with the treatment group but not bee stock. Furthermore, elevated DWV-B levels reduced adult emergence time, while both DWV variants were associated with symptom likelihood and severity. These data indicate that the genetic differences underlying bee resistance to Varroa mites are not necessarily correlated with DWV tolerance and may interact differentially with DWV variants, highlighting the need for further work on mechanisms of tolerance and bee stock–specific physiological interactions with pathogen variants.
One strategy to mitigate Varroa mite impacts on honey bee colony health and survival is breeding for mite resistance (Buchler, 1994; Harbo and Harris, 1999a; Rinderer et al., 2010; Mondet et al., 2020). Previous honey bee breeding programs have selected several parameters such as hygienic and grooming behaviors meant to decrease the mite load in the colony (Harbo and Harris, 1999b; Guzman-Novoa et al., 2012; Bąk and Wilde, 2015; Guichard et al., 2020; Mondet et al., 2020). Hygienic behavior, known to reduce the levels of diseases such as American and European foulbrood by the removal of damaged pupae, has also been shown to reduce Varroa mite loads in European honey bee colonies (Rothenbuhler, 1964; Spivak and Reuter, 2001a; Spivak and Reuter, 2001b). Varroa-sensitive hygienic bees (VSH), including the Pol-Line stock, are a subselected group exhibiting hygienic behavior that targets mite-infested pupal cells rather than damaged pupae, removing reproductive Varroa mites and minimizing mite population growth (Harris, 2007; Rinderer et al., 2010; Danka et al., 2013; Danka et al., 2016; Mondet et al., 2016; Saelao et al., 2020; Spivak and Danka, 2021). Another mite-resistant stock, the Russian honey bee, was bred starting from a population historically associated with Varroa mites and was selected more generally for low mite population growth, which involves several mechanisms of defense such as brood breaking and heightened grooming (de Guzman et al., 2007; Rinderer et al., 2010). Similarly, the Saskatraz stock has also undergone selection primarily for survivorship and low mite population growth without targeting a single, specific mechanism (Robertson et al., 2014).
Furthermore, the genotypic differences underlying mite-resistant behaviors may be associated with reduced levels of mite-vectored viruses such as DWV (Locke et al., 2014; Khongphinitbunjong et al., 2016; de Guzman et al., 2017; Mendoza et al., 2020; Weaver et al., 2021; O'Shea-Wheller et al., 2022). Such bee genotype × DWV interactions may be the result of enhanced virus resistance or tolerance in certain bee stocks, with prior data indicating a greater potential for tolerance (Strauss et al., 2013; Locke et al., 2014; Khongphinitbunjong et al., 2015; Khongphinitbunjong et al., 2016; Thaduri et al., 2018; Thaduri et al., 2019; Locke et al., 2021). Resistance is the ability of the bee to prevent an infection from establishing or increasing after exposure, while tolerance is the ability of the individual to maintain health and functionality (i.e., reduced symptoms), even while having an infection (Locke et al., 2014; Mordecai et al., 2016a; Burgan et al., 2018). Host genotype × virus interactions may be direct through individual host immune responses to the virus or indirect via colony-level mite-removal traits (Chen and Siede, 2007; Khongphinitbunjong et al., 2015; de Guzman et al., 2019; Bouuaert et al., 2021; Weaver et al., 2021). For instance, genotype may mediate the bee's ability to detect signals such as mite-related cuticular hydrocarbons; these detection differences can then translate into differential hygienic responses resulting in cascading differences in mite and vectored virus levels (Mondet et al., 2021; Wagoner et al., 2019). In addition, virus levels may potentially feed back into this system by further disrupting the bee's mite detection ability (Wagoner et al., 2019; Mondet et al., 2015). If direct mechanisms drive genotype-associated differences in virus levels, we expect to observe genotype differences in laboratory studies of virus loads as the need for mite-removal behavior is not necessary for virus reduction (Locke et al., 2014).
Complicating this matter are potential differences among DWV variants, with conflicting reports indicating context-dependent variant virulence and likelihood of inducing symptoms or mortality (Mordecai et al., 2016a; McMahon et al., 2016; Brettell et al., 2017; Natsopoulou et al., 2017; Gisder et al., 2018; Barroso-Arévalo et al., 2019; Kevill et al., 2019; Tehel et al., 2019; Dubois et al., 2020; Norton et al., 2020).
In this study, we evaluated potential resistance or tolerance to DWV in five honey bee genetic stocks (Carniolan, Italian, Pol-Line, Russian, and Saskatraz), with varying levels of resistance to the Varroa mite, using a laboratory environment to control for indirect behavioral mechanisms. The Italian and Carniolan stocks are commonly used throughout the commercial beekeeping industry and have been bred for honey production and colony size, whereas Russian, Pol-Line, and Saskatraz bees have been specifically bred for Varroa mite resistance (de Guzman et al., 2007; Robertson et al., 2014; Danka et al., 2016; Caron, 2021; O'Shea-Wheller et al., 2022). These genetic stock differences represent different genotypes in a broad sense as they have clear differences in phenotype and are derived from different breeding populations. The specific genotypic information was not further assessed as part of this study, though it can be found, in part, in prior studies (Jiang et al., 2016; Wang, 2016; Saelao et al., 2020). To determine potential interactions of bee stock and DWV, we exposed honey bee pupae, the life stage that is most commonly infested with mites (Ifantidis, 1983, 1984; Donzé and Guerin, 1994), to Varroa mites or injected them with a sublethal dose (10^5) of DWV sourced from symptomatic (e.g., deformed wings) adult bees. Upon adult emergence, we determined DWV levels and dissemination throughout different tissue types [abdomen, head, hypopharyngeal gland, and a rear leg, as in the study by Penn et al. (2021)], the number of days until adult emergence, and DWV symptom presence and severity. These tissue types were chosen because legs have been used to indicate viral dissemination in other arthropods (Boncristiani et al., 2009; Diagne et al., 2015); the head has been an indicator of overt bee infections (Yue and Genersch, 2005; Möckel et al., 2011); hypopharyngeal glands may provide possible transmission by food trophallaxis (Chen Y. et al., 2006; Chen et al., 2006; Möckel et al., 2011); and the abdomen is the site of mite feeding and of our injection treatment groups.
Since there are two common variants (A and B) of DWV in the United States and the potential virulence and mortality effects may differ based on interactions between host and viral genotypes (Dainat et al., 2012; Ryabov et al., 2017; Kevill et al., 2019; Loope et al., 2019; Grindrod et al., 2021), we tested for RNA copy levels of both DWV-A and DWV-B variants.
Source Colonies
All honey bee colonies were started from 0.90 to 1.10 kg "packages" made on 3 May 2018 from 10 established colonies of the USDA Honey Bee Breeding, Genetics and Physiology Research Unit in Baton Rouge, LA, United States (30°22′56″ N, 91°10′40″ W). Naturally mated queens from five genetic stocks were sourced from the USDA laboratory (Pol-Line and Russian), a Canadian collaborating breeder (Saskatraz), or purchased from commercial California suppliers (Carniolan and Italian). Queens were released into colonies via queen candy on 4 May 2018. Colonies (N = 3 colonies/stock) were not sampled until 6 weeks post queen introduction and supplementation to allow time for population turnover to reflect queen genetics. All colonies were similarly maintained in three yards near the USDA laboratory (with Carniolan, Italian, and Saskatraz sharing one yard, while Pol-Line and Russian colonies were maintained in two separate yards). To allow for direct comparison, the same colonies were used in a complementary study following DWV levels over 10 days in newly emerged adult bees (Penn et al., 2021).
Viral Isolation and Mite Sourcing
To obtain the DWV viral solution for injection, 20 adult honey bees with overt DWV symptoms were frozen at −80°C, ground to a fine powder, homogenized in 10 ml 1X PBS, and centrifuged at 5,000 rpm at 4°C for 20 min. The resulting supernatant containing viruses was filtered through a 0.2-micron filter (Millex-GS syringe filter unit #SLGS033SS, Millipore Sigma, Burlington, MA, United States) to remove small tissue debris, fungi, and bacteria. qPCR was conducted to test for non-target viruses (acute bee paralysis virus, black queen cell virus, chronic bee paralysis virus, Israeli acute paralysis virus, Kashmir bee virus, and Lake Sinai virus) using the methods mentioned later. DWV quantification using general DWV (not variant-specific) primers (Supplementary Table S1) was performed by absolute quantification using the standard curve method. All methods were previously established based on standard protocols (Simone-Finstrom et al., 2018). One sample stock solution was selected based on negative results for non-target viruses and used to create the injection stock solution. The stock solution was diluted to 10^5 viral copies of DWV, a biologically relevant but sublethal dose for adult bees (Gisder et al., 2009). For the mite inoculation treatment group, Varroa mites were obtained from non-study hives using powdered sugar rolls to dislodge live mites from nurse bees (Macedo et al., 2002). Dislodged mites were removed from the powdered sugar using a paint brush, stored in a petri dish containing a maximum of 100 mites, and allowed to feed on
FIGURE 1 | Experimental design to determine DWV symptoms and levels based on bee stock and treatment of white-eyed pupae (no injection control/naturally occurring infection, PBS sham injection, Varroa mite exposure, or sublethal DWV injection). DWV symptoms were analyzed in emerged adults (N = 12-17 bees per stock/treatment combination); then a subset of three bees was dissected into four tissue types (heads, hypopharyngeal glands, abdomen, and rear legs) subsequently used for viral analyses.
Pupal Assay
From mid-July to early October 2018, frames with white-eyed pupae were removed from each colony and brought back to the laboratory, where Varroa mite-free (no mites in pupal cells or on the body) pupae were removed, placed on folded filter paper in petri dishes, and stored in an incubator at 34°C and 85% relative humidity. Pupae exhibiting any discoloration due to damage during handling after 2 h were removed from the experiment, while healthy pupae (17 per treatment per colony) were assigned to one of four treatment groups: 1) no manipulation (included since bees had naturally occurring DWV infections, referred to as "control"), 2) 3.0 µl 1X PBS injection to control for injection damage (PBS), 3) 3.0 µl 10^5 DWV injection to simulate the vectoring of DWV without mite presence (DWV), or 4) Varroa mite inoculation (mite). Control bees were placed into individual size "1" gel capsules (Capsule Connection, Prescott, AZ, United States), each with a small hole in the top created using an insect pin (Nazzi and Milani, 1994; Piou et al., 2016; Posada-Florez et al., 2019). All pupae in the PBS and DWV treatment groups were injected using an UltraMicroPump with an SYS-Micro4 controller (World Precision Instruments, Sarasota, FL, United States) with an infusion flow rate of 1.0 μl/s, following the manufacturer's parameters. For each injection, a 30G needle (Hamilton Company, Reno, NV, United States) was inserted into the lateral abdomen between the fourth and fifth pleurites, based on established protocols (Simone-Finstrom et al., 2018). After injection, pupae were then transferred to individual gel capsules as mentioned earlier. For the mite treatment group, pupae were transferred to an individual gel capsule followed by an individual mite. All pupae were incubated at 34°C and 85% relative humidity until adult emergence, with all bees checked daily and necrotic individuals removed.
Mite treatments where the mite was dead or did not defecate (an indication that no feeding occurred) by adult bee emergence were not used in subsequent analysis but were replaced with replicates meeting the requirements. Upon emergence, all adult bees were evaluated for DWV symptoms and rated on a scale of severity from 0 to 3 (Supplementary Figure S1), with 0 indicating normal wings, 1 indicating slight malformation, 2 indicating major malformations but with wings present, and 3 indicating completely malformed. Mites were removed from respective emerged bees; then all mites and bees were placed into individual sterile 1.5-ml centrifuge tubes and stored at −80°C. The first emerging bees per colony per stock per treatment group combination were evaluated in this manner (n = 12-17 bees/colony/treatment group/stock, N = 981 total emerged bees) and then a random subset of three bees was selected for tissue dissection and DWV quantification ( Figure 1).
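The wing-deformity scale above can be encoded directly. This small sketch (the lookup structure and names are illustrative, not from the paper) also derives the presence/absence variable used for the later symptom models:

```python
# Severity ratings as defined in the text (0-3 wing-deformity scale).
DWV_SEVERITY = {
    0: "normal wings",
    1: "slight malformation",
    2: "major malformations but with wings present",
    3: "completely malformed",
}

def is_symptomatic(score):
    # Any nonzero rating counts as symptom presence, matching the
    # binary presence/absence variable used in the statistical models.
    return score > 0
```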
Adult Assay
The adult assay was part of a complementary study on DWV dissemination in injected adult bees over time (Penn et al., 2021). The same colonies were used for the adult injections as for the pupal injection experiment, and over the same timeframe (July through October 2018). Varroa mite-free newly emerged bees were treated similarly, except that no mite treatment group was included, and all bees were placed on ice for 2 min prior to injection to reduce movement. Bees were injected using the same methods as aforementioned with 3.0 µl of DWV inoculum from an aliquot of the same inoculum as used for the pupal experiment (DWV); a 3 µl 1X PBS injection (PBS) and no injection (control) were implemented as controls. The bees were housed in cages of at most 30 bees and provided with 50% sucrose solution and pollen substitute. The bees were maintained in an incubator at 34°C and 85% relative humidity and then sacrificed at 1, 2, 4, 7, and 10 days post injection. Only day seven data (N = 135 bees and 540 associated RNA extractions) were included within this study, as 7 days allows for a similar period of viral replication compared to the pupal data.
Dissection and RNA Isolation
To determine virus dissemination through the bee body over time, the randomly selected three-bee subset from each colony/treatment group/stock combination was dissected in the same manner as in the complementary study on adult bees (Penn et al., 2021). Dissections were conducted over dry ice, with each individual dissected with a new, sterilized blade. The body was separated into legs, wings, head, thorax, and abdomen; the head was embedded into beeswax (new per individual bee) and the hypopharyngeal gland was removed according to previously published methods (Corby-Harris and Snyder, 2018; Penn et al., 2021). Dissected tissues were stored in separate sterilized tubes on dry ice during dissection and long term at −80°C.
Total RNA was extracted from a single rear leg, the head (sans hypopharyngeal gland), the hypopharyngeal gland, and the abdomen for each of the three bees representing each combination of colony/treatment group/stock (N = 180 bees and N = 720 RNA extractions in total) and from all mites (N = 45). Individual hypopharyngeal glands and mites were placed in 30 µl lysis buffer and 30 µl Maxwell homogenization buffer and vortexed. The leg (cut into pieces), head, and abdomen were placed in 200 µl lysis buffer and 200 µl Maxwell homogenization buffer (Promega Corporation, Madison, Wisconsin, United States), manually ground with a pestle (Sigma-Aldrich, St. Louis, Missouri, United States), and vortexed. All samples were then incubated for 90 min at 4°C. After incubation, 320 µl Maxwell homogenization buffer was added to the hypopharyngeal gland and mite samples. The samples were then extracted using Maxwell RSC 48 cartridges (Promega Corporation, Madison, Wisconsin, United States). Total RNA was extracted according to standard procedures using RSC simplyRNA tissue extraction kits and program (Promega Corporation, Madison, Wisconsin, United States). RNA was stored in 0.6-ml elution tubes wrapped in parafilm (Bemis NA, Neenah, Wisconsin, United States) at −80°C.
cDNA Synthesis and RT-PCR
Frozen RNA samples were thawed on −20°C metal beads, briefly vortexed, and then centrifuged. Each RNA sample was nano-dropped (NanoDrop One, Thermo-Fisher Scientific Inc., Waltham, Massachusetts, United States) twice using 1 μl of sample. The mean ng/µl NanoDrop One readings (for 260/280 and 260/230) were calculated per sample and then used to determine adequate sample purity and the quantities of sample and nuclease-free water required to dilute each sample to a concentration of 250 ng of RNA. cDNA was then synthesized in two steps using Qiagen QuantiTect Reverse Transcription kits (Thermo-Fisher Scientific Inc., Waltham, Massachusetts, United States). For step 1, 2 µl of gDNA wipeout buffer was added to the mix of RNA and water for a total reaction volume of 14 µl per sample. The samples were incubated at 42°C for 2 min in a Bio-Rad T100 Thermal Cycler (Bio-Rad, Hercules, California, United States). The samples were then briefly vortexed and centrifuged before the addition of 4 µl 5X buffer, 1 µl of RT primer mix, and 1 µl of RT enzyme per sample. The samples were again vortexed and centrifuged and then placed into the Bio-Rad T100 Thermal Cycler (42°C for 25 min then 95°C for 3 min). cDNA was stored in strip tubes wrapped in parafilm at −80°C.
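The dilution step above (loading 250 ng of RNA into the 12 µl that remain of the 14 µl step-1 reaction after the 2 µl of gDNA wipeout buffer) reduces to a simple volume calculation. A hedged sketch, with illustrative function name and defaults:

```python
def rt_input_volumes(conc_ng_per_ul, target_ng=250.0, available_ul=12.0):
    """Return (sample_ul, water_ul) to load target_ng of RNA.

    available_ul assumes 14 ul total minus 2 ul gDNA wipeout buffer;
    the name and defaults are illustrative, not from a kit manual.
    """
    sample_ul = target_ng / conc_ng_per_ul
    if sample_ul > available_ul:
        raise ValueError("RNA too dilute for this reaction volume")
    return sample_ul, available_ul - sample_ul
```

For instance, a sample reading 125 ng/µl would take 2 µl of RNA and 10 µl of nuclease-free water.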
All samples were tested with DWV-A and DWV-B primers using qRT-PCR to determine infection levels (primers in Supplementary Table S1), and each sample was replicated two times per primer pair. All qRT-PCRs consisted of 5 µl SsoFast Universal SYBR Green Supermix (Bio-Rad, Hercules, California, United States), 3 µl nuclease-free water, 0.5 µl forward primer, 0.5 µl reverse primer, and 1 µl cDNA from each sample. All reactions were run in Bio-Rad CFC 96 or Connect thermal cyclers (Bio-Rad, Hercules, California, United States), with all reactions of a specific primer pair occurring in the same machine. The PCR cycling protocol for DWV-A was 95°C for 1 min followed by 40 cycles of 95°C for 10 s and 60°C for 15 s, then 65°C for 5 s; the protocol for DWV-B was 95°C for 5 min followed by 40 cycles of 95°C for 5 s and 52.5°C for 10 s, then 72°C for 10 s. The thermal protocols included a melt-curve dissociation analysis to confirm the product size. DWV-A and DWV-B results were quantified using the standard curve method with linearized plasmid constructs (ranging from 10^5 to 10^12 copies). Quantified virus RNA copy levels (DWV RNA equivalents per 10 ng of RNA) were log-transformed for analyses.
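The standard curve method referenced above fits Cq against log10 copy number for the plasmid standards and inverts the fit for unknown samples. A minimal sketch; the slope and Cq values in the example are hypothetical, not the study's calibration:

```python
def fit_standard_curve(log10_copies, cq_values):
    """Ordinary least-squares fit of Cq = slope * log10(copies) + intercept."""
    n = len(cq_values)
    mean_x = sum(log10_copies) / n
    mean_y = sum(cq_values) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(log10_copies, cq_values))
    sxx = sum((x - mean_x) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def copies_from_cq(cq, slope, intercept):
    # Invert the fitted curve to get an absolute copy number,
    # which is then log-transformed for the statistical models.
    return 10.0 ** ((cq - intercept) / slope)
```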
Statistical Analyses
All statistical analyses were conducted in R 4.0.2 (R Core Team, 2020), and all graphs were plotted using ggplot2 (Wickham, 2016). To determine factors influencing DWV-A and DWV-B levels and dissemination to different tissue types, the three-bee subsamples with virus data were used with all tissue types included in the analyses. Separate general linear mixed models (GLMMs) were generated for each DWV variant (DWV-A and B) using the lme4 package (glmer function with Gaussian distribution) (Bates et al., 2015). Variables for model selection included bee stock, treatment group, tissue type, all interaction terms of those three variables, and the levels of the alternate DWV variant. Colony and individual bees were used as random effects to account for multiple tissue types coming from the same individual and individuals from the same colony. The Italian bee stock, control treatment group, and head tissue were specified as model intercept values. Backward model selection was conducted using a combination of minimum AICc and BIC values; models with significantly lower scores (Δ > 4) were used. Note that the stock × treatment group interaction term was tested in model selection and was neither significant nor contributed to model fit, so it was not included in the final model. Final model information can be found in Supplementary Table S2. Significance values were estimated using Satterthwaite approximation with the lmerTest package (Kuznetsova et al., 2017). Post hoc comparisons were conducted with Kenward-Roger estimation in the emmeans package (Lenth, 2020).
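The AICc-based backward selection described above can be illustrated with the small-sample AIC correction and the Δ > 4 retention rule. This sketch only mirrors the scoring logic, not the lme4 model fitting itself, and the function names are illustrative:

```python
def aicc(log_likelihood, k_params, n_obs):
    # Small-sample corrected AIC: AIC + 2k(k + 1) / (n - k - 1).
    aic = 2.0 * k_params - 2.0 * log_likelihood
    return aic + (2.0 * k_params * (k_params + 1.0)) / (n_obs - k_params - 1.0)

def keep_reduced_model(score_full, score_reduced, delta=4.0):
    # Retain the simpler (reduced) model unless the fuller model scores
    # better by more than delta, mirroring the text's Delta > 4 rule.
    return score_reduced - score_full <= delta
```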
This experiment also allowed us the unique opportunity to evaluate DWV levels and dissemination differences between this experiment and prior work with injected newly emerged adults from the same colonies at the same time of the same year of collection (Penn et al., 2021). To determine if differences occurred between these two experiments, the mite treatment group was dropped from this pupal experiment as it was not included as a treatment group in the adult experiment. Similarly, we only evaluated the day 7 time-point from the adult experiment as that matched the pupal experiment in terms of number of days post injection. Day 7 comparison models were conducted as aforementioned except that experiment and associated interaction terms were included in the model and the adult experiment was used in the model intercept.
The time (days) until adult emergence was modeled using a GLMM with the lme4 package (glmer function, Gaussian distribution) (Bates et al., 2015). To avoid redundant sampling per bee, only head tissue (sans hypopharyngeal glands) data were used, as this particular tissue type was the best representation of infections for both DWV variants (Schurr et al., 2019; Dubois et al., 2020; Penn et al., 2021). Variables considered for model selection included bee stock, treatment group, stock × treatment group, and DWV-A and -B levels; colony was used as a random effect. The Italian bee stock and the control treatment group were specified as model intercept values. Model selection and post hoc comparisons were conducted as mentioned earlier. The presence of DWV symptoms was analyzed similarly using a GLMM with the lme4 package (glmer function, binomial distribution with a logit link function), i.e., a binomial logistic regression on symptom presence/absence.
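The logit link mentioned above maps probabilities to log-odds and back; a minimal sketch of the link pair underlying the binomial model (the fitting itself was done in lme4, not shown here):

```python
import math

def logit(p):
    # Link function of the binomial GLMM: probability -> log-odds.
    return math.log(p / (1.0 - p))

def inv_logit(x):
    # Inverse link: linear predictor -> probability of symptom presence.
    return 1.0 / (1.0 + math.exp(-x))
```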
DWV symptom severity (scale of 0-3) was modeled using an ordered logistic regression with the polr function (logistic) in MASS. Marginal effects and predicted probabilities for the treatment group and stock comparisons were calculated using the effects package (Fox, 2003; Fox and Hong, 2009; Fox and Weisberg, 2019). Variables and methods for model selection were as mentioned previously (time to emergence and symptom presence). We further synthesized the results of the symptom presence and severity models using classification trees. The two-response categorical variable for DWV symptom presence (0, 1) and the four-response ordinal variable for DWV symptom severity (wing deformity on a scale of 0-3) were used to construct two classification trees with the same predictor variables as the associated logistic models. The classification trees were created and plotted using recursive partitioning provided in the rpart and rpart.plot packages (Milborrow, 2017; Therneau et al., 2018). All predictor variables were maintained in the final classification trees, with each having a complexity parameter >0.01 after pruning. The relationships among bee stock, treatment group, DWV levels, and the DWV symptom severity scale were further modeled using multiple correspondence analysis (MCA) and factor analysis of mixed data (FAMD) using the FactoMineR package (Husson et al., 2008). MCA allows for the analysis of multiple qualitative variables (similar to PCA with quantitative data), whereas FAMD analyzes the relationships between combinations of qualitative and quantitative variables. Variables presented closer together on a plot are more similar to each other than those further away (e.g., on opposite sides of the origin).
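The proportional-odds (ordered logit) model used for symptom severity turns a single linear predictor and a set of ordered cutpoints into category probabilities via P(Y ≤ k) = logistic(cut_k − η). The sketch below computes those probabilities in NumPy for hypothetical cutpoint values; polr in MASS estimates the cutpoints and coefficients, which this sketch does not do.

```python
import numpy as np

def ordered_logit_probs(eta, cuts):
    """Category probabilities for a proportional-odds model:
    P(Y <= k) = logistic(cut_k - eta); category probabilities are
    the successive differences. `cuts` must be strictly increasing;
    categories run from 0 to len(cuts) (e.g., severity 0-3 needs
    three cutpoints)."""
    cuts = np.asarray(cuts, dtype=float)
    cdf = 1.0 / (1.0 + np.exp(-(cuts - eta)))   # P(Y <= k), k = 0..K-1
    cdf = np.concatenate([cdf, [1.0]])          # P(Y <= K) = 1
    probs = np.diff(np.concatenate([[0.0], cdf]))
    return probs
```

Increasing η (e.g., a treatment effect pushing toward more severe symptoms) shifts probability mass toward the higher severity categories.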
For ease of graphical representation, PBS, mite, and DWV pupae treatment groups were also compared against the control treatment group with regards to log-transformed DWV-A and DWV-B levels, days until emergence, and symptom severity. This was performed by calculating the mean for the control data for each of the three replicates per tissue type, treatment group, and colony. The mean control value was assumed to be the baseline (y = 0 value on Figure 2) value for that colony × tissue type × treatment group combination and was then subtracted from each of the three replicates per treatment group for that colony and tissue type. Statistical significance represented in Figure 2 was determined by using post hoc comparisons for the treatment group, described earlier.
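The baseline subtraction described above (express each treated observation relative to the mean of the untreated controls from the same colony and tissue type) can be sketched in plain Python; the record key names are assumptions for illustration.

```python
def relative_to_control(records):
    """Subtract the per-(colony, tissue) mean of the untreated
    controls from every observation, so values are expressed relative
    to naturally occurring infection (control mean maps to y = 0).
    `records` is a list of dicts with keys: colony, tissue,
    treatment, value."""
    sums, counts = {}, {}
    for r in records:
        if r["treatment"] == "control":
            key = (r["colony"], r["tissue"])
            sums[key] = sums.get(key, 0.0) + r["value"]
            counts[key] = counts.get(key, 0) + 1
    baseline = {key: sums[key] / counts[key] for key in sums}
    out = []
    for r in records:
        key = (r["colony"], r["tissue"])
        out.append({**r, "relative": r["value"] - baseline[key]})
    return out
```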
Graphical representation of DWV dissemination (Figures 3, 4) was conducted in Microsoft 365 Excel and PowerPoint. The mean log-transformed DWV-A and DWV-B values (across all bee stocks) per treatment group were calculated and pasted into a column in an Excel spreadsheet with the associated treatment group and variant labels.

FIGURE 2 | Treatment values relative to naturally occurring levels. Comparison of (A) log-transformed DWV-A and (B) DWV-B levels in head tissues indicates that the pupae DWV injection treatment sourced from symptomatic adults successfully increased DWV-A but not DWV-B levels in all bee stocks. (C) Treatment × bee stock interactions decreased the number of days from pupal treatment until adult emergence compared to the control. (D) DWV symptom severity generally increased in the DWV treatment but interacted with bee stock. Values were made relative to the untreated control bees from the same colony with naturally occurring infections (y = 0). Boxplots are in the style of Tukey, where the box limits represent the lower 25% quantile and upper 75% quantile and the line represents the median. Individual points indicate individual bees (N = 9 bees/treatment/stock), with shapes associated with colony replicate (ColonyRep) and color indicating stock. *Denotes significant differences between the treatment and control values (Dunnett's test; p < 0.05).
Treatment Impacts on Viral Load
When we analyzed DWV levels in the newly emerged adult bees, we found that the treatment group was significant but differed with the DWV variant ( Table 1; Supplementary Table S3).
Overall, DWV-A levels were higher in the DWV-injected bees than in the untreated control bees with naturally occurring infection (Kenward-Roger estimation, t = 9.279, p < 0.001), PBS (t = 6.061, p < 0.001), or mite treatments (t = 7.595, p < 0.001) (Figure 2). Unlike DWV-A, PBS injection and mite infestation resulted in higher DWV-B levels than those of the untreated bees (Kenward-Roger estimation, compared to PBS: t = 5.208, p < 0.001; compared to mite: t = 4.463, p < 0.001) and the DWV injection treatment group (compared to PBS: t = 3.218, p = 0.008; compared to mite: t = 2.489, p = 0.065). Moreover, the DWV treatment group did not differ in DWV-B levels from the untreated control group.
Deformed Wing Virus Levels in Different Tissues
Bee stock did not significantly impact overall DWV-A levels (Figure 3). Within the control treatment group, heads, legs, and hypopharyngeal glands had similar levels of DWV-A, but heads and legs exhibited higher DWV-A levels than those of the abdomens (Table 2). For DWV-B in the control treatment group, heads had the greatest virus levels, and legs had higher levels than those of abdomens but were not different from hypopharyngeal glands (Table 2). In the PBS treatment group, both heads and legs had higher DWV-A levels than abdomens and hypopharyngeal glands (Table 2). Abdomens had lower DWV-A levels than all other tissue types in the mite treatment group (Table 2). For DWV-B in the PBS treatment group, the only significant difference was between heads and hypopharyngeal glands, and there were no observed differences in DWV-B levels among tissue types in the mite treatment group (Table 2). In the DWV injection treatment group, hypopharyngeal glands had lower DWV-A levels than all other tissue types (Table 2). For DWV-B levels in the DWV treatment group, abdomens had lower levels than all other dissection components, which were not different from each other (Table 2).
Deformed Wing Virus Levels in Injected Pupae and Adults After 7 Days
In order to compare DWV levels and dissemination differences based on the life stage (pupae or newly emerged adult) at the time of treatment, we modeled DWV-A and DWV-B levels for this experiment in conjunction with data from a complementary study on adult bees (Penn et al., 2021). While different cohorts of bees were used within each colony for the pupae and adult studies, the sampling of these cohorts occurred within the same months. In the adult bee study, newly emerged bees from the same colonies and time of year were injected with different aliquots of DWV from the same inoculum at the same concentration. Comparisons of data from the newly emerged bees treated as pupae in the current study were made to the data from 7 days post injection in the adult study, as this timeline allows for a similar period of viral replication between the two datasets. We found that for both DWV-A and DWV-B, the experiment (adult vs. pupae injection) was significant and interacted with the treatment group and tissue type individually, as well as with the treatment group and tissue type in a three-way interaction (Table 3). Within each treatment group, pupae had significantly greater levels of DWV-A than adults (Kenward-Roger estimation, control: t = −2.921, p = 0.004; PBS: t = −3.151, p = 0.002; DWV: t = −4.760, p < 0.001). However, this differed for DWV-B levels, where adults had higher amounts than pupae within all treatment groups, but the difference was only significant in the DWV treatment group (control: t = 1.795, p = 0.074; PBS: t = 1.349, p = 0.179; DWV: t = 8.150, p < 0.001). The experiment also exhibited a three-way interaction with bee stock and treatment for DWV-A but did not interact with bee stock for DWV-B (Table 3).
Time Until Adult Emergence
After treatment, pupae were checked every 24 h for adult emergence. The time (days) until adult emergence was impacted by the treatment group, stock × treatment group interactions, and DWV-B levels (Tables 4, 5). In the model, the mite treatment group was significantly negatively associated with the number of days until emergence (Table 5; Supplementary Table S4). Pol-Line bees assigned to the mite treatment group took less than half a day longer to emerge than Italian (Kenward-Roger estimation, t = 3.332, p = 0.022), Carniolan (t = 3.227, p = 0.027), and Saskatraz (t = 2.822, p = 0.065) bees exposed to mites (Table 5; Supplementary Table S4). In contrast, higher DWV-B levels decreased the number of days until adult emergence (Table 5; Supplementary Table S4), so bees with higher DWV-B developed more quickly.
Symptom Presence
The treatment group and the DWV variant impacted the probability of emerging bees exhibiting any DWV symptoms (i.e., wing deformities) ( Table 6). The treatment group had a significant impact on the presence of symptoms, with DWV injections (associated with increased DWV-A levels) inducing symptoms in a higher number of bees relative to all other treatment groups (compared to control: z = 3.962, p < 0.001; compared to PBS: z = 3.037, p = 0.013; compared to mite: z = 2.814, p = 0.025) ( Table 6). Symptomatic bees had higher DWV-A (10 9.71 ± 0.20 ) and DWV-B (10 10.14 ± 0.12 ) levels than asymptomatic bees (DWV-A: 10 8.45 ± 0.14 ; DWV-B: 10 9.57 ± 0.11 ) ( Figure 5). When DWV levels were compared between symptomatic and asymptomatic bees split out by bee stock, we found that Italian and Pol-Line bees exhibited higher DWV-A levels in symptomatic bees, while only Pol-Line bees exhibited higher DWV-B levels in symptomatic bees (Supplementary Figure S3).
Symptom Severity
When the severity of DWV symptoms (Supplementary Figure S1) was analyzed (Supplementary Table S5), bee stock, treatment group, and stock × treatment group were significant. All bee stocks were significant within the model, indicating potential stock-based differences in DWV tolerance compared to Italian bees: Carniolan (ordered logit parameter estimation, t = 15.934, p < 0.001), Pol-Line (t = 12.980, p < 0.001), Russian (t = 19.370, p < 0.001), and Saskatraz (t = 20.278, p < 0.001). Stocks varied both in the proportion of observations per symptom severity category (Figure 6) and in their associated DWV-A and -B levels (Supplementary Table S6). Within the ordered logit model (Supplementary Table S5), all treatment groups were significant relative to the control treatment group: mite (t = 20.849, p < 0.001), PBS (t = 20.240, p < 0.001), and DWV (t = 18.982, p < 0.001). DWV injections had the greatest level of symptom severity, followed by the PBS and mite treatment groups. No-manipulation controls had low-level symptom severity. These same trends mostly held for stock × treatment group interactions, but severity levels differed among stocks within each treatment group (Figure 6). Overall trends from the ordered logit model predicted symptom severity per treatment and stock fairly well; however, precise model results should be interpreted with care and in conjunction with the following multivariate analyses, as no symptoms were observed for Italian bees in the control treatment or for Carniolan bees in the PBS treatment group, which skewed the ordered logit model values.
Multivariate Analyses of Symptom Outcomes
To better understand the relative importance of bee stock, treatment group, and DWV levels for DWV symptoms, we used classification trees to analyze the probability of symptom presence and severity. We found that DWV levels of both variants in combination with bee stock and the treatment group played a critical role in predicting the probability of symptom presence ( Figure 7A). In the first decision node of the symptom presence (1)/absence (0) tree, bees with DWV injection had the highest probability of symptoms (0.89). The second decision node split probabilities using DWV-B levels (cut off of 10 10.91 ), where bees whose heads contained levels above this cutoff were more likely to exhibit symptoms. The next two nodes indicated equal importance of DWV-A levels (cut off of 10 10.92 ) and bee stock. This node grouped Italian, Pol-Line, and Saskatraz bees together as being more likely to exhibit symptoms (>78% of bees likely to show symptoms) and Carniolan and Russian bees together as less likely to exhibit symptoms (>53% of bees likely to not exhibit morphological symptoms). The classification tree for symptom severity ( Figure 7B) was more simplified than that for symptom presence. Like the symptom presence tree, the first decision node in the severity tree was the DWV injection treatment group and the second was the level of DWV-B (with the same cut off of 10 10.91 ). The third node was solely based on honey bee stock, again grouping Italian, Pol-Line, and Saskatraz bees together, indicating that these stocks have an increased probability of having more severe symptoms than Carniolan and Russian stocks. DWV-A levels did not explicitly play a part in the symptom severity tree, but DWV-A levels may have been partially accounted for through the DWV injection treatment as DWV-A levels increased in this treatment group whereas DWV-B levels did not (Table 1; Figure 2).
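The decision rules reported for the symptom presence tree (Figure 7A) can be paraphrased as a simple rule-based function. This is a hypothetical sketch, not the fitted rpart model: the log10 thresholds are the cutoffs quoted in the text, and the last two nodes, which the text describes as equally important, are flattened into a fixed order here.

```python
def likely_symptomatic(treatment, log10_dwv_a, log10_dwv_b, stock):
    """Paraphrase of the symptom presence/absence tree described in
    the text: DWV injection first, then DWV-B and DWV-A cutoffs in
    head tissue, then the stock grouping. Returns True when the tree
    branch favors symptom presence."""
    if treatment == "DWV":
        return True          # injected bees: ~0.89 symptom probability
    if log10_dwv_b > 10.91:
        return True          # high DWV-B levels
    if log10_dwv_a > 10.92:
        return True          # high DWV-A levels
    # Italian, Pol-Line, and Saskatraz grouped as more symptom-prone;
    # Carniolan and Russian grouped as less likely to show symptoms.
    return stock in {"Italian", "Pol-Line", "Saskatraz"}
```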
Using multiple correspondence analysis (MCA) to further validate the relationships among stock, treatment group, and symptom severity (Figure 8A), we found that treatment groups fell along a gradient of symptom severity on dimension 1 (Dim.1: 0.685). DWV injections were represented at the far positive end of Dim.1 and no-manipulation controls at the far negative end. In FAMD, the grouping by stock shifted such that Italian bees joined the Pol-Line and Saskatraz grouping with more severe symptoms (positive along Dim.1), while Carniolan and Russian bees still grouped together with less severe symptoms (negative along Dim.1). This grouping of stocks was consistent with the classification tree results for both symptom presence and severity. However, the stocks and treatment groups can also be grouped by DWV levels, where Pol-Line, Russian, and Saskatraz bees (the three mite-resistant stocks within the study) and the mite and PBS treatment groups were grouped with DWV-B on the positive end of Dim.2, while Carniolan and Italian bees loaded on the negative end of Dim.2 with DWV-A.
DISCUSSION
The goal of this study was to determine if Varroa mite-resistant honey bee stocks (Pol-Line, Russian, and Saskatraz) exhibited differential resistance or tolerance to DWV compared to mite-susceptible stocks (Carniolan and Italian). We exposed white-eyed pupae to mites, injected them with a sublethal dose (10 5 ) of DWV obtained from symptomatic adults, injected them with PBS as a sham injection, or did not manipulate them at all as a control. After adult emergence, we evaluated the dissemination of DWV variants A and B throughout four tissue types, the time to adult emergence, and symptom severity. The DWV variants differed in their dissemination patterns and associations with treatment groups and, potentially, in their contributions to symptom severity. While bee stocks did not differ in overall virus RNA copy levels or dissemination patterns associated with either DWV variant (not indicative of resistance), bee stock interacted with the treatment group and the DWV level to impact symptom severity (indicative of tolerance). However, these stock interactions were not consistently correlated with known mite resistance.
The levels of both DWV variants differed with the treatment group but not in relation to bee stock, indicating that, in this study, stocks were not resistant to DWV, unlike the complementary study on injected adults where some stocks appeared to exhibit DWV resistance (Penn et al., 2021). Sublethal pupal injections of DWV (inoculum obtained from symptomatic adults screened using general DWV primers) induced elevated levels of DWV-A but not DWV-B, which is also dissimilar to responses in DWV-injected adults from different cohorts collected within the same timeframe from the same colonies using an aliquot from the same inoculum (both variants were elevated with DWV injections) (Penn et al., 2021). Conversely, PBS injection and mite exposure pupal treatment groups induced higher levels of DWV-B but not DWV-A (Figure 2). The elevated levels of DWV-B in the PBS and mite treatment groups are also similar to prior responses seen in adult bees but not necessarily in pupae (Wu et al., 2017; Dubois et al., 2020; Norton et al., 2020; Penn et al., 2021; Ray et al., 2021). The differential responses to viral infection between pupae and adults have been previously documented both in infection levels and in the ability of dsRNA treatments to mitigate infections (Möckel et al., 2011; Desai et al., 2012; Thaduri et al., 2019).

FIGURE 6 | Observed rates of DWV symptom severity. The observed rates of symptom severity for each stock and treatment combination for the subset of bees used for viral analyses (N = 9 bees per stock × treatment combination, N = 180 bees total) indicate significant bee stock × treatment interactions. Contrary to expectations, these interactions are not consistent for all Varroa mite-resistant stocks (Pol-Line, Russian, and Saskatraz). Zero represents no symptoms and 3 represents debilitating morphological symptoms based on Supplementary Figure S1.
Such ontogenetic responses to viral infections are likely related to differences in immune gene regulation among life stages, with the introduction of infection or stress at certain points such as the pupal stage being more critical due to lower levels of immune response (Wilson-Rich et al., 2008;Laughton et al., 2011;Koleoglu et al., 2017). These results highlight the importance of including life stage as a factor when investigating or selecting for viral resistance.
Despite having no symptoms and no prior exposure to Varroa mites, the untreated control group bees exhibited some DWV infection, possibly from vertical transmission or horizontal transmission in larval food, as mites were not present within the pupal cells (Yue et al., 2007; Möckel et al., 2011). Therefore, the increased levels of DWV in the PBS group compared to those of the control group are most likely due to injection trauma inducing viral replication (Gusachenko et al., 2020; Penn et al., 2021). The Varroa mites used in this study had greater DWV-B levels (10 4.82 ± 0.20 ) than DWV-A (10 2.42 ± 0.34 ), with similar DWV levels observed in previous studies (Gisder et al., 2009; Wu et al., 2017; Tentcheva et al., 2018). However, the observed mite DWV levels were not correlated with the associated bee levels for either DWV variant (unlike Wu et al., 2017). Mites may lose their DWV infection quite rapidly, particularly if mites feed on bees with low virus levels (Posada-Florez et al., 2019), and different mite loads do not necessarily increase bee DWV levels but may be associated with certain DWV variants (Ryabov et al., 2014). Given these data and that the mites were fed on pupae exchanged daily for 24-48 h prior to experimental use, the similar responses to the mite and PBS injection treatment groups in this study are not unprecedented.
Patterns of DWV dissemination to different tissue types following pupal exposure varied with the virus variant and the treatment group but not bee stock, reflecting complementary data on injected adult bees from the same colonies (Penn et al., 2021). For dissemination, DWV infection of both variants was readily evident in bee heads, consistent with work showing high DWV infections within the head particularly in overt infections (Shah et al., 2009;Zioni et al., 2011;Penn et al., 2021). The levels of DWV in other tissue types were variant-dependent. Unless directly injected with DWV, abdomens had lower levels of DWV-A than other tissue types (Figure 3) but had levels of DWV-B matching that of other tissues in the mite and PBS treatment groups (Penn et al., 2021). Treatment group differences in dissemination may indicate that replication of latent viral infections may not only be induced by non-viral stressors but also depend on virus variant and bee immune responses (Anderson and Gibbs, 1988;Yang and Cox-Foster, 2005;Boncristiani et al., 2013;Kuster et al., 2014;Khongphinitbunjong et al., 2015;Annoscia et al., 2019;Remnant et al., 2019;Tehel et al., 2019;Mookhploy et al., 2021). For instance, phenol oxidase and antimicrobial peptides have been shown to be upregulated when bees were injected with DWV but not PBS compared to controls, whereas hemocyte counts increased following both DWV and PBS injections (Ryabov et al., 2016;Millanta et al., 2019).
Both DWV variants contributed to symptom presence: DWV-B levels and DWV injection (associated with elevated DWV-A levels) were significant in the model (Table 5) and the symptom presence/absence classification tree (Figure 7A), similar to prior work (Tehel et al., 2019; Dubois et al., 2020). Symptom severity was linked to all non-control treatment groups in the model (DWV-A associated with DWV injection and DWV-B associated with PBS injection and mite exposure) (Supplementary Table S5). Though previous data on how DWV levels impact symptom severity have had mixed results (Möckel et al., 2011; Desai et al., 2012; Tehel et al., 2019; Dubois et al., 2020; Yañez et al., 2020; Mookhploy et al., 2021), we observed that the DWV treatment group (elevated DWV-A) and higher levels of DWV-B were associated with the most severe symptoms (Figure 7). The two variants may also differ in which symptom severity they cluster with (Figure 8B), where DWV-A (and the DWV treatment group) clusters with a severity of 3 (most severe), whereas DWV-B clusters closer to severities 1 and 2. Variant-related differences in mortality (not measured here), where DWV-A-infected bees had higher mortality than DWV-B-infected bees, have also been documented in Australian, DWV-naïve bee populations (Norton et al., 2020). Using the classification tree data (Figure 7), we found that DWV levels of 10 10.91 appeared to be the initial cutoff point for symptom presence and severity. However, these values varied more with the variant in the symptom logit model; symptomatic bees had an average of 10 9.71 ± 0.20 for DWV-A and 10 10.14 ± 0.12 for DWV-B, compared to asymptomatic bees with an average of 10 8.45 ± 0.14 DWV-A and 10 9.57 ± 0.11 DWV-B.
The two DWV variants were also significantly positively correlated with each other as seen in similar studies of both injected pupae and adults (Dubois et al., 2020;Penn et al., 2021), potentially reflecting the presence of variant recombinants that will need to be addressed in future work (Gisder et al., 2018;Jamnikar-Ciglenecki et al., 2019;Martin and Brettell, 2019).
Bee stock differences could exacerbate symptoms and infection levels after viral exposure as immune responses and rates of both pupal and adult exposure have been shown to shift with host genetics (Santillán-Galicia et al., 2010;Laughton et al., 2011;Möckel et al., 2011;Chang et al., 2021;Hinshaw et al., 2021;Posada-Florez et al., 2021;Weaver et al., 2021). For instance, hygienic bees may have higher exposure rates to DWV than nonhygienic bees since hygienic bees exhibit increased rates of cannibalism on mite-infested pupae (Posada-Florez et al., 2021). When we considered adult emergence time and DWV symptom presence and severity, bee stocks appeared to differ in their tolerance to DWV; though these results were not necessarily consistent with stocks bred for mite resistance (Khongphinitbunjong et al., 2016;Locke et al., 2021). In the multivariate and classification tree analyses (Figures 7, 8), the Russian stock most consistently grouped with Carniolan as being DWV-tolerant (high level of the virus with fewer symptoms and/ or lower severity), while Pol-Line grouped with Italian and Saskatraz as more DWV susceptible, particularly when considering symptom severity (Dim.1 axis on Figures 8A,B). However, when focusing on DWV-B levels ( Figure 8B), the stock groupings were consistent with mite resistance (positive on Dim.2 axis). Russian, Pol-Line, and Saskatraz grouped near the higher DWV-B loading and the mite and PBS treatment groups on the positive side of Dim.2 in the FAMD ( Figure 8B). As prior work has shown that mites are more likely to be infected with DWV-B compared to DWV-A (Gisder and Genersch, 2021), we might expect that mite-resistant stocks would group together when based on variant B.
Differentiation of mite-resistant stocks from each other is not surprising based on some limited prior research (Khongphinitbunjong et al., 2016;Penn et al., 2021) and given their different selection regimens (Rinderer et al., 2010;Robertson et al., 2014;Danka et al., 2016;Saelao et al., 2020). The separation of Pol-Line from Russian and Carniolan bees is similar to the grouping by genetic sequencing conducted by Saelao et al. (2020), where Pol-Line splits into a separate group from Carniolan, Italian, and Russian stocks (Saelao et al., 2020). Although Saelao et al. (2020) did not include Saskatraz bees in their analysis and our Italian bees did not segregate similarly to theirs in our analyses, it appears that the Pol-Line separation from Carniolan and Russian bees may be due, in part, to differential susceptibility to DWV in addition to genetic clustering for other traits (e.g., the strong selection for Varroa sensitive hygienic behavior). When we compared results from this study and the complementary study on injected adult bees from the same colonies (Penn et al., 2021), injected Pol-Line adults exhibited the greatest resistance to DWV-A compared to the other bee stocks but had the lowest tolerance following DWV injections of pupae in this study (Figures 6-8). Russian bees appeared tolerant to DWV overall as injected adults had the highest levels of DWV-A in the other study and the greatest tolerance in this study. These data indicate that there might be tradeoffs in breeding for mite resistance, DWV resistance, and symptom expression or tolerance. Regardless of the particular mechanisms underpinning these differences, for example, physiological responses to DWV or prevention of Varroa parasitism and DWV exposure (Navajas et al., 2008;Khongphinitbunjong et al., 2015;Posada-Florez et al., 2021), stock-related virus resistance or tolerance may help account for differences in colony survival in the field (Locke et al., 2014;Thaduri et al., 2019).
CONCLUSION
These data suggest that the genetic stock of the honey bee host is important for viral tolerance when pupae are exposed to DWV and other stressors but is not always perfectly aligned with breeding for mite resistance. Furthermore, we found that DWV variants differ in their dissemination in newly emerged adults and contribute significantly to DWV symptom presence, though differentially to symptom severity. More severe symptoms were associated with DWV injections that elevated DWV-A levels, while less severe symptoms were associated with DWV-B, injection injury, and mite feeding. Given prior work on the same colonies showing that certain stocks of injected adult bees are more resistant to DWV-A infection over time, further study into the timing of infection and stock-mediated immune responses is vital to unraveling how genetic stocks interact with DWV. Comparison of these studies also indicates that more study of isolated viral variants, rather than the naturally occurring combination of variants from overtly symptomatic adults, is needed to parse out the full physiological impacts of each variant. More broadly, this research continues to highlight the importance of considering the combination of host genotype and pathogen genotype in epidemiological studies.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
MS-F, KH, and YC designed the research. HP performed the experiments, conducted the statistics, analyzed the data, and wrote the manuscript. All authors edited the manuscript, contributed to this article, and approved the submitted version.
ACKNOWLEDGMENTS
We would like to thank Phil Tokaraz for his assistance in creating injection inoculum and for troubleshooting extraction protocols and Lilia de Guzman for her expertise on maintaining the mite treatment group. We would like to thank Evan Bramlet for help in pulling pupae, Hunter Martin for pupal injections, and Natalie Martin and RaeDiance Fuller for tissue dissections, RNA extraction, and RT-PCR. We also thank Albert Robertson for supplying Saskatraz queens. Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture. The USDA is an equal opportunity provider and employer.
The role of Si interstitials in the migration and growth of Ge nanocrystallites under thermal annealing in an oxidizing ambient
We report a unique growth and migration behavior of Ge nanocrystallites mediated by the presence of Si interstitials under thermal annealing at 900°C within an H2O ambient. The Ge nanocrystallites were previously generated by the selective oxidation of SiGe nanopillars and appeared to be very sensitive to the presence of Si interstitials that come either from adjacent Si3N4 layers or from within the oxidized nanopillars. A cooperative mechanism is proposed, wherein the Si interstitials aid in both the migration and coarsening of these Ge nanocrystallites through Ostwald ripening, while the Ge nanocrystallites, in turn, appear to enhance the generation of Si interstitials through catalytic decomposition of the Si-bearing layers.
Background
Silicon-based technology is the prime enabler for high-density integrated microelectronic circuits, optoelectronics, and photovoltaic devices with ubiquitous applications ranging from mobile devices to high-end computing and communications. As Si complementary metal-oxide-semiconductor (CMOS) circuits are relentlessly scaled down to 16 nm or smaller dimensions, knowledge about fundamental nanoscopic processes in Si is becoming crucial for developing a good understanding of the limitations of nanofabrication and the development of future evolutionary directions for the technology as a whole. Many processing reactions including epitaxial growth, doping, oxidation, and silicidation are affected by the native defects in Si such as vacancies, self-interstitials, and their complexes. It is believed that Si interstitials play an important role in these processes, mostly detrimental, for instance causing such effects as undesirable transient-enhanced diffusion of dopants in p/n junctions [1,2], metal spiking at silicide/Si interfaces [3], interfacial traps along the gate oxide/Si interface [4], and stacking faults/dislocations in the epitaxial layer [1,5,6].
In this paper, we report a unique effect, hitherto unreported, that is attributable to Si interstitials present within oxide layers previously generated by the selective oxidation of polycrystalline-SiGe (poly-SiGe) nanopillars leaving behind Ge quantum dots (QDs) or nanocrystallites when the preferential oxidation of Si is complete. In this novel phenomenon, these Ge QDs or nanocrystallites appear to be very sensitive to the presence of Si interstitials, almost acting as detectors for these interstitial species. The mechanism appears to be complex and long range in comparison to the typical diffusion lengths of Si interstitials within oxide layers.
Methods
Three different cases were considered for our experimental study. All cases consisted of heterostructures as shown in Figures 1,2,3,4. These samples were prepared using a CMOS-compatible approach by the deposition of poly-Si 0.85 Ge 0.15 layers over buffer layers of Si 3 N 4 or SiO 2 on Si substrates using low-pressure chemical vapor deposition. In general, a multilayer deposition of Si 3 N 4 /SiO 2 /Si 0.85 Ge 0.15 / SiO 2 was carried out sequentially on top of a Si substrate. The topmost, thin SiO 2 layer is deposited as a hard mask for subsequent plasma etching for producing SiGe nanopillars. In one case (Figure 2), a thin layer of Si 3 N 4 was deposited immediately prior to the deposition of the SiGe layer.
The poly-Si 0.85 Ge 0.15 layers were lithographically patterned to create nanopillar structures of various diameters (50 to 120 nm) over the buffer oxide layers and then subsequently oxidized at 900°C for 10 to 90 min to produce Ge nanocrystallites embedded within the oxide (Figure 2). It takes about 20 min to convert a 60-nm-thick, 120-nm-wide poly-Si 0.85 Ge 0.15 pillar completely into SiO 2 /Ge nanocrystallites at 900°C by thermal oxidation within an H 2 O ambient. The entire process has been described together with the mechanism for Ge nanocrystallite formation in previous publications [7][8][9]. For yet another sample (Figure 3), the oxidized pillars were subsequently encapsulated via the conformal deposition of a thin capping layer of Si 3 N 4 . Details of the thicknesses of the various layers are provided in the schematic diagrams of various structures. It is our contention that Si interstitials are provided both by the Si 3 N 4 layers and by the oxidized SiGe nanopillars themselves, in the latter case, perhaps generated by the incomplete oxidation of the Si within the SiGe.
Results and discussion
The experimental procedure for the formation of Ge nanocrystallite cluster within SiO 2 is described schematically in Figure 1. The SiO 2 capping layer prevents the evaporation of Ge during the final, high-temperature oxidation process for the generation of Ge QDs from the SiGe layer. The bottom Si 3 N 4 layer (in contact with the Si substrate) also acts as an oxidation mask to protect the Si substrate from oxidation during the thermal oxidation of the SiGe nanopillars. Thermal oxidation preferentially converts the Si from the poly-Si 0.85 Ge 0.15 into SiO 2 , while squeezing the Ge released from solid solution within each poly-SiGe grain into irregularly shaped Ge nanocrystallite clusters that ostensibly assume the crystal orientation and the morphology of the original poly-SiGe grains. Thus, within this newly formed SiO 2 , a self-assembled cluster of Ge nanocrystallites appears in the core of the oxidized nanopillars ( Figure 1) and the Ge nanocrystallites are 5.8 ± 1.2 nm in size with an interspacing of approximately 4.8 nm [7].
The first evidence of a unique growth and migration behavior mediated by the presence of Si interstitials was observed in the sample that contained a thin Si 3 N 4 layer directly below the original SiGe nanopillar ( Figure 2) and which was subjected, following oxidation of the poly-Si 0.85 Ge 0.15 layer, to further thermal annealing at 900°C for 30 min in an H 2 O ambient. The entire cluster of Ge nanocrystallites appears to migrate from its original location within the oxide and ultimately penetrates the Si 3 N 4 layer. We believe that this is because of the Si 3 N 4 layer acting as an initial, local source of Si interstitials via a catalytic decomposition process described elsewhere [9,10]. In brief, the Ge nanocrystallite clusters/QDs migrate through the underlying Si 3 N 4 layer in a two-step catalytic process, during which the QDs first enhance the local decomposition of the Si 3 N 4 layer, releasing Si that subsequently migrates to the QDs. In the second step, the Si rapidly diffuses and is ultimately oxidized at the distal surface of the QDs, generating the SiO 2 layer behind the QDs and thus facilitating the deeper penetration of the QDs in the Si 3 N 4 layer. It is clearly seen in Figure 2 that an increase in the layer thickness of Si 3 N 4 in proximity to the SiGe nanopillars enhances the initial, local source of Si for facilitating the migration of the as-formed Ge nuclei after the SiGe pillar is oxidized. The increased Si content results in a considerable enhancement in the coarsening of the Ge nanocrystallites, as observed when increasing the thickness of buffer Si 3 N 4 from 8 to 15 nm (Figure 2a,b), and also serves to achieve complete coalescence of the nanocrystallites to form a single Ge QD when the buffer Si 3 N 4 is thick enough (22 nm) (Figure 2c). Attendant to the migration process are changes that occur to the crystallographic morphology, crystallinity, and sizes of the Ge nanocrystallites. 
Thus, the Ge nanocrystallites are undergoing an Ostwald ripening process [11] which also, in addition to the migration, appears to be facilitated by the Si interstitials.
Further evidence of the Si interstitial-mediated Ostwald ripening process was provided by the sample with the Si 3 N 4 capping layer (Figure 3) subjected to thermal annealing at 900°C for 90 min in an H 2 O ambient. In this case, the Ge nanocrystallite clusters within the pillars experience lateral Si interstitial fluxes in all azimuthal directions because of the surrounding Si 3 N 4 . Therefore, the in-plane symmetry of the radial Si interstitial fluxes prevents the Ge nanocrystallite clusters from adopting any one particular direction for preferential migration as was seen in the previous case (Figure 2). However, the Ostwald ripening proceeds unhindered and results in significant coarsening of the Ge nanocrystallites by as much as 3 to 4×! With the understanding gained from the above two cases, we can now examine the case of the nanopillar sample itself, without either the underlying Si 3 N 4 layer or the Si 3 N 4 capping layer, but subjected to the same thermal annealing at 900°C for various times within an H 2 O ambient. In this case, it is observed that the Ostwald ripening process occurs at a much slower rate, with only a slight change in the average size of the Ge nanocrystallites within the cluster. Starting from an original average size of 5.8 ± 1.2 nm for the as-formed Ge nanocrystallites, Figure 4a shows the time evolution of the Ge nanocrystallite clusters formed after thermal annealing at 900°C under an H 2 O ambient of 120-nm-diameter pillars of previously oxidized Si 0.85 Ge 0.15 for annealing times of 10, 40, 70, and 100 min, respectively. The average nanocrystallite size changes from approximately 7 nm at 10 min of annealing to 8.7 ± 0.9 nm at 40 min, 10.5 ± 1.8 nm at 70 min, and 11.2 ± 2.5 nm at 100 min of annealing (Figure 4b).
Based on the above evidence, we believe that the slight coarsening of the Ge nanocrystallites that is observed with increased annealing times is mediated by the small, residual concentration of Si interstitials left behind after thermal oxidation of the SiGe layer. The Ostwald ripening process essentially stops around 70 min when these interstitials are used up, i.e., converted to oxide.
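The stalling of the ripening can be checked directly against the mean sizes quoted above. The following minimal Python sketch uses only the average sizes reported in the text for Figure 4b (it is an illustration of the trend, not part of the original analysis):

```python
# Mean Ge nanocrystallite size (nm) vs. annealing time at 900°C (min),
# as quoted in the text for Figure 4b; t = 0 is the as-formed size.
times = [0, 10, 40, 70, 100]
sizes = [5.8, 7.0, 8.7, 10.5, 11.2]

# Size gained in each successive annealing interval: the growth nearly
# stalls in the final (70 -> 100 min) interval, consistent with the
# residual Si interstitials being consumed around 70 min.
growth = [b - a for a, b in zip(sizes, sizes[1:])]
print(growth)
```

The last interval's growth increment is well below the earlier ones, which is the quantitative signature of the ripening process stopping once the residual interstitials are oxidized.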
The above TEM observations clearly reveal that the growth and migration behaviors of Ge nanocrystallites are very sensitive to the presence and the content of Si interstitials that are provided either externally by adjacent Si 3 N 4 layers or by small concentrations of residual Si interstitials remaining within the oxidized poly-SiGe pillars. The role of Si interstitials in the growth of Ge nanocrystallites under thermal annealing in an oxidizing ambient is sketched in Figures 2d, 3d, and 4c. Although a large body of work exists in the literature on the generation and role of Si interstitials, to our knowledge, the above phenomenon has never been reported before. Previous work has shown that the thermal oxidation of Si induces a drastic lateral expansion of the silicon lattice [12] and generates silicon self-interstitials as a means of partially relieving the compressive stress in the growing oxide layer that develops as a result of a 2.25× volume expansion when Si is converted to SiO 2 . The majority of these Si interstitials generated during Si oxidation diffuse into the growing oxide layer and are also oxidized [13,14], while a relatively small, but significant, amount of interstitials diffuse into the Si substrate, causing supersaturation of these interstitials and the consequent precipitation as oxidation stacking faults (OSFs) [5,6] or oxidation-enhanced diffusion (OED) [1,2] of some dopants. Interestingly, the OED of boron during the thermal oxidation of Si is effectively suppressed through the introduction of a thin layer of Si 1−x Ge x or Si 1−x Ge x C y over the Si substrate, or even completely eliminated when the Ge or C concentration is high [15][16][17]. Moreover, the reduction of the Si interstitials has been shown to be Ge concentration dependent.
Again, to our knowledge, we have not found previous work describing a cooperative mechanism, wherein the Si interstitials aid in both the migration of Ge nanocrystallites and in the coarsening of these nanocrystallites through Ostwald ripening as clearly shown above. The additional, interesting aspect of this novel mechanism is that as described by us previously [9,10], the Ge nanocrystallites also appear to enhance the decomposition of the Si-bearing Si 3 N 4 layers resulting in further generation of Si interstitials.
The quality of the oxide generated by the thermal oxidation of the poly-Si 0.85 Ge 0.15 could also play a significant role in facilitating the new mechanism that we have discovered. Diffusion lengths of Si interstitials in SiO 2 calculated at 900°C for diffusion times of 10, 40, 70, 100, and 145 min are 0.72, 1.43, 1.89, 2.26, and 2.72 nm, respectively, based on the equation D = 1.2 × 10 −9 · exp(−1.9/k B T) [18]. Obviously, these diffusion lengths are too small to explain the Si interstitial-mediated mechanism that we have observed. Hence, we believe that the oxide generated from poly-Si 0.85 Ge 0.15 is possibly not as dense as the conventional, thermally generated oxide from Si substrates and therefore allows the faster diffusion of the Si interstitials through the oxide.
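As a sanity check on the quoted numbers, the diffusion lengths follow from L = √(Dt) with the Arrhenius expression above (reading the 1.9 in the exponent as an activation energy in eV and D in cm²/s, per ref. [18]). This short script reproduces the listed values to within rounding:

```python
import math

k_B = 8.617e-5                              # Boltzmann constant, eV/K
T = 900 + 273.15                            # annealing temperature, K
D = 1.2e-9 * math.exp(-1.9 / (k_B * T))     # diffusivity in SiO2, cm^2/s [18]

for minutes in (10, 40, 70, 100, 145):
    # diffusion length L = sqrt(D * t), converted from cm to nm
    L_nm = math.sqrt(D * minutes * 60) * 1e7
    print(f"{minutes:3d} min: {L_nm:.2f} nm")
```

The computed values agree with the 0.72 to 2.72 nm range quoted in the text, underscoring that sub-3-nm diffusion lengths cannot account for the long-range interstitial transport observed.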
Conclusions
In conclusion, we have observed a unique phenomenon of the migration and growth of Ge nanocrystallite clusters within SiO 2 layers that is made possible by the presence of Si interstitials during high-temperature thermal annealing in an oxidizing ambient. The Ge nanocrystallites generated by selective oxidation of SiGe appear to be very sensitive to the presence of Si interstitials that are provided either by adjacent Si 3 N 4 layers or by residual Si interstitials left behind after thermal oxidation of the SiGe. The Si interstitials also facilitate the Ostwald ripening of the Ge nanocrystallites. We have proposed a novel cooperative mechanism for this Si interstitial-mediated growth and migration of Ge nanocrystallites under thermal oxidation. We envisage further scientific exploration of this unique phenomenon and the demonstration of new device geometries with Ge QDs buried within various Si-containing layers.
|
2017-06-26T11:57:14.619Z
|
2014-07-07T00:00:00.000
|
{
"year": 2014,
"sha1": "02f974a6feddf9cef14ae68b0fedeae1a682b780",
"oa_license": "CCBY",
"oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/1556-276X-9-339",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51ad14eb71b918b8a0df959fb56c7aab3b014771",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
}
|
211265552
|
pes2o/s2orc
|
v3-fos-license
|
A CT-based radiomics nomogram for differentiation of focal nodular hyperplasia from hepatocellular carcinoma in the non-cirrhotic liver
Background The purpose of this study was to develop and validate a radiomics nomogram for preoperative differentiating focal nodular hyperplasia (FNH) from hepatocellular carcinoma (HCC) in the non-cirrhotic liver. Methods A total of 156 patients with FNH (n = 55) and HCC (n = 101) were divided into a training set (n = 119) and a validation set (n = 37). Radiomics features were extracted from triphasic contrast CT images. A radiomics signature was constructed with the least absolute shrinkage and selection operator algorithm, and a radiomics score (Rad-score) was calculated. Clinical data and CT findings were assessed to build a clinical factors model. Combined with the Rad-score and independent clinical factors, a radiomics nomogram was constructed by multivariate logistic regression analysis. Nomogram performance was assessed with respect to discrimination and clinical usefulness. Results Four thousand two hundred twenty-seven features were extracted and reduced to 10 features as the most important discriminators to build the radiomics signature. The radiomics signature showed good discrimination in the training set (AUC [area under the curve], 0.964; 95% confidence interval [CI], 0.934–0.995) and the validation set (AUC, 0.865; 95% CI, 0.725–1.000). Age, Hepatitis B virus infection, and enhancement pattern were the independent clinical factors. The radiomics nomogram, which incorporated the Rad-score and clinical factors, showed good discrimination in the training set (AUC, 0.979; 95% CI, 0.959–0.998) and the validation set (AUC, 0.917; 95% CI, 0.800–1.000), and showed better discrimination capability (P < 0.001) compared with the clinical factors model (AUC, 0.799; 95% CI, 0.719–0.879) in the training set. Decision curve analysis showed the nomogram outperformed the clinical factors model in terms of clinical usefulness. 
Conclusions The CT-based radiomics nomogram, a noninvasive preoperative prediction tool that incorporates the Rad-score and clinical factors, shows favorable predictive efficacy for differentiating FNH from HCC in the non-cirrhotic liver, which might facilitate clinical decision-making process.
Background
Hepatocellular carcinoma (HCC) is the most common primary liver cancer and the third most common cause of cancer death worldwide [1,2]. Approximately 80% of cases of HCC occur in patients with liver cirrhosis, arising from hepatitis B and C infections or alcoholism [2,3]. In patients with liver cirrhosis, noninvasive diagnosis of HCC can be established by a characteristic feature of arterial phase hyperenhancement followed by portal venous or delayed phase washout on multiphasic contrast CT or MRI. However, an increasing number of HCC arises in a non-cirrhotic liver [3], probably due to transient hepatitis B infection or due to diffuse liver damage caused by non-alcoholic fatty liver disease. In such non-cirrhotic cases, other benign hypervascular liver lesions (hepatocellular adenoma [HCA] and focal nodular hyperplasia [FNH]) should be taken into the differential diagnosis.
FNH is the second most common benign liver tumour in the non-cirrhotic liver, characterized by nodular hyperplasia of the hepatic parenchyma around a central stellate area of fibrosis associated with a congenital vascular malformation [4][5][6][7]. Typical FNH can be diagnosed with confidence by using multiphasic contrast CT or MRI. Atypical FNH may show less intense enhancement, absence of a central scar, pseudocapsular enhancement on delayed images, as well as the presence of hemorrhage, calcification, or necrosis [8,9], making the differential diagnosis between atypical FNH and HCC rather difficult. The distinction between HCC and FNH is critical as the management differs considerably.
Various imaging modalities have been applied in the distinction between HCC and FNH, such as CT [1,9,10], Doppler ultrasound [11,12] and MRI [1,3,5,13-15]. In recent studies, gadoxetic acid-enhanced MRI has been increasingly used in differentiating focal liver lesions. HCC generally shows definite hypointensity on the hepatobiliary phase (HBP) because of decreased or absent uptake of gadoxetic acid. On the other hand, FNH commonly shows iso- or hyperintensity on the HBP because of its preserved ability to take up gadoxetic acid. However, 10-15% of HCCs show iso- or hyperintensity on the HBP because of overexpression of organic anion-transporting polypeptide (OATP) 1B3, which is one of the uptake transporters of gadoxetic acid into hepatocytes [1]. Approximately 10-12% of FNHs may not show iso- or hyperintensity on the HBP [7]. The paradoxical uptake or lack of uptake may make the differential diagnosis of HCC from FNH rather difficult.
The purpose of this study was to construct and validate a CT based radiomics nomogram that would incorporate a radiomics signature and clinical factors for the preoperative differentiation between HCC and FNH in the non-cirrhotic liver.
Patients
The institutional review board of our hospital approved this retrospective study with a waiver of obtaining informed consent.
Patients were identified by searching the pathology database from one institution (The Affiliated Hospital of Qingdao University) between June 2008 and February 2019 for the diagnosis of FNH or HCC on surgically resected specimens. A total of 156 patients with FNH (n = 55, 32 men and 23 women; mean age, 31.82 ± 12.55 years) and HCC (n = 101, 85 men and 16 women; mean age, 57.10 ± 9.89 years) were enrolled in this study according to the following inclusion criteria: (1) patients with pathologically proven FNH or HCC; (2) patients who underwent contrast-enhanced CT less than 15 days before surgery; (3) patients with complete clinical and pathologic data. The exclusion criteria were as follows: (1) HCC patients with CT features of liver cirrhosis (the cirrhotic liver may demonstrate a nodular surface, widened fissures between lobes, an atrophied right lobe, hypertrophy of the left lobe and/or caudate lobe, and other features including portal vein dilation, portosystemic shunts, splenomegaly, and ascites, etc.); (2) HCC patients who received chemotherapy or radiotherapy before surgery; (3) image quality unsatisfactory for analysis. The patients were divided into two independent sets: 119 patients treated between June 2008 and January 2017 constituted the training set, whereas 37 patients treated between February 2017 and February 2019 constituted the validation set.
Clinical information including age, gender, hepatitis B and C virus (HBV and HCV) infection and serum alpha fetoprotein (AFP) level (> 400 ng/mL; ≤ 400 ng/mL) were derived from medical records.
CT image acquisition
CT scans were obtained with two 64-slice CT scanners (Somatom Sensation 64, Siemens Healthcare, Erlangen, Germany; Discovery 750, GE Healthcare, Milwaukee, USA) using the following parameters: 120 kVp tube voltage, 200 mAs or 250-400 mA (using automatic tube current modulation) tube current, 64 × 0.6 mm or 64 × 0.625 mm detector collimation, a matrix of 512 × 512, a pitch of 1 or 1.375, a gantry rotation time of 0.5 s and a slice thickness of 5 mm. The scanning area covered the entire liver. An 80-90 mL volume of nonionic contrast agent (Iopromide, Ultravist 370; Bayer, Germany) was administered into the antecubital vein by a power injector at a rate of 2.5 mL/s. Pre-contrast CT was first obtained, followed by three post-contrast CT scans of the liver obtained in arterial phase (AP, 30 s), portal venous phase (PVP, 60 s), and delayed phase (DP, 90-120 s).
CT features analysis
The CT images were analyzed in our Picture Archiving and Communication System (PACS, Version 3.2.8, GE Healthcare, Milwaukee, USA) by two radiologists (Reader 1, P.N; Reader 2, G.Y) with eight and six years of abdominal imaging experience, respectively. Blinded to the clinicopathologic data, the two readers interpreted the following subjective CT features by consensus: the diameter of the tumour on the axial CT image; shape (round or not round); a central scar (present or absent; a "central scar" was defined as a central stellate structure showing low attenuation on unenhanced CT images, hypovascular enhancement on the AP and PVP, and delayed enhancement on the DP); degeneration (present or absent; "degeneration" was considered a non-enhancing area on the dynamic study due to necrosis or hemorrhage; we assumed that low attenuation on unenhanced CT images corresponded to necrosis, whereas high attenuation on unenhanced CT images indicated hemorrhage); fat deposition (present or absent; "fat deposition" was defined as an area showing fat attenuation on unenhanced CT images); calcification (present or absent); a capsule-like rim (present or absent; "a capsule-like rim" was defined as a tumour rim showing low attenuation on unenhanced CT images and hypovascular-delayed enhancement on dynamic studies); dysmorphic vessels (present or absent; "dysmorphic vessels" were regarded as prominent or enlarged vessels in or around the tumour); and enhancement pattern (the enhancement pattern on dynamic CT was classified into early enhancement with a washout pattern, early enhancement with no washout pattern, and other patterns; early enhancement was defined as showing higher attenuation than the background liver on AP; washout was defined as a nodule showing lower attenuation than the background liver on PVP to DP; no washout indicated that the nodule showed equivalent or higher attenuation than the background liver on PVP to DP; other patterns referred to the enhancement patterns not mentioned above).
Construction of the clinical factors model
Univariate analysis was applied to compare the differences of the clinical factors (including clinical information and CT features) between the two groups, and a multiple logistic regression analysis was used to build the clinical factors model using the significant variables from the univariate analysis as inputs. Odds ratios (OR) as estimates of relative risk with 95% confidence intervals (CI) were obtained for each risk factor.
Tumour segmentation and radiomics feature extraction
Tumor regions of interest (ROIs) were manually segmented in the largest cross-sectional area using ITK-SNAP software (Version 3.8.0). Contouring was drawn slightly within the borders of the tumours on AP, PVP, and DP, but avoiding covering the adjacent hepatic parenchyma and perinephric fat.
Feature extraction was performed using the Radcloud platform (Huiying Medical Technology Co., Ltd). A total of 4227 radiomics features were extracted from the ROIs. The radiomics features are divided into four groups: (1) intensity statistics features, which consist of 19 features that quantitatively delineate the distribution of voxel intensities within the ROI through commonly used and basic metrics; (2) shape features, including 10 two-dimensional features, which reflect the shape and size of the ROI; (3) texture features, composed of 59 features calculated from the gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM) and gray level size zone matrix (GLSZM), which quantify the heterogeneity of the ROI; and (4) filter and wavelet features, which include the intensity and texture features derived from filter transformation and wavelet transformation of the original image, obtained by applying filters such as exponential, logarithm, square, square root and wavelet (wavelet-LHL, wavelet-LHH, wavelet-HLL, wavelet-LLH, wavelet-HLH, wavelet-HHH, wavelet-HHL, and wavelet-LLL).
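To make the texture-feature idea concrete, here is a minimal NumPy sketch (not the Radcloud implementation) of a symmetric, normalized GLCM for a single pixel offset and the classic contrast feature derived from it:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                P[img[y, x], img[y2, x2]] += 1
                P[img[y2, x2], img[y, x]] += 1   # symmetric counting
    return P / P.sum()

# Toy 3x3 image quantized to 3 gray levels
img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
P = glcm(img, levels=3)

# Contrast = sum_ij P(i, j) * (i - j)^2 -- large when co-occurring
# gray levels differ strongly, i.e. for heterogeneous texture
contrast = sum(P[i, j] * (i - j) ** 2 for i in range(3) for j in range(3))
```

Production radiomics pipelines compute many such statistics over several offsets and gray-level quantizations, but each reduces to simple sums over a co-occurrence (or run-length, or size-zone) matrix like this one.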
Inter- and intra-class correlation coefficients (ICCs) were used to evaluate the inter-observer reliability and intra-observer reproducibility of feature extraction. We randomly chose 30 cases of CT images (10 FNHs and 20 HCCs), and ROI segmentation was performed by Reader 1 and Reader 2. Reader 1 then repeated the same procedure 1 week later to evaluate the agreement of feature extraction. An ICC greater than 0.75 suggests good agreement of the feature extraction. The remaining image segmentation was conducted by Reader 1.
Construction of the radiomics signature
The radiomics features, which met the criteria of having inter-and intraobserver ICCs greater than 0.75 and being significantly different between the two groups evaluated by one-way analysis of variance (ANOVA), were entered into the least absolute shrinkage and selection operator (LASSO) regression model to select the most valuable features in the training set. The selected features were then combined into a radiomics signature. A radiomics score (Rad-score) was calculated for each patient through a linear combination of selected features weighted by their respective LASSO coefficients.
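A hedged sketch of this selection-and-scoring step with scikit-learn: the synthetic matrix below stands in for the real candidate features, and the penalty strength C is illustrative rather than the paper's tuned value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 119 "training patients", 50 candidate features,
# with only the first two features actually informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(119, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=119) > 0).astype(int)

Xs = StandardScaler().fit_transform(X)
# The L1 (LASSO) penalty drives most coefficients exactly to zero,
# so the surviving nonzero coefficients define the signature.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)
selected = np.flatnonzero(model.coef_[0])

# Rad-score: linear combination of the selected features weighted by
# their fitted coefficients, as described in the text.
rad_score = Xs[:, selected] @ model.coef_[0][selected] + model.intercept_[0]
```

In practice, the penalty strength would be chosen by cross-validation rather than fixed, but the structure of the resulting Rad-score (a weighted sum of the surviving features) is the same.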
Development of a radiomics nomogram and assessment of the performance of different models
A radiomics nomogram was developed by incorporating the significant variables of the clinical factors as well as the Rad-score. The diagnostic performance of the clinical factors model, the radiomics signature and the radiomics nomogram for differentiating FNH from HCC was assessed by using the area under the receiver operator characteristic (ROC) curve (AUC) in both the training and validation sets. A radiomics nomogram-defined score (Nomo-score) for each patient was calculated in the training and validation sets. To estimate the clinical utility of the nomogram, decision curve analysis (DCA) was performed by calculating the net benefits for a range of threshold probabilities in the training set.
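The net benefit underlying the DCA curves is a simple quantity, NB(p_t) = TP/n − (FP/n)·p_t/(1 − p_t), evaluated across a range of threshold probabilities p_t. A minimal sketch on toy labels (not study data):

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Net benefit at threshold probability pt: TP/n - FP/n * pt/(1 - pt)."""
    pred = y_prob >= pt
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * pt / (1 - pt)

# Toy example: the classifier separates the classes perfectly at pt = 0.5
y = np.array([1, 1, 0, 0])
probs = np.array([0.9, 0.8, 0.2, 0.1])
nb = net_benefit(y, probs, pt=0.5)
```

Sweeping pt and plotting net benefit for each model (plus the "treat all" and "treat none" strategies) yields the decision curves used to compare the nomogram against the clinical factors model.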
Statistics
Statistical analysis was performed using SPSS (Version 25.0, IBM) and R statistical software (Version 3.3.3, https://www.r-project.org). Univariate analysis was used to compare the differences in the clinical factors between the two groups by using the chi-square test or Fisher exact test for categoric variables, and the Mann-Whitney U test for continuous variables, where appropriate. One-way ANOVA was used to compare the value of radiomics features for differentiation of FNH and HCC. The "glmnet" package was used to perform the LASSO regression model analysis. The ROC curves were plotted using the "pROC" package. Nomogram construction was performed using the "rms" package. Differences in the AUC values between these models were analyzed using the DeLong test. DCA was performed using the "dca.R" package. P < 0.05 was considered statistically significant.
Clinical factors of the patients and the construction of the clinical factors model
The clinical factors of the patients in the training and validation sets are shown in Table 1. There were significant differences in age, gender, HBV infection, AFP level, central scar, degeneration, capsule-like rim and enhancement pattern between the two groups (P < 0.05), whereas diameter, shape, fat deposition, calcification, and dysmorphic vessels were not significantly different between the two groups (P > 0.05) in the training set. The multiple logistic regression analysis showed that only age (P < 0.001), HBV infection (P = 0.001), and enhancement pattern (P = 0.019) remained as independent predictors in the clinical factors model.
Feature extraction, selection and radiomics signature construction
Of the 4227 radiomics features extracted from AP, PVP and DP CT images, 3441 were shown to have a good inter- and intra-observer agreement, with ICCs from 0.750 to 1.000. A total of 764 radiomics features having significant differences between FNH and HCC (P = 0.001-0.050) were entered into the LASSO logistic regression model to select the most valuable features (Fig. 1). Finally, the radiomics signature was built by using 10 features. The Rad-score was calculated using the following formula: Rad-score = 0.0522 × GrayLevelNonUniformityNormalized.
The radiomics nomogram building and assessment of the performance of different models
The age, HBV infection, enhancement pattern, and Rad-score were incorporated into the radiomics nomogram building (Fig. 2). The diagnostic performance for the clinical factors model, the radiomics signature and the radiomics nomogram is summarized in Table 2. ROC curves of the three models are shown in Fig. 3. In the training set, the AUCs of the radiomics nomogram and the radiomics signature were significantly higher than that of the clinical factors model (both P < 0.001); however, no significant difference in AUC was found between the radiomics nomogram and the radiomics signature (P = 0.253). In the validation set, there were no significant differences in AUC among these three models (the clinical factors model vs. the radiomics signature, P = 0.376; the clinical factors model vs. the radiomics nomogram, P = 0.055; the radiomics signature vs. the radiomics nomogram, P = 0.345). The Nomo-scores for each patient in the training and validation sets are shown in Fig. 4. The DCA (Fig. 5) showed the radiomics nomogram had a higher overall net benefit in differentiating FNH from HCC than the clinical factors model across the majority of the range of reasonable threshold probabilities.
Discussion
The present study shows that the enhanced CT-based radiomics nomogram, which incorporates the radiomics signature and clinical factors, has favorable predictive value for differentiating HCC from FNH in the non-cirrhotic liver, with AUCs of 0.979 and 0.917 in the training and validation sets, respectively. Differentiating HCC from FNH is important to select appropriate treatment and avoid unnecessary interventions. Sufficient clinical and imaging information facilitates the correct distinction of the two lesions. FNH occurs more frequently in young women (male:female ratio = 1:8) [4,7,45]. HCC is often associated with hepatitis virus infection and a higher level of AFP. Five clinical variables, including age, gender, hepatitis B and C virus infection, and serum AFP level, were analyzed in this study; we found that FNH patients were significantly younger, with a female predominance, compared with their HCC counterparts, while in the HCC group there were more patients with hepatitis B virus infection, associated with a higher AFP level. Age and HBV infection were proven as independent predictors by using the multiple logistic regression analysis, which was consistent with previous studies.
Contrast-enhanced CT (CECT) is the first-line imaging modality for the characterization of liver lesions. However, distinguishing an FNH from an HCC on CECT remains a challenge, especially when they lack typical imaging characteristics such as a central scar, suggestive of FNH (reported in about 65% of FNHs larger than 3 cm [6]) or liver cirrhosis, suggestive of HCC. HCC shares overlapping imaging features with FNH in the non-cirrhotic liver. The classic radiological hallmark of HCC is a hyperenhancement on AP and PVP or DP washout. FNH may also present as a hypervascular lesion with intense enhancement and washout on PVP and DP. Therefore, CECT has a limited diagnostic value in the non-cirrhotic liver for the distinction of HCC and atypical FNH.
Various strategies have been proposed to differentiate benign from malignant liver tumours with conventional CT and MR imaging characteristics. Yu et al. [9] enrolled 42 HCCs and 16 FNHs to identify the value of CT spectral imaging in differentiating HCC from FNH during the AP and PVP, and found that the lesion-normal parenchyma iodine concentration ratio in AP had the highest sensitivity (100%) and specificity (100%) in differentiating HCC from FNH. Boas et al. [10] developed and validated a simplified triphasic CT-based model of tumor blood supply that combined hepatic artery and portal vein blood supply coefficients to distinguish benign (n = 32) and malignant (n = 46) liver lesions. In addition to traditional relative enhancement criteria (such as washout), hepatic artery and portal vein blood supply coefficients could be used to classify hypervascular liver lesions, achieving high specificity (97%) and high sensitivity (76%) for malignancy. Fischer et al. [3] included 55 HCCs, 28 FNHs, and 24 HCAs to identify the key MRI features that can potentially be used to differentiate between HCC and benign hepatocellular tumors in the non-cirrhotic liver. Multivariate analysis revealed T1-hypointensity, T2-hypo-or hyperintensity, lack of central tumor-enhancement, presence of satellite-lesions, and lack of liver-specific contrast media uptake were independent MRI features indicating HCC. Kitao et al. [1] tried to identify points useful in the imaging differentiation of HCC, showing hyperintensity on the HBP of gadoxetic acid-enhanced MRI and FNH and FNH-like nodules. The CT and MRI features of 51 HCCs, 10 FNHs, and 16 FNH-like nodules were analyzed. Multivariate logistic regression analysis showed that arterial phase enhancement and washout pattern at dynamic CT and decrease of ADC ratio would be important findings for the diagnosis of hyperintense HCC differentiated from FNH and FNH-like nodule. 
In the present study, a clinical factors model combining clinical data with subjective CT features was developed using multivariate logistic regression analysis, and age, HBV infection, and enhancement pattern were found to be significant predictors for differential diagnosis. Using this clinical factors model, a high AUC for the differential diagnosis of HCC from FNH was achieved (0.799 in the training set; 0.769 in the validation set).

[Fig. 2: The radiomics nomogram, combining age, HBV infection, enhancement pattern, and Radiomics score (Rad-score), developed in the training set. Enhancement patterns 1, 2, and 3 represent early enhancement with a washout pattern, early enhancement with no washout pattern, and other enhancement patterns, respectively. Note: CI (confidence interval); numbers in parentheses were used to calculate percentages.]
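The paper does not reproduce its modelling code; as a hedged sketch (not the authors' implementation), a multivariate logistic regression evaluated by AUC can be set up as follows, with synthetic stand-ins for the three clinical predictors and scikit-learn assumed available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for the three predictors named in the text.
age = rng.normal(50, 12, n)        # years
hbv = rng.integers(0, 2, n)        # HBV infection (0/1)
enh = rng.integers(1, 4, n)        # enhancement pattern (1-3)
# Synthetic label: HCC (1) made more likely by older age and HBV.
logit = 0.08 * (age - 50) + 1.5 * hbv - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, hbv, enh])
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"training AUC = {auc:.3f}")
```

In the actual study the model would of course be fitted on the training set and the AUC also reported on a held-out validation set, as in the 0.799/0.769 figures above.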
Radiomics enables the noninvasive profiling of tumor heterogeneity by extracting a high throughput of quantitative descriptors from routinely acquired CT and MRI studies. Previous investigations have shown that CT/MRI-based radiomics can be used for differentiating several hypervascular liver tumours. Raman et al. [41] demonstrated that CT texture analysis could be used to distinguish different hypervascular liver lesions using a random-forest model. Seventeen FNHs, 19 HCAs, 25 HCCs, and 19 cases of normal liver parenchyma were analyzed, and the texture model successfully distinguished the three lesion types and normal liver, with predicted classification accuracies of 91.2% for HCA, 94.4% for FNH, and 98.6% for HCC. Wu et al. [37] developed and validated an MRI-based radiomics signature to distinguish HCC and HH using four feature classifiers, and found that the logistic regression classifier showed the best predictive ability, achieving an AUC of 0.89 for differentiating HCC from HH. Stocker et al. [38] enrolled 55 cases of HCC and 45 cases of benign FNH in the non-cirrhotic liver to assess the accuracy of MRI texture features in differentiating benign from malignant liver tumours. One gray-level histogram feature (skewness) and four run-length matrix features extracted from AP images were identified as significant texture predictors for distinguishing HCC from benign hepatocellular tumors in the non-cirrhotic liver, with an accuracy of 84.5% and an AUC of 0.92. Cannella et al. [43] investigated the texture and subjective MRI features of 32 FNHs and 51 HCAs and found that MRI texture-analysis parameters combined with hypointensity on HBP imaging yielded an AUC of 0.979 and an accuracy of 96.4% for the diagnosis of HCA. A similar CT-based texture model was built by Cannella et al. [42] for the distinction of HCA and FNH.
The mean, mpp, and entropy of medium-level and coarse-level filtered images on AP were found to be independent predictors for the diagnosis of HCA, and the model based on all these parameters resulted in the largest AUC of 0.824. In this study, a radiomics nomogram was constructed by combining age, HBV infection, enhancement pattern, and Rad-score. The multiple logistic regression analysis showed that the Rad-score made the major contribution to differential diagnosis. Among the independent clinical predictors, age carried much more weight than enhancement pattern. This result is consistent with previous studies reporting that HCC occurs more frequently in older patients compared with FNH [4,7,45]. Although their enhancement patterns are significantly different, the two entities share overlapping enhancement features [6], so the enhancement pattern has a limited impact on the distinction between HCC and FNH in the non-cirrhotic liver.

[Fig. 4: The radiomics nomogram-defined scores for each patient in the training and validation sets. Orange bars represent the scores for HCC patients, while green bars represent the scores for FNH patients.]
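The text does not reproduce the Rad-score formula; in radiomics studies of this kind the score is commonly a linear combination of LASSO-selected features weighted by their fitted coefficients. A minimal sketch under that assumption, on entirely synthetic features (scikit-learn assumed available):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_cases, n_features = 120, 300   # stand-in for the thousands of radiomics features
X = rng.normal(size=(n_cases, n_features))
# Synthetic label driven by a handful of "informative" features.
y = (X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(0, 0.5, n_cases) > 0).astype(float)

# Cross-validated LASSO shrinks most coefficients to exactly zero,
# leaving a sparse feature subset.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)

# Rad-score: intercept plus the weighted sum of the selected features.
rad_score = lasso.intercept_ + X[:, selected] @ lasso.coef_[selected]
print(f"{selected.size} of {n_features} features selected")
```

In the study itself this kind of score was then entered into the multivariable logistic model alongside age, HBV infection, and enhancement pattern to build the nomogram.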
Compared to the above radiomics investigations on the discrimination of different hepatic tumours, our study had several improvements. First, we chose to focus on the distinction of FNH and HCC in the non-cirrhotic liver, because these tumours are the most difficult to differentiate in routine clinical practice and are often the cause of diagnostic errors. Second, previous studies were mainly based on texture analysis associated with only dozens of texture features. Nowadays, radiomics approaches with many more statistical features are available to provide a more comprehensive description of the tumour. In this study, a total of 4227 radiomics features were extracted and analyzed from the triphasic CT images, and finally, 10 features were selected as significant predictors to construct the radiomics signature. All the selected features were high-order filter and wavelet features that could not be acquired by using conventional texture analysis. Furthermore, AP, PVP, and DP CT images were all used for feature selection, and 5 of the 10 selected features were obtained from DP images, indicating a trend toward better lesion classification with DP images for FNH and HCC. In addition, FNH is not associated with any malignant potential, and most lesions are managed conservatively; FNH confirmed by surgical pathology therefore accounts for only a small portion of the whole cohort. More cases of FNH were enrolled in the present study than in previous studies.
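The high-order filter and wavelet features mentioned here are produced by dedicated radiomics toolkits; purely as a toy illustration (not the authors' pipeline, and with a simplified normalization), a one-level Haar-style decomposition of an image patch with simple first-order statistics per sub-band can be written in plain NumPy:

```python
import numpy as np

def haar_level1(img):
    """One-level 2D Haar-style decomposition into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def first_order(band, bins=32):
    """Mean, energy and histogram entropy of one sub-band."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {"mean": band.mean(),
            "energy": float(np.sum(band ** 2)),
            "entropy": float(-np.sum(p * np.log2(p)))}

rng = np.random.default_rng(2)
patch = rng.integers(0, 256, size=(64, 64))   # stand-in for a tumour ROI
features = {name: first_order(band)
            for name, band in zip(["LL", "LH", "HL", "HH"], haar_level1(patch))}
print(sorted(features))   # ['HH', 'HL', 'LH', 'LL']
```

A full radiomics run repeats this kind of filtering and statistics across many filters, gray-level matrices and phases, which is how feature counts in the thousands arise.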
We acknowledge the following limitations in our study. First, because of its retrospective design, potential selection bias may hamper the reproducibility and comparability of the results; thus, the clinical usefulness of this nomogram still needs independent validation in further studies. Second, this study was a single-center experience limited to our institute; multi-center studies with a larger sample are required to further validate its reproducibility. Third, two-dimensional ROIs covering the largest tumour cross-section were delineated for the extraction of radiomics features, whereas whole-tumour analysis appears more indicative of tumour heterogeneity than the largest cross-sectional area [46]. In addition, manual ROI segmentation is time-consuming and complicated, especially for tumours without a well-defined boundary, so an automatic segmentation technique with favorable reliability and reproducibility is needed. Fourth, it has been reported that slice thickness can affect the diagnostic performance of a radiomics signature, and thin slices may be more informative [47]. A slice thickness of 5 mm was used in this study, which is relatively thick for CT radiomics analysis. The difference in radiomics performance between thin- and thick-slice images will be assessed in our future study.
Conclusions
In conclusion, the CT-based radiomics nomogram developed and validated for preoperative differentiation of FNH from HCC in the non-cirrhotic liver can potentially supplement conventional imaging modalities. However, the clinical use of this tool remains to be tested.
|
2020-02-25T16:26:53.775Z
|
2020-02-24T00:00:00.000
|
{
"year": 2020,
"sha1": "c9a21f8226e5bd3b457953815c4ffd83574f87dd",
"oa_license": "CCBY",
"oa_url": "https://cancerimagingjournal.biomedcentral.com/track/pdf/10.1186/s40644-020-00297-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c9a21f8226e5bd3b457953815c4ffd83574f87dd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
218974421
|
pes2o/s2orc
|
v3-fos-license
|
Resources in Underrepresented Languages: Building a Representative Romanian Corpus
The effort in the field of Linguistics to develop theories that aim to explain language-dependent effects on language processing is greatly facilitated by the availability of reliable resources representing different languages. This project presents a detailed description of the process of creating a large and representative corpus in Romanian – a relatively under-resourced language with unique structural and typological characteristics – that can be used as a reliable language resource for linguistic studies. The decisions that have guided the construction of the corpus, including the type of corpus, its size and component resource files, are discussed. Issues related to data collection, data organization and storage, as well as characteristics of the data included in the corpus, are described. Currently, the corpus has approximately 5,500,000 tokens originating from written text and 100,000 tokens of spoken language. It includes language samples that represent a wide variety of registers (16 registers of written language and 5 registers of spoken language), as well as different authors and speakers.
Introduction
As a field, theoretical linguistics has moved from idealized descriptive accounts of language toward more explanatory theories that strive to ascertain the effects of language experience on language processing and human cognition in general. Along with this shift has come a greater appreciation of the need to consider linguistic phenomena in all the world's languages. This effort has been greatly aided by the development of computational approaches and digital resources in many of the world's languages. The availability of reliable, easy-to-access resources is crucial for different linguistics subfields. Bender (2009, 2016) argues for the importance of "linguistic knowledge" – referring to ways in which different languages vary in their structure – for the development of language-independent natural language processing systems. Corpora are an essential tool in studying typological patterns across and within languages, documenting and preserving natural and endangered languages, and developing computational tools for language processing (Gippert, Himmelmann and Mosel, 2006; Crystal, 2000; Bradley and Bradley, 2002; etc.). Hence, the project described in this paper aims at creating such a resource for one of the world's languages that remains relatively underrepresented and understudied to date. This paper provides a detailed description of the process of creating a representative corpus in Romanian, an Eastern Romance language with unique grammatical and typological characteristics. The corpus is not only a large but also a balanced repository of written and spoken samples of Romanian that can be used as a reliable tool in linguistic studies. Historically, Romanian has lacked large collections of empirical linguistic data, which has made it difficult across the decades to provide solid, empirically motivated analysis and study of the Romanian language.
Although generally characterized as a Romance language, Romanian has its unique grammatical characteristics, representing a test case for those interested in less classically analytic languages. Unlike other Romance languages, Romanian has kept a strong usage of Latin case-marking and a rich declension system (Kihm, 2012) while developing typological characteristics it shares with Balkan rather than Romance languages as an effect of language contact (Hill, 2004; D'hulst, Coene and Avram, 2004). Presently, it is spoken by approximately 24–26 million people as a native language and about 4 million people as a secondary language. As a function of its unique grammatical and typological characteristics, Romanian is of interest to linguists. In recent years, new resources have been developed that enable and facilitate the study of the language. Among these are the Romanian treebank corpora included in the Universal Dependencies (UD) Project, which contain the following typology of texts: the Romanian Non-standard UD treebank, called UAIC-RoDia, with approximately 16,190 sentences; SiMoNERo (Mitrofan et al., 2019), a medical corpus of contemporary Romanian extracted from the Biomedical Gold Standard Corpus for the Romanian Language (BioRo) (Mitrofan and Tufiș, 2018); and the Romanian UD treebank, called RoRefTrees (Mititelu, Ion, Simionescu and Irimia, 2016), containing 9,500 trees annotated according to Universal Dependencies. Other resources, not included in the UD project, are the CHILDES database (Child Language Data Exchange) (MacWhinney, 1996), containing three small Romanian corpora that represent child language, and roTenTen16 (Kilgarriff, 2014), which is a web-based corpus.
Though a very large resource, the web-based corpus data is not well balanced (Kilgarriff, 2007): web-based language samples are not intentionally selected to proportionally represent different registers 1 of the language, different authors, a specific time period, etc. The Moldavian and Romanian Dialectal Corpus (MOROCO) contains over 10 million tokens collected from the news domain, representing the Romanian dialect spoken in Romania as well as in the Republic of Moldova (Butnaru and Ionescu, 2019). The largest corpus of the Romanian language to date is the Reference Corpus of the Contemporary Romanian Language (CoRoLa), with 1,257,752,812 tokens. The data in that corpus is distributed in an unbalanced way, containing language samples from the legal, administrative, scientific, journalistic, imaginative, memoirs and blog-post domains. The motivation to create a new resource for Romanian was to build a balanced repository of the language that includes as many registers as possible, written as well as spoken, from different Romanian authors, regardless of the spoken dialect. We envision adding as many tokens of spoken data as we will have collected for written data.
1 The term register is used in the literature as an umbrella term referring to general or more specific language varieties defined by situational characteristics (i.e. language used in novels, journal articles, vs. conversations (Biber, 1995), or more specific: language used in the novels of Victor Hugo, language used in the writings of Shakespeare, etc.).
Planning the Building of the Corpus.
Corpus designers need to carefully address the following general practices when creating a corpus: the planning of the corpus construction, including decisions concerning the corpus type, size, representativeness and balance; data collection and storage, including obtaining copyright permission, creating the metadata and cleaning the text; and finally, decisions referring to corpus annotation (Biber, Conrad & Reppen, 1998; McEnery and Wilson, 1996; Meyer, 2002). Based on these practices, our project follows the methodological decisions below, made prior to data collection:
1. The size of the corpus will reach at least 4 million words.
2. The corpus will contain at least 15 different registers.
3. Each register will contain approximately 100,000 words.
4. We will try to control for variables such as gender and age of the speaker or writer.
5. Individual text files will be saved in UTF-8 format and stored in individual directories, hierarchically representing all registers.
6. Information on a variety of variables such as author names and gender, type of texts (i.e. full versus shorter samples), and the online source of the texts will be stored for further reference.
7. The corpus will be a balanced monitor corpus.
8. Future steps: adding written text from earlier time periods (1500–1800), adding spoken language samples and annotating the corpus for different grammatical markers.
These decisions have guided the data collection process. In the following sections, some of these considerations are discussed.
Corpus Type.
In order to enlarge the scope of our resource, we created a monitor corpus. A corpus that allows constant additions of new samples of data not only increases its size and representativeness as access to new data is gained, but also represents language through time and at its current state. Such a corpus can be used for a wide variety of linguistic measures, but also for typological studies and lexicography. Written text, as well as spoken language recordings and their transcriptions, is continuously being added to this resource.
Corpus Size and Representativeness.
Various factors can influence the ability to collect language data (e.g. time, data availability, funding, etc.). Classically, a representative corpus will include natural language samples that represent as many instances of language usage as possible. "Lengthier corpora are better than shorter corpora. However, even more important than the sheer length of the corpus is the range of genres included within it" (Meyer, 2002); thus, we aimed at including a wide range of genres from various language domains. Another aspect of the language we tried to capture is dialectal variety in both written and spoken language (i.e. Moldovan Romanian). Although the spoken dialects of Romanian in the two countries (i.e. Romania and the Republic of Moldova) differ due to the strong Russian influence on the spoken language in Moldova (Baar and Jakubek, 2017), the literary standard is similar (Minahan, 2013). Both dialects are included in the corpus. The initial goal for this project was to collect a minimum of 4 million words. This goal was attained (i.e. at its current state, the corpus has approximately 5,500,000 tokens of written data and approximately 100,000 tokens of spoken data); however, samples will continually be added. It is worth mentioning that, depending on the corpus purpose, some authors argue for small rather than large corpora. For example, O'Keeffe, McCarthy & Carter (2007) argue for the concept of small corpora as a means to encourage detailed analysis of each individual feature; however, a multi-purpose corpus requires larger and more representative samples of language data (i.e. quantitative measures such as word frequency, neighborhood density, affix productivity, etc. benefit from larger data samples). Biber (1990) found that many grammatical features are quite stable within 1,000-word samples; however, rarer grammatical features may still be underrepresented in such small samples.
The Balanced Romanian Corpus (BRC) has a collection of full texts for a wide range of registers. All registers contain over 100,000 words (see Table 1). Some registers are represented in a larger proportion, for both opportunistic and intentional reasons. We were able to include certain texts over others since we obtained permission from a limited number of sources, which contain only specific language genres. We also included lengthier samples in certain genres, depending on the genre characteristics. For example, novels tend to be longer than poems; since we decided to include full texts for all genres, it was necessary to allow a larger proportion of tokens in order to include a larger number of writings. The largest register in the corpus is Literatura Tradusa 'Translated Literature'. This was done intentionally, with the aim of mirroring the genres included in the BRC written originally in Romanian. As these works were originally written by non-Romanian authors, the original language may have influenced the translations, and we wanted to represent these peculiarities. The translated literature contains: Eseuri ('Essays'), Fabule ('Fables'), Fictiune ('Fiction'), Filosofie ('Philosophy'), Poezii ('Poems'), Romane ('Novels') and Teatru ('Theater'). The large number of tokens in the register of translated literature was a consequence of trying to add literature originally written within different genres and in different languages (i.e. English, French, Spanish and Russian; we plan to add texts from other languages as well). We tried to represent text samples from male and female authors equally. However, some registers (i.e. Romane 'Novels', Romane Istorice 'Historical Novels' and others) have predominantly male authors in Romanian literature; thus, balancing the genders represented was challenging. Table 1 below shows the number of authors in each register with some of their demographic characteristics.
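Per-register token counts of the kind reported in Table 1 can be reproduced from the corpus directory layout with a short script; a sketch assuming one directory per register of UTF-8 plain-text files and naive whitespace tokenization (the directory names here are hypothetical):

```python
from pathlib import Path

def tokens_per_register(corpus_root):
    """Count whitespace-delimited tokens in every .txt file, per register directory."""
    counts = {}
    for register_dir in sorted(Path(corpus_root).iterdir()):
        if not register_dir.is_dir():
            continue
        n = 0
        for txt in register_dir.rglob("*.txt"):
            n += len(txt.read_text(encoding="utf-8").split())
        counts[register_dir.name] = n
    return counts

# Tiny demonstration with a throwaway corpus layout.
import tempfile
root = Path(tempfile.mkdtemp())
(root / "Stiri").mkdir()
(root / "Stiri" / "article1.txt").write_text("Un text de test .", encoding="utf-8")
print(tokens_per_register(root))   # {'Stiri': 5}
```

A real count would use a proper Romanian tokenizer (such as the TTL module mentioned later) rather than whitespace splitting, so the numbers would differ slightly.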
The steps taken while collecting and editing the data were documented for further reference. Also, a list of the specific web pages and the names of the authors related to each document was separately created and stored. The process of getting the copyright permission was also documented.
Corpus Data
The sources for the text in the corpus were decided based on both "judgment" (i.e. trying to create a language repository that is balanced and representative across registers) and "convenience" (i.e. different registers of the language were selected but were also restricted by copyright permissions). For the spoken data, we have so far obtained the permission of bloggers and journalists/TV hosts Veronica Ghimp-Deineco and Lilia Lozovan Roşca, as well as producer and journalist Ana Danilescu to include some of their posts and TV show series. For each of the sources (including Audio Data), written permission was obtained from either the website owner or the author/writer/speaker or the producer of the sample. Below is a list of the web resources used:
Romanian Corpus Registers
Although the BRC at its present state includes a smaller proportion of transcribed spoken language, a wide variety of registers were chosen while collecting the written text. Within the registers, different authors and rubrics were included. For example, within the register Știri 'News', journal articles from the rubrics Justiție 'Justice', Social 'Social', Politic 'Politics', and Editoriale 'Editorials' were collected in comparable amounts; in Manuale 'Textbooks', pieces of text from Biologie 'Biology', Chimie 'Chemistry', Istorie 'History', and Muzică 'Music', as well as collections of Interviuri 'Interviews' and Bibliografii 'Biographies', are proportionally represented; Articole Cercetări 'Research Articles' contains text from articles about sports, mathematics, physics, pedagogy, and medicine; Poezii 'Poems' includes poems for children and different genres of poetry; Basme 'Fairy Tales' includes text written in prose as well as lyrics, etc. The corpus includes registers that were considered valuable for the corpus representativeness as well as for the documentation of the Romanian language at its present state. For example, while children's literature has not been widely considered in linguistics research (Knowles & Malmkjaer, 1996), Baker and Freebody (1989) analyzed texts from 163 primary school readers and noted that the frequency distribution of words in these texts shows different patterns compared to traditional corpora based on adult language samples (e.g. the word little was almost as frequent as the determiners in traditional English corpora). Hence, children's literature was considered an essential register of the language and was included in the BRC. The corpus includes two genres of children's literature, fairy tales and poetry, included within Basme 'Fairy Tales' and Poezii 'Poems', respectively. Along with fairy tales, many of the children's poems are only orally transmitted through generations.
These cannot be found in print; therefore, collecting the available online samples was considered crucial for language preservation purposes. We also obtained two notebooks of manually written songs (for both children and adults), collected during 1940–1950 in the Republic of Moldova by Olga Midrigan, transmitted from her parents and grandparents. These are written in Romanian, using the Cyrillic alphabet (see Figure 1). We are currently transcribing them using the Romanian alphabet, and we plan to include them in the corpus for preservation purposes. About 20% of the register Poezii 'Poems' is composed of children's poetry. Poetry is one of the richest compartments of children's literature in Romanian (Stanciu, 1968, 2000); also, children's poetry, especially lullabies, has influenced many musical genres (e.g. the doina, a free-rhythm, highly ornamented improvisational tune whose lyrics' common themes are melancholy, longing (dor), love for nature, complaints about life, religion, etc.), and hence has important cultural and linguistic connotations. Another register that was considered valuable for corpus representativeness was school textbook language samples. Since most children use textbooks throughout their education, concurrent with the development of language skills, representing this genre was considered necessary. An important language characteristic used frequently in textbooks is the imperative mood: Rețineți definiția 'Note the definition'. Textbook language samples were thus included as a separate genre in the corpus.
Corpus Text Structure
Constructing enormous collections of machine-readable text from online resources is fairly easy in certain languages; however, manually collecting, parsing, reformatting, and restricting text to be in line with the corpus text encoding conventions is a time-consuming process, as it was in our case. All texts were manually extracted from the online sources to ensure good data quality, to reduce the risk of including texts that lacked the proper use of Romanian diacritics (e.g. "ț", "ȋ", "ă", "ș", etc.), and to facilitate the recording of the metadata. Information about the name, gender, age and nationality of the author of each text, where available, was manually recorded and then included in the metadata files. The proofreading was also performed manually. This was necessary especially for registers such as Teatru 'Theater', where the names of the actors, scenes, and acts needed to be deleted, as well as for the register Manuale 'Textbooks', in which many rubrics contained numbers, exercises, tables, and other content that does not represent language use per se but rather mathematical and statistical facts; these were manually removed. Individual texts were converted to UTF-8 format and saved as plain text documents. Each file was named with the author's name and the work's title. Each text was stored in a separate file, in distinct directories per register, together with the associated metadata files. The files were organized in this way for ease of compilation and access. The audio data was transcribed by Romanian native speakers and saved in its own directory with the associated audio files and metadata. Audio data is being continuously added to the corpus. The files were then made available online through GitHub Pages. The Romanian Corpus, including transcribed spoken data, can be easily accessed and downloaded at the link: https://lmidriganciochina.github.io/romaniancorpus/.
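The storage convention described above (one UTF-8 plain-text file per work, named after author and title, grouped in per-register directories) can be sketched as a small helper; the paths and names below are hypothetical examples, not the corpus's actual files:

```python
from pathlib import Path
import re

def save_text(corpus_root, register, author, title, text):
    """Store one work as a UTF-8 plain-text file under its register directory."""
    # Replace runs of characters other than letters, digits, "_" or "-" with "_".
    safe = lambda s: re.sub(r"[^\w\-]+", "_", s).strip("_")
    register_dir = Path(corpus_root) / safe(register)
    register_dir.mkdir(parents=True, exist_ok=True)
    path = register_dir / f"{safe(author)}_{safe(title)}.txt"
    path.write_text(text, encoding="utf-8")
    return path

# Demonstration in a throwaway directory.
import tempfile
root = tempfile.mkdtemp()
p = save_text(root, "Basme", "Ion Creanga", "Povestea lui Harap-Alb",
              "A fost odata...")
print(p.name)   # Ion_Creanga_Povestea_lui_Harap-Alb.txt
```

Because Python's `\w` matches Unicode letters, Romanian diacritics in author names and titles survive the sanitization.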
Metadata
Storing information that describes the properties of the linguistic resource, together with variables containing information about every individual file, is essential both for accessing information of interest for specialized studies (e.g. psycholinguistic studies looking at gender differences and language use) and for indexing and searching the corpus. For the creation and storage of the metadata for the Romanian Corpus, we are using the Arbil tool, developed at the Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands (Withers, 2009), available at http://tla.mpi.nl/tools/tla-tools/arbil/. This tool allows the usage of standardized profiles and schemas for both spoken and written language resources within the CMDI (Component MetaData Infrastructure) framework. We are continually editing these as new data is added.
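Arbil manages CMDI records in XML; as a deliberately simplified stand-in (not the CMDI schema), a per-file sidecar metadata record covering the variables mentioned above might be serialized like this:

```python
import json
from pathlib import Path

def write_metadata(text_path, author, gender, register, source_url):
    """Write a sidecar JSON metadata record next to a corpus text file."""
    record = {
        "file": Path(text_path).name,
        "author": author,
        "gender": gender,
        "register": register,
        "source": source_url,
    }
    meta_path = Path(text_path).with_suffix(".meta.json")
    meta_path.write_text(json.dumps(record, ensure_ascii=False, indent=2),
                         encoding="utf-8")
    return meta_path

# Demonstration with a throwaway file; author/register values are illustrative.
import tempfile
txt = Path(tempfile.mkdtemp()) / "author_title.txt"
txt.write_text("text", encoding="utf-8")
m = write_metadata(txt, "Veronica Ghimp-Deineco", "F", "Spoken",
                   "https://example.org")
print(m.name)   # author_title.meta.json
```

Such sidecar records make it easy to filter the corpus by author gender or register before analysis, which is the kind of specialized query the metadata is meant to support.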
Corpus Annotation Aims
We are in the process of starting to annotate the corpus, using the text preprocessing module TTL (Ion, 2007), developed in Perl. The module is available at http://ws.racai.ro/ttlws.wsdl and offers Sentence Splitter, Tokenizer, Tagger, Lemmatizer, and Chunker procedures for Romanian (Tufiş, Ion, Ceauşu and Ştefănescu, 2008). We also envision parse tree annotation, semantic labeling and affix annotation in future steps.
Conclusions and Further Directions
Computerized corpora in the different languages spoken around the world have important implications for linguistic theory. Although not all questions can be answered by studying language as represented in corpora, corpora can greatly expand our understanding of language and its complex facets. Evidence from corpora allows researchers to document natural language, study language typology and examine the effects of language-dependent factors on language processing. Corpora also have important applications in the development of natural language processing systems. Thus, building resources in languages that are understudied is crucial. This project's goal was to enable linguists to study Romanian and its unique grammatical characteristics by creating a reliable repository of text that represents a wide variety of registers of this language. Although the corpus presented in this project is one of the largest available for Romanian, we want to continue enlarging this resource, specifically the spoken data. Compiling large samples of spoken data is not an easy task: accurately transcribing spoken language can be time-consuming and expensive, and it requires native-speaker knowledge. However, spoken language is by far the mode in which language is most frequently used, and it has its own distinct characteristics. In written language, authors tend to clean the text in somewhat unnatural ways: the ideas are well formed and well organized. When producing spoken language, by contrast, speakers tend to make many repetitions (i.e. the same words are said twice, or even more than two times), utter unfinished thoughts (Meyer, 2002), and produce various speech errors. Spoken language tends to have generally shorter sentences, with words that may not appear at all in written language (i.e. aha is used a lot in conversational Romanian to show 'agreement' or 'approval', while it is not a word that appears in written text).
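Frequency differences between spoken and written samples of the kind discussed above can be measured with a simple frequency list; a sketch with invented toy snippets (not corpus data), using `collections.Counter`:

```python
from collections import Counter
import re

def freq(text):
    """Case-insensitive word-frequency list from a raw text sample."""
    return Counter(re.findall(r"\w+", text.lower()))

spoken = "aha aha da da da pai stii aha"        # invented conversational snippet
written = "limba romana este o limba romanica"  # invented written snippet

f_spoken, f_written = freq(spoken), freq(written)
print(f_spoken.most_common(2))   # [('aha', 3), ('da', 3)]
print(f_written["aha"])          # 0 -- absent from the written sample
```

Run over the full written and transcribed-spoken sub-corpora, the same comparison would surface conversational items like aha that rarely or never occur in written registers.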
Language dialects are yet another reason why spoken language may have different characteristics than written language. Brysbaert and New (2009) found that word frequencies calculated from movie and television subtitles were better predictors of word-processing performance than frequencies derived from written text. Thus, a further direction for the development of the corpus is adding an amount of spoken data equal to the written samples. Another step is adding text samples representing different time periods in the history of the Romanian language (i.e. current Romanian orthography is little more than a century old (Mallinson, 1988), thus writings representing the original language forms may be found particularly in print). Further work is still needed in order to make the resource as easy to use as possible; the current corpus is at its initial stage of annotation. Many corpora used in various linguistic studies are still unannotated; however, working with annotated corpora makes the process of information retrieval easier and faster. One further step of the present project is annotating the text and transcribed spoken language for various types of grammatical information. Some of the target annotations are word-class annotation as well as morphological annotation, including affix annotation for different types of affixes of the language; in addition, parse tree annotation and semantic labeling are also considered. The BRC project aims to create a new, reliable resource for Romanian, a language that has unique structural characteristics but still remains understudied.
|
2020-05-29T13:12:15.120Z
|
2020-05-01T00:00:00.000
|
{
"year": 2020,
"sha1": "d4217a65542d1e8e66c4ca24b6b703ca0825315f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "47680b7fdf316eb00f144b6fcf729bcff57908ab",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
133254987
|
pes2o/s2orc
|
v3-fos-license
|
Using GIS in an Earth Sciences field course for quantitative exploration, data management and digital mapping
Abstract Field courses are essential for subjects like Earth Sciences, Geography and Ecology. In these disciplines, GIS is used to manage and analyse spatial data, and offers quantitative methods that are beneficial for fieldwork. This paper presents changes made to a first-year Earth Sciences field course in the French Alps, where new GIS methods were introduced. In preparation, students use GIS to explore their research area using an elevation model and satellite images, formulate hypotheses and plan the fieldwork. During the fieldwork, a pilot group managed their field observations using GIS and made digital maps. Students praise the use of quantitative digital maps in the preparation. Students made use of the available techniques during the fieldwork, although this could be further intensified. Some students were extra motivated by the technical nature of the work as well as the additional analytical possibilities. The use of GIS was experienced as a steep learning curve by students, and not all staff members are confident in supervising students using GIS, which calls for sufficient preparation and training of both students and staff. The use of GIS adds abstract analyses and quantitative assessment, a learning style complementary to fieldwork that mostly focuses on practical skills.
Introduction
Fieldwork is an integral part of education in natural sciences and disciplines with a strong spatial component, e.g. Earth Sciences, Ecology, Anthropology and Human Geography (Favier & van der Schee, 2009;Fuller, Edmondson, France, Higgitt, & Ratinen, 2006;Hope, 2009;Krakowka, 2012). Field courses vary in style and learning goals, for example teachers may show and explain phenomena to students, or students can have more freedom in a research-oriented fieldwork (Kent, Gilbertson, & Hunt, 1997;Remmen & Frøyland, 2014;Tonts, 2011).
Geographic Information Systems (GIS) comprise spatial datasets and software to collect, manage and analyse this data. GIS is valuable for education since it may help students to identify and analyse spatial patterns (Kim & Bednarz, 2013;Lee & Bednarz, 2009;Sinton, 2009). In many Geography, Earth Science and similar programs, GIS methods are being taught. However, for programmes with no GIS courses, or programmes where such courses appear late in the curriculum, students participating in fieldwork do not have a background in GIS methodology. In such cases, easy-to-use GIS-methods are a valuable addition to individual courses.
Fieldwork is considered valuable for teaching as it incorporates active learning, or learning by doing (Klein, 2003;Remmen & Frøyland, 2014;Revell & Wainwright, 2009), and students learn better due to the generally positive affection toward fieldwork (Boyle et al., 2007). Nevertheless, field courses are often quite qualitative in nature, and adding quantitative GIS methods introduces a different learning style which may further enhance the students' learning experience (Healey & Jenkins, 2000;Kolb, 1984). Previous work has shown that using novel technologies engages students, and that the use of GIS improves students' sense of spatial concepts (Carlson, 2007).
GIS can be used for several aspects of a fieldwork: (1) exploration and preparation of the fieldwork by using existing maps, elevation data and remote-sensed imagery; (2) collection of data by using mobile-GIS systems; (3) management and processing of collected data during the fieldwork to make or adjust a fieldwork campaign; (4) data analysis and creating maps; and (5) post-fieldwork analyses (e.g. modelling) with information collected in the field (Karssenberg, Burrough, Sluiter, & de Jong, 2001). Warburton and Higgitt (1997) state that IT-supported learning is useful for preparation of a fieldwork. However, field equipment now permits the use of GIS during all phases of a fieldwork (Wagtendonk & De Jeu, 2007). This paper reports on changes made to a first-year field course in the Earth Sciences programme at Utrecht University. In this field course, students collect their own data to reconstruct the geology and geomorphology. The aim of these changes was to improve this fieldwork by using GIS methods for the preparation, data management and mapping. For this purpose, a new tutorial was made where students used QGIS and Google Earth to explore their fieldwork site prior to the actual fieldwork, which focusses on students with no prior GIS experience. Furthermore, during the fieldwork, a pilot group consisting of about half of the students used GIS to manage and analyse their field data, and to make digital maps instead of hand-drawn maps.
The changes made to the course were evaluated using questionnaires and interviews with students, and experiences of involved staff-members. This evaluation focusses on two research questions. Firstly, what is the benefit of using GIS to the learning goals of the fieldwork? We hypothesize that (1) the students make better hypotheses related to their fieldwork site when using GIS in the preparation; and (2) that the students' field data are better structured, which helps them see spatial relations and relations with existing data (Bearman, Jones, André, Cachinho, & DeMers, 2016). The second research question reads: does the learning experience of the students improve by using GIS during fieldwork? Here we hypothesize that (1) the use of GIS appeals to a different learning style, which enhances students learning (Healey & Jenkins, 2000); and (2) the use of GIS motivates students as this is experienced as a modern approach (Carlson, 2007).
Implementation of GIS methods in a field course
This section summarizes the structure and learning goals of a first year field course and the changes we made to this course. This course is part of the Bachelor programme Earth Science at Utrecht University. The first three quarters of the first year consist of theoretical courses and a four-day excursion, and there is no GIS-course in this period. The field course under consideration takes up the entire final quarter.
Structure and learning goals of the field course
The field course consists of a preparation period of four weeks, followed by four weeks of fieldwork and two weeks back at the campus for wrapping up. Student-pairs are assigned a unique research area in the region between Veynes and Larange-Monteglin, France ( Figure 1).
In the preparation period, executed on campus, students learn fieldwork methods and the general geology and geomorphology of the area. Furthermore, students explore their research area (A1 in Table 1) and formulate hypotheses (A2) using aerial photographs and a topographic map.
During the fieldwork period, student-pairs spend two-and-a-half weeks researching their own area; the remaining time is spent on excursions and field activities that are not related to the students' own research area. The goals of the research are to learn to perform observations (B1 in Table 1), interpret the spatial pattern in these observations (B2) and their relation to existing data (B3), and to reconstruct the geological and geomorphological history (B4). Students collect about 100 field observations, draw maps and write a report. The fieldwork is driven by setting up working hypotheses, which students test with field observations.
New GIS methods in fieldwork preparation
In the fieldwork preparation period, new tutorials were introduced where students used Google Earth (Google Inc, 2013), and satellite imagery and a digital elevation model (DEM) in QGIS (QGIS Development Team, 2015), see A1 in Table 1. These, and the existing aerial photograph tutorial, focused on exploration of the research area and resulted in a set of maps (A2), working hypotheses, and a list of locations to visit in the field (A3). (Full text of these tutorials is available in Marra, 2016a). In the aerial photograph and Google Earth tutorials, students formed hypotheses on the geology and geomorphology of their fieldwork area. The aerial photos give an overview of the research area, while Google Earth gives more flexibility in viewing at different scales and angles. Google Earth is considered an engaging tool valuable for geoscience education in general (Bailey, Whitmeyer, & De Paor, 2012), and can bridge between the broader content of classroom lectures and the students' research areas (Monet & Greene, 2012). Students made a fieldwork plan consisting of interesting locations together with their expectation of what to find there in the field.
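A fieldwork plan of locations to visit can be loaded back into Google Earth as a KML file. A minimal sketch of generating such a file (the stop names and coordinates are hypothetical, not actual course data):

```python
# Build a minimal KML document of planned field locations for Google Earth.
planned_stops = [
    ("Outcrop A", 5.82, 44.53),   # (name, lon, lat) — illustrative values
    ("Terrace B", 5.85, 44.51),
]

placemarks = "\n".join(
    "<Placemark><name>{}</name><Point><coordinates>{},{},0"
    "</coordinates></Point></Placemark>".format(name, lon, lat)
    for name, lon, lat in planned_stops
)
kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
       + placemarks + "\n</Document></kml>")
print(kml)
```

Saving this string as `plan.kml` and opening it in Google Earth shows each planned stop as a clickable placemark; note that KML expects longitude before latitude.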
The QGIS tutorial focussed on quantitative data. Students used a DEM at 30 m resolution (SRTM, USGS, 2015a) and a pre-processed Landsat-8 satellite image (USGS, 2015b) in both natural colours and false-colour to show vegetation patterns. Students made a slope-map from the DEM and described the vegetation patterns using the satellite imagery ( Figure 2). With this information, they made a digital map with hypothesised geomorphological units.
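The slope map students derive from the DEM in QGIS boils down to finite differences on the elevation grid. A minimal sketch of that computation (the tiny synthetic DEM is illustrative, not the SRTM data used in the course):

```python
import math

def slope_degrees(dem, row, col, cellsize):
    """Slope at an interior cell from central differences on a DEM grid (metres)."""
    dzdx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cellsize)
    dzdy = (dem[row + 1][col] - dem[row - 1][col]) / (2 * cellsize)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

# A synthetic DEM rising 1 m per cell eastward: a uniform 45-degree slope.
dem = [[0, 1, 2],
       [0, 1, 2],
       [0, 1, 2]]
print(slope_degrees(dem, 1, 1, cellsize=1.0))  # ≈ 45.0
```

GIS packages apply the same idea over every cell of the raster (often with the 3×3 Horn kernel rather than plain central differences), which is why the slope map is essentially "a few button presses" on top of the DEM.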
New GIS methods during fieldwork
During the actual fieldwork, data collection in the field remained the same, using a paper notebook, GPS receivers, a geological compass, etc. In addition, students managed their data with QGIS at the end of each day in their accommodation (B1, Table 1). They digitized their fieldbook using an Excel spreadsheet (Figure 3(a)) in combination with QGIS. (Templates are available in Marra, 2016b). With this data, students were encouraged to make maps of spatial patterns in rock type, grain size, soil properties, etc., and to overlay their observations on the topographic map, DEM and satellite images (B2, Figure 3(b)).
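A digitized fieldbook like this can be turned into a point layer loadable in QGIS. A minimal sketch converting a CSV export to GeoJSON (the column names are assumptions for illustration, not the actual course template):

```python
import csv, io, json

# Hypothetical CSV export of a digitized fieldbook (one observation per row).
fieldbook_csv = """obs_id,lat,lon,rock_type,grain_size_mm
1,44.53,5.82,limestone,0.1
2,44.51,5.85,marl,0.02
"""

features = []
for row in csv.DictReader(io.StringIO(fieldbook_csv)):
    features.append({
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": [float(row["lon"]), float(row["lat"])]},
        "properties": {"obs_id": row["obs_id"],
                       "rock_type": row["rock_type"],
                       "grain_size_mm": float(row["grain_size_mm"])},
    })
layer = {"type": "FeatureCollection", "features": features}
print(json.dumps(layer, indent=2))
```

Written to a `.geojson` file, this layer can be dragged straight into QGIS and styled by any attribute (rock type, grain size, etc.) to reveal spatial patterns in the observations.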
Students made digital maps (Figure 3(c)) instead of hand drawn maps. By drawing these maps in QGIS, the datasets and observations points are directly available as an aid to make these maps (B3), and students could make use of additional data sources to combine with their own field data to confirm or reject hypotheses (B4).
Evaluation methods
To evaluate the intended benefits of the use of GIS in the field course, students filled in questionnaires and a focus group was interviewed. All students used the new components of the preparation period (Section New GIS methods in fieldwork preparation), and a pilot group of about half of the number of students used the new workflow during the fieldwork (Section New GIS methods during fieldwork). A questionnaire was taken after the fieldwork preparation period. This questionnaire was intended to compare how the students experienced the classic and new methods, and to what extent the preparation contributed to the intended learning outcomes. A second questionnaire was taken after the fieldwork period. In this questionnaire the students had to reflect on the usefulness of the preparatory assignments for their fieldwork and use of GIS during the fieldwork. With the latter questionnaire, we compared students that used GIS and those that relied on classic methods during their fieldwork.
We asked the students questions on the following topics: (1) How the students experienced the preparatory tutorials, whether these contributed to the students understanding of their fieldwork area prior to the fieldwork, and which elements of the preparation contributed to the fieldwork in retrospect (learning goals A1, A2, A3 in Table 1). (2) How students collected and managed their data, how the new digital methods were used and if this contributed to the students' understanding of the landscape and their maps (B1, B2, B3). (3) Which data sources the students used, and whether this contributed to the fieldwork planning, the students' understanding of the landscape and the quality of their maps (B2, B3). (4) How students and their supervisors experienced the use of the new methods.
The questionnaires consisted of 4-step Likert rating scales (Likert, 1932) and free-text questions. A 4-step scale was used to avoid students picking the neutral option when they had a weak opinion about the subject, and to enable a strong visualisation of the results based on the method of Robbins, Heiberger, Court, and Hall (2011). Using a 4-step scale does not decrease the reliability of such a rating scale (Lozano, García-Cueto, & Muñiz, 2008).
Differences in the mean of the Likert-scale responses were assessed with a dependent-samples t-test for the difference between questions and with an independent-samples t-test for the differences between GIS and non-GIS users. Although the response scale could be considered non-continuous, a t-test is still a powerful tool to evaluate rating scales (de Winter & Dodou, 2010).
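As an illustration of such an independent-samples comparison, a minimal sketch of Welch's t statistic on two hypothetical sets of Likert ratings (the data are invented, not the course results):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (e.g. Likert ratings)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

gis_users = [3, 4, 3, 4]   # hypothetical 4-step Likert responses
non_gis   = [2, 3, 2, 3]
print(round(welch_t(gis_users, non_gis), 3))  # 2.449
```

A p-value would then be obtained from the t distribution with the Welch-Satterthwaite degrees of freedom; statistics packages (e.g. `scipy.stats.ttest_ind` with `equal_var=False`) bundle both steps.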
A total of 68 (out of 70) students completed the evaluation after the preparation period and 64 students completed the questionnaire after the fieldwork; 23 of them were part of the pilot group that used GIS. Six students from the pilot group were interviewed by a person not involved in the fieldwork and unfamiliar to the students. The interviews were anonymous to promote honest responses. In these interviews, students elaborated on their experiences, difficulties and suggested improvements.
Furthermore, the experiences and opinions of six members of the staff involved in the field course as supervisors were used to reflect on the students' evaluation.
Fieldwork preparation tutorials
The Google Earth tutorial received the highest scores (average = 3.5) regarding how interesting students found the tutorials, although the other tutorials followed closely (3.3, 3.4, Figure 4). Similarly, when asked if the tutorials contributed to their fieldwork preparation, students gave the highest score to the Google Earth tutorial (3.4), followed by the GIS-tutorial (3.3).
After the fieldwork, we asked the students if the preparatory assignments helped them with their fieldwork. The average score for the GIS tutorial was the highest for both GIS and non-GIS users, compared to the aerial photo and Google Earth tutorials (Figure 5). Interestingly, students that used GIS gave a lower score to the aerial photograph tutorial than non-GIS users (2.5 and 2.8), and a higher score to the GIS tutorial (3.7 and 3.2) (Figure 5).
On the open questions, the large majority (48 out of 68 completed surveys) of the students answered that seeing the area in 3D is the strongest point of using aerial photographs. For Google Earth, most students (24/68) identified familiarizing themselves with the research area as the most useful part of this tutorial, and many students praised the flexibility and the different angles from which they could view the research area (17/68) and finding locations to visit (16/68) (Figure 6). For the GIS tutorial, most students indicated seeing many maps as the most useful part of the tutorial (24/68), followed by making a geomorphological map (16/68) and the use of quantitative data (10/68). Specifically, for making a map students praised the synthesis of hypotheses during this exercise; examples of responses are: "Getting a final, clear view/hypothesis on the geomorphology of the area before going into the field" and "Seeing different maps and combining them really helped us to make hypotheses of the area." Regarding the use of quantitative data, a student responded: "With maps like elevation and slope, you can already test hypotheses because we used real values instead of only ideas." Points for improvement were provided by the students mainly for the aerial photo tutorial; students indicated they wanted better materials (20/68) and instructions (18/68) (Figure 6). For the other two tutorials, most students left this question blank or filled in "None" or a similar response. For the GIS tutorial, several students requested more explanation of the software (10/68), mostly to deal with common issues like changing or deleting polygons. The remaining responses covered diverse topics.

Figure 5. Students' responses to questions asked after the fieldwork, concerning the preparation, grouped by students that did or did not use GIS during the fieldwork. Note: ** indicates a significant difference in mean of the two groups at the 99% confidence level.
Fieldwork planning
Making a fieldwork plan in the preparation period received the lowest appreciation from the students (Figure 4). In the interview, students indicated there was not enough time to make a complete plan and to receive feedback. The plan describes places to visit for the first few days; however, in the field these plans quickly change. As one of the students stated: "You think of better places to go, it is difficult to make this assessment on forehand", and: "Some places turn out to be inaccessible by cliffs or stingy bushes." Although the fieldwork plans and hypotheses were quickly superseded in the field, supervisors felt that making and presenting the fieldwork plan was valuable. This part of the preparation made students think about their approach in the field, and made it easier to discuss steps to prove or reject hypotheses during the fieldwork.
Data collection and mapping
Students that used GIS during the fieldwork responded positively to the statements that GIS improved their understanding of their collected data (3.4) and the quality of their maps (3.2). However, there are no large differences in responses between students that used GIS and those that did not (Figure 7). Assessing these differences based on this questionnaire is difficult, because students cannot reflect on the difference in method as they have not experienced both. Students in the focus group were asked for their opinions on the use of GIS, which yielded enthusiastic responses, e.g. "GIS is a real improvement, otherwise you have to draw all points manually and use transparent paper. We already spend too much time on things that are so much easier with a computer." In the students' reports, we see that the GIS users made use of elevation profiles derived from a DEM using QGIS as an alternative to constructing such a profile from the contour lines of the topographic map. Besides elevation profiles, students used the DEM and slope map as an aid to map the boundaries of geomorphological units that were difficult to observe in the field. None of the students used the provided satellite images in their report to support their findings. These could be used to classify vegetation and to complement field observations on vegetation. To make students use the satellite image better, they would need to be asked to make a quantitative assessment, for example by using a vegetation index based on infrared data compared to observed vegetation properties.

Figure 7. Students' responses to questions regarding the fieldwork period, grouped by students that did or did not use GIS during the fieldwork. Note: ^ indicates a significant difference in mean of the two groups at the 90% confidence level.
Technical quality and issues
Several students indicated in the questionnaire that they wanted more instruction on QGIS (8 out of 23 from the pilot group). Some of these students experienced a lack of skills: "If we practiced more with GIS, everybody could have made much more beautiful maps", while others indicated they wanted to do more than required: "I wish they (supervisors) explained more of the functions of QGIS, because there are so many more things you could do to make fieldwork easier." Supervisors indicated that hand-drawn maps were in general more appealing. However, quality differences were present for both methods, which shows that the quality difference is not merely the result of a digital or hand-drawn map.
Five of the 23 students from the pilot group indicated that they had experienced technical difficulties while using QGIS during fieldwork. There were three occasions where the transfer of the digital data-set from student to supervisor failed. This is a result of the data structure of a GIS project, which consists of multiple data files with one central project file referencing them. As the students did not fully grasp this structure, only part of the data was transferred in several cases.
Students indicate that using GIS is hard at first: "The first time it is like magic, you have no idea what happens, it is not the easiest software to work with." Also, students indicate that not all supervisors could equally help with GIS and sometimes had to forward problems to more knowledgeable members of staff.
The main concern supervisors had is that they do not feel confident supervising students using GIS or helping with technical difficulties. This is an important point when making such changes; it requires a good preparation of the students so they hardly need technical help, and training the supervisors on the software and techniques that the students use.
Motivation of using GIS
Fieldwork in general is a positive experience for students, as one of the students stated: "… fieldwork is essential because the type of skills are not learned from a book", or "You now see everything we learned in the field, therefore you start looking at it differently." Students indicated that learning GIS is important for their career prospects, since many job vacancies ask for GIS skills: "Skills you learn should prepare you for labour market". Similarly, students indicated that they prefer to use the same methods as the staff use for their research to manage and process data. In that perspective, using advanced methods can be motivating for the students, which may help them overcome the hurdle of learning them.
Discussion
In this paper, we presented changes made to a research-focussed field course to incorporate the use of GIS, and the results of the evaluation thereof. In this section, we summarise and elucidate on the intended learning benefits, and provide recommendation for further improvements and similar courses.
Benefits and recommendations for using GIS in a field course
Preparing for fieldwork using Google Earth and QGIS in addition to the classic stereo-aerial photographs, was considered valuable for their fieldwork and a good experience by the students. Students indicate the value of exploring their research area with quantitative data using GIS (learning goal A1 in Table 1).
In addition to just exploring, students made hypotheses and a fieldwork plan in the preparation phase (A2, A3 in Table 1). However, these were quickly superseded by better hypotheses once in the field, and several planned routes turned out to be inaccessible in reality. This did not speed up or improve the quality of the fieldwork per se, but we consider it an exercise in formulating hypotheses. For improvements and future cases, making such plans and hypotheses is more valuable when the students already have some experience with fieldwork and better know what to expect.
Students produced well-structured data-sets using GIS during their fieldwork (B1), but the evaluation remains unclear about whether this resulted in a better understanding of the spatial patterns compared to students that did not use GIS (B2 and B4). Most students made use of elevation profiles, and several students explicitly used spatial patterns of grain sizes to support their interpretation, but none made use of the satellite image showing vegetation patterns. In the presented case, students were hinted at using these data sources, but were not given step-by-step instructions. For first-year students, we recommend requiring the use of such GIS analyses in the assignment to promote their usage.
Making a map digitally was not faster or slower than the analogue alternative (B3). The quality of digital and analogue maps varied between students, with hand-drawn maps being more appealing in general. We recommend a good preparation to familiarize the students with the software for a smooth experience in the field and to improve visual quality of the maps.
Learning experience and motivation of students
GIS is increasingly becoming a valuable tool in geography research and an important skill in industry. The number of GIS courses is increasing, but courses focused on GIS alone do not provide a realistic view of its applications (Şeremet & Chalkley, 2014;Sinton, 2009). Using GIS as part of a fieldwork is an easy way to introduce GIS in a relevant context rather than in a GIS-specific course, and it enhances the students' view of the possibilities of GIS.
Using GIS techniques has a motivating effect on some students. We noticed that tech-savvy students explored additional functionality and invested more time in making their maps. Students indicated they liked the use of digital and quantitative data as this gave a different view of their fieldwork area. It seems that both the use of new techniques and the additional analytical opportunities are a source of motivation (Carlson, 2007;Welsh et al., 2015). In addition, students indicated they also liked using GIS because it is a relevant skill for future research and for the labour market.
With GIS data, students had to formulate hypotheses and think about what they would find in the field before going there. This exercise forced them to combine theoretical knowledge from lectures and abstract data into an expectation of what to find in the field.
The latter two points are associated with skills like abstract thinking and reflection, i.e. the assimilator learning style (Healey & Jenkins, 2000;Kolb, 1984). These skills are a valuable addition to a fieldwork, which is often dominated by practical skills like measuring, reporting observations, planning, or learning by experiencing in general. With the use of GIS, different types of learning styles are combined in a field course, which enhances the learning experience of the students (Healey & Jenkins, 2000).
Technical issues and staff-support
In our case, several students experienced technical issues, which in most cases reflect a lack of understanding of and practice with the software, just as students may have difficulties using advanced analyses with pencil and paper. The latter case is not deemed a "technical" issue, but likewise requires the help of knowledgeable staff. Students indicate that GIS has a steep learning curve, so our advice is to reserve enough time for practice in the preparation period and not to give up or be tempted to revert to non-GIS methods.
Several members of staff raised concerns about not being able to help students with questions on GIS, similar to the findings of Fletcher, France, Moore, and Robinson (2007), who indicated that a limited understanding of new technologies is a barrier to incorporating these in field courses. This is especially an issue for field courses where knowledgeable staff members are not available for support. Possible solutions are to train all involved staff in the same techniques as the students use, and to make use of distant-learning techniques like screencasts (e.g. Ooms et al., 2015) for common procedures and remote supervision to reduce the technical demand on on-site staff members.
Additional opportunities and recommendations
The teaching materials described in this paper were made for a first-year Earth Sciences field course in the French Alps. In this section we describe additional opportunities and recommendation for using these materials for similar initiatives.
The tutorials for preparing a fieldwork are based on the use of Google Earth and QGIS. Google Earth can be used in a wide range of topics (Bailey et al., 2012). In addition to global imagery and elevation data, many areas contain historic and ground-based (street-view) imagery. For geological exploration, Google Earth combined with an additional elevation data source can be used to map geological contacts and estimate strike and dip angles (Hasbargen, 2012). Google Earth is particularly useful for showing both a broad overview and local details, and can be a strong tool to combine insights from the general landscape with information from small research areas.
We chose to use QGIS as it is relatively easy to use for basic analyses and digitizing, and it is available for free and without a connection to a licence server. There are many alternative choices of GIS software (GisGeography, 2016), with ArcGIS and GRASS being widely used. To mitigate issues with demanding hardware requirements and licencing during a field trip, ArcGIS could be run in a cloud service (Mui, Nelson, Huang, He, & Wilson, 2015).
For the QGIS tutorial, a 30-m SRTM elevation model and a 15-m Landsat-8 image were used (USGS, 2015a, 2015b). These data are available worldwide and are good for getting an overview of areas several kilometres across. The datasets were pre-processed for the fieldwork, but the students did the visualisation and the derivation of a slope map themselves. Although this is merely pressing a few buttons, the students' positive reaction to the slope map may stem from the fact that they made and visualised the data themselves. The 30 m resolution was just enough for our case, but it mainly shows the general structure of the fieldwork areas (a few km in size). For future use, we recommend a higher-resolution DEM. A higher-resolution model increases the detail and quality of the maps and derived hypotheses, which may therefore be of higher value in the field. Furthermore, advanced GIS techniques can be used to identify morphological features (Seijmonsbergen, Hengl, & Anders, 2011), or the vegetation could be further explored using an NDVI index (Tucker, 1979).
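The NDVI mentioned here is a simple normalised band ratio between near-infrared and red reflectance. A minimal sketch (the reflectance values are illustrative, not measurements from the course imagery):

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index from reflectance values in [0, 1]."""
    return (nir - red) / (nir + red)

# For Landsat-8, NIR is band 5 and red is band 4. Healthy vegetation reflects
# strongly in NIR and absorbs red, giving NDVI values well above zero.
print(round(ndvi(0.5, 0.1), 3))   # 0.667 — dense vegetation
print(round(ndvi(0.15, 0.12), 3)) # 0.111 — sparse cover or bare soil
```

In a raster workflow the same formula is applied per pixel (e.g. with the QGIS raster calculator or a NumPy array expression), yielding a vegetation map that students could compare with their field observations.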
In the presented case, we used a workflow with a paper notebook in the field and digital methods for processing field data afterwards on a daily basis. We chose not to use mobile devices as we anticipated issues with batteries and adverse weather conditions. Nevertheless, mobile GIS systems on smartphones, tablets or PDAs (Wagtendonk & De Jeu, 2007) could be used for this purpose, especially with rugged, weatherproof equipment. Using mobile GIS integrates data collection and data processing, and students can access digital maps in the field, which may help to relate data to reality.
Conclusions
This paper describes newly introduced GIS methods in a first-year field course to the French Alps. Students explored their fieldwork area using a DEM, satellite image and aerial photographs. During the actual fieldwork, a pilot group stored and analysed their field observations using QGIS and drew digital geomorphological maps. The new approach was evaluated using questionnaires and a group interview. We draw the following conclusions:
• Students praise the exploration of their fieldwork area using Google Earth and QGIS, and both the students that did and did not use GIS during the fieldwork indicate the preparatory GIS assignment as the most valuable for their fieldwork.
• Plans and hypotheses made in the preparation were a valuable exercise, although quickly superseded by better ones when the students were in the field.
• Students utilized GIS during fieldwork to explore spatial patterns in their data and the relation with existing GIS data, although it remains unclear if this improved the quality of the students' interpretations.
• Students were in general positive about the use of GIS. They indicate that working with a skill that is relevant for future work was motivating, and tech-savvy students spent more time and effort on their work.
• Using GIS introduces skills related to abstract thinking and reflection, which is a valuable addition to the students' learning experience in a field course that otherwise consists of mainly practical skills.
• A few students experienced technical issues, mainly with transferring GIS data to supervisors. These issues can be resolved with more training and step-by-step instructions before and during the fieldwork.
• Some supervisors had concerns about not being able to help students with specific problems with the software. These concerns call for a good preparation of the students, as well as training of the supervisors.
|
2019-04-26T14:26:44.155Z
|
2017-03-21T00:00:00.000
|
{
"year": 2017,
"sha1": "a2e67b16f50e9e6d021acfec77710bdebbbc0735",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/03098265.2017.1291587?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "a714276d8afc5c8b0cd98033bb243392f67ecb2c",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Geography"
]
}
|
2098554
|
pes2o/s2orc
|
v3-fos-license
|
Increased Expression of Angiogenic and Inflammatory Proteins in the Vitreous of Patients with Ischemic Central Retinal Vein Occlusion
Background Central retinal vein occlusion (CRVO) is a common disease characterized by a disrupted retinal blood supply and a high risk of subsequent vision loss due to retinal edema and neovascular disease. This study was designed to assess the concentrations of selected signaling proteins in the vitreous and blood of patients with ischemic CRVO. Methods Vitreous and blood samples were collected from patients undergoing surgery for ischemic CRVO (radial optic neurotomy (RON), n = 13), epiretinal gliosis or macular hole (control group, n = 13). Concentrations of 40 different proteins were determined by an ELISA-type antibody microarray. Results Expression of proteins enriched in the vitreous (CCL2, IGFBP2, MMP10, HGF, TNFRSF11B (OPG)) was localized by immunohistochemistry in eyes of patients with severe ischemic CRVO followed by secondary glaucoma. Vitreal expression levels were higher in CRVO patients than in the control group (CRVO / control; p < 0.05) for ADIPOQ (13.6), ANGPT2 (20.5), CCL2 (MCP1) (3.2), HGF (4.7), IFNG (13.9), IGFBP1 (14.7), IGFBP2 (1.8), IGFBP3 (4.1), IGFBP4 (1.7), IL6 (10.8), LEP (3.4), MMP3 (4.3), MMP9 (3.6), MMP10 (5.4), PPBP (CXCL7 or NAP2) (11.8), TIMP4 (3.8), and VEGFA (85.3). In CRVO patients, vitreal levels of CCL2 (4.2), HGF (23.3), IGFBP2 (1.23), MMP10 (2.47), TNFRSF11B (2.96), and VEGFA (29.2) were higher than the blood levels (vitreous / blood, p < 0.05). Expression of CCL2, IGFBP2, MMP10, HGF, and TNFRSF11B was preferentially localized to the retina and the retinal pigment epithelium (RPE). Conclusion Proteins related to hypoxia, angiogenesis, and inflammation were significantly elevated in the vitreous of CRVO patients. Moreover, some markers known to indicate atherosclerosis may be related to a basic vascular disease underlying RVO. This would imply that local therapeutic targeting might not be sufficient for a long term therapy in a systemic disease but hypothetically reduce local changes as an initial therapeutic approach.
Introduction
Retinal vein occlusion is the second most common vascular eye disease and causes vision loss due to macular edema, retinal bleeding and ischemia [1]. The worldwide prevalence is estimated at 1:1250 [2]. Central retinal vein occlusion (CRVO) is less frequent than branch retinal vein occlusion (BRVO) but results in greater retinal damage.
Visual acuity (VA) prognosis in CRVO is significantly improved by treatment of macular edema either with intravitreal steroids or anti-VEGF therapeutics that address inflammatory and VEGF-driven ocular changes [3]. Intravitreal anti-VEGF treatment leads to significant visual gain of 15 letters or more in up to 60% of the patients (47% ranibizumab [4], 55% aflibercept [5], 60% bevacizumab [6]) at one year. However, final VA of 20/40, sufficient to allow for driving and reading, is only reached in every second patient (47% ranibizumab [4]). This underlines the need for a detailed characterization of risk factors and further improvement of treatment strategies.
Known risk factors for RVO are advanced age [1], glaucoma and systemic diseases, especially components of the metabolic syndrome such as diabetes mellitus, hypertension and hyperlipidemia [7]. Regarding diabetes, patients with end-organ damage from diabetes have a significantly increased risk of CRVO, while those without do not [7]. Hyperlipidemia leads to atherosclerosis, which represents a later state of the disease. Atherosclerosis of the central retinal artery was found in association with CRVO [8]. The hypothesis that atherosclerosis is associated with a higher risk of CRVO is supported by the finding that history of stroke and peripheral arterial disease are associated with higher incidence of CRVO [7,9,10].
Inflammatory cytokines, chemokines and neurotrophic factors have been investigated in the vitreous of patients with retinal vascular diseases due to diabetes or retinal vein occlusion. VEGF is among the most investigated, as anti-VEGF agents are implemented in therapy [3,11]. Elevated levels of inflammatory immune mediators such as IL-6, IL-8, and CCL2 were reported in central and branch RVO, diabetic macular edema, proliferative diabetic retinopathy and retinal detachment [12]. Others found significantly higher levels of IL-1β, IL-2, IL-5, IL-8, IL-9, IL-10, IL-12, IL-13, CCL11, G-CSF, IFN-γ, CXCL10, CCL2, CCL4, TNF, and VEGF specifically in CRVO [13]. An association between the expression of inflammatory factors and the severity of macular edema was observed in CRVO [14]. Levels of VEGF, IL-6, sICAM-1 and PEDF correlated independently with vascular permeability. These factors were higher in CRVO than in controls, higher in ischemic versus non-ischemic CRVO, and correlated with macular edema in optical coherence tomography [14].
Analysis of plasma levels of atherosclerotic and thrombophilic risk factors demonstrated that arterial hypertension, hypercholesterolemia, hyperhomocysteinemia and elevated factor VIII were associated with an increased risk for ischemic versus non-ischemic CRVO [15]. We set out to simultaneously investigate the expression of 40 proteins associated with inflammation, hypoxia, angiogenesis and atherosclerosis in vitreous and blood samples of patients undergoing RON (radial optic neurotomy) for clinically defined ischemic CRVO, and to compare them to a control group of patients receiving surgery for epiretinal gliosis or a macular hole. Criteria for the selection of the proteins to measure were solubility in the cytoplasm (as we did not expect cells or cell membranes in the vitreous), a context with angiogenesis and inflammation, and availability from the provider of the array. Our data suggest that distinct chemokines (CCL2) and growth factors (HGF) may represent valuable targets for novel therapeutic approaches to treat or prevent ischemic complications in CRVO patients. The observations also support epidemiologic data regarding risk factors such as atherosclerosis.
Ethics statement
All patients gave their written informed consent prior to their inclusion in the study. The study was registered as experimental laboratory investigation at the Center of Clinical Trials and approved by the Institutional Review Board of the University Freiburg (No 215/08) and performed in accordance with the IRB's requirements, with the ethical standards laid down in the 1964 Declaration of Helsinki and with the federal laws in Germany.
Patients and study design
Patients with ischemic CRVO were recruited between 2005 and 2006. At the time of sample acquisition, radial optic neurotomy was thought to be a valuable surgical approach for ischemic CRVO. However, this technique did not fulfil expectations [16]. In recent years, intravitreal anti-VEGF treatment has been introduced to treat macular edema secondary to CRVO. It is currently the new standard of treatment for both non-ischemic and ischemic CRVO, and surgical approaches are left to rare severe cases. CRVO patients with ischemic occlusive disease, indicated either by nonperfusion in fluorescence angiography (> 10 disc diameters), visual acuity > 1.0 log MAR, and/or clinical findings such as dark hemorrhages, a high number of cotton wool spots, or massive leakage of the vessels and papilledema [17], were selected for vitrectomy and radial optic neurotomy (n = 13). Duration of CRVO was defined as time from onset of symptoms until surgery. Neovascularizations of the iris were found in 2/13 patients. Control specimens were collected from 13 patients undergoing vitrectomy for macular pucker and macular hole. Patient data is presented in Table 1. CRVO patients did not show differences compared to control regarding age and the risk factors arterial hypertension, diabetes, and history of stroke. Significantly more CRVO patients presented with hyperlipidemia, history of smoking, glaucoma and use of anticoagulants (aspirin or phenprocoumon), indicating a higher prevalence of cardiovascular diseases known as risk factors for CRVO.
A standard 3-port vitrectomy was performed during surgery. Sample acquisition was achieved as the first step of the surgery avoiding dilution by the infusion. Depending on clinical findings, additional procedures such as laser photocoagulation, intravitreal administration of triamcinolone or bevacizumab could be included at the end of surgery. Samples (200-400 μl each) were immediately stored at -80°C until further investigation.
Patients with other proliferative eye diseases, such as uveitis or diabetic retinopathy, or patients with intraocular surgery within the last 6 months, or history of vitrectomy, were excluded from the study.
Measurement of proteins
Concentrations of various proteins from vitreous and blood samples were measured with an ELISA-type antibody microarray (Quantibody, Raybiotech Inc., Norcross, GA) following the manufacturer's instructions. Antibodies for each protein were arrayed in quadruplicates per array. 80 μl of vitreous or blood was used for each sample. The detection antibodies were labelled with biotin which was detected with Alexa Fluor 555-conjugated streptavidin. The signals were read with a G2565 microarray reader (Agilent Technologies, Santa Clara, CA). TM4 Spotfinder (http://www.tm4.org, [18]) was used for quantification of the spots. The concentrations of the proteins were calculated from the median intensities of the spots using standard curves obtained with a mix of the 40 peptide standards. Detection limits were calculated from the standard curves with DINTEST (http://www.luiw.ethz.ch/computer/software/) according to DIN 32645. Protein concentrations were determined using the Bradford assay with BSA as a standard [19] as the BCA (Bicinchoninic acid) test resulted in erroneously high values if the proteins were not precipitated.
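The quantification step above maps median spot intensities back to concentrations through standard curves. A common way to do this for ELISA-type array data is a four-parameter logistic (4PL) fit; the sketch below illustrates the idea on synthetic numbers. The model choice, parameter values, and data are illustrative assumptions, not the study's actual calibration.

```python
# Sketch: recovering a concentration from an ELISA-type array spot via a
# standard curve.  A four-parameter logistic (4PL) model is a common choice
# for such curves; the parameters and data here are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL model: spot signal as a function of concentration x (pg/ml)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Synthetic standard series (pg/ml) and their median spot intensities.
std_conc = np.array([7.8, 31.2, 125.0, 500.0, 2000.0, 8000.0])
std_signal = four_pl(std_conc, 200.0, 1.2, 400.0, 60000.0)

popt, _ = curve_fit(four_pl, std_conc, std_signal,
                    p0=[200.0, 1.0, 500.0, 60000.0], maxfev=10000)

def signal_to_conc(y, a, b, c, d):
    """Invert the fitted 4PL to map a spot signal back to a concentration."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

spot = four_pl(1000.0, 200.0, 1.2, 400.0, 60000.0)  # spot with known truth
conc = signal_to_conc(spot, *popt)                  # ~1000 pg/ml
```

Inverting the fitted curve, rather than interpolating linearly between standards, keeps the read-back consistent over the sigmoidal range of the assay.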
Mean concentrations for CRVO and control groups were compared by nonparametric comparisons (R package nparcomp, http://www.r-project.org/) with Tukey's correction for multiple comparisons. Correlation between concentrations and the time after CRVO was determined by the Pearson product-moment correlation coefficient. p < 0.05 was considered significant. Correlations among proteins as well as among patients were tested in R with corr (psych package) using the Spearman coefficient and Holm adjustment for multiple comparison. Biochemical pathways were analyzed by enrichment analysis (EnrichmentBrowser and gage in Bioconductor) against the KEGG pathways database (http://www.genome.jp/kegg/) and the gene ontology database (http://geneontology.org/) for the CRVO patients and vitreous samples with 24 genes showing expression above background. A complete list of factors tested, gene symbol, gene ID and gene name is provided as supplementary S1 Table.

[Table 1 (fragment spilled into the text): CRVO patients (n = 13) vs. controls (n = 13): age (mean ± SD) 74.6 ± 9.7 vs. 69.5 ± 9.8 years; duration of CRVO 9.4 ± 5.9 weeks (CRVO only); neovascularisation of the iris at time of surgery 2/13 (15%) vs. 0 (0%); neovascularisation of the disc or elsewhere in the retina 0 (0%) in both groups; mean visual acuity 1.6 ± 0.48 log MAR (CRVO only); arterial hypertension 10/13 (77%) vs. 11/13 (85%); diabetes 0 (0%) in both groups.]

Immunohistochemistry

Enucleated eyes from two female patients (88 and 61 years old) with severe ischemic CRVO followed by neovascular glaucoma were investigated for expression of the proteins that showed higher concentrations in the vitreous than in blood. Sections (5 μm) of paraffin embedded eyes were dewaxed and demasked for 20 min in 100 mM sodium citrate, pH 6.0, in a steamer. After transfer to TBST (50 mM Tris / HCl pH 7.6, 0.9% NaCl, 0.02% Tween 20), sections were blocked with Ultra V Block (Lab Vision at medac GmbH, Wedel, Germany) and incubated with antibodies as listed in Table 2 for 3 h.
After washing in TBST, an AP-labelled goat anti-mouse secondary antibody (A3562, Sigma-Aldrich, Taufkirchen, Germany) was applied for 1 h, or a biotin-labelled goat anti-rabbit secondary antibody (71-00-30, KPL, Gaithersburg, MD, USA) was applied for 1 h followed by streptavidin-coupled AP (71-00-45, KPL) for 1 h. Slides were washed and AP was made visible by the Vector Red AP Substrate Kit I (SK-5100, Vector Labs at Axxora, Lörrach, Germany). Sections were counter-stained with hematoxylin.
Concentration of various proteins in the vitreous or blood
The blood concentrations of the proteins investigated in this study were similar in CRVO patients compared to control patients (range of the ratios between 0.22 and 1.93, median 1.04, Table 3). In contrast, the concentration of total protein in the vitreous of CRVO patients was 6.4-fold elevated compared to that of control patients (Table 3). Proteins like FGF6, FGF7, MMP1, TIMP1, and TIMP2 did not show enhanced vitreous concentrations in CRVO patients compared to controls. This indicates that there was not only a break-down of the blood-retina barrier but also a local production within the eye or a selective transport of proteins. The total increase of the proteins measured in this study was 1.1 μg/ml in the vitreous (mainly contributed by PPBP), while the increase in total vitreal protein was 2.8 mg/ml, indicating a 2500-fold impact of blood-retina barrier break-down as compared to ocular protein expression. Taking into account the ocular expression of proteins not measured in this study, the factor will be somewhat smaller than 2500. In CRVO patients, most of the proteins investigated had significantly higher concentrations in blood than in the vitreous. However, protein concentrations (ratio vitreous / blood) of CCL2 (4.2), HGF (23.3), IGFBP2 (1.23), MMP10 (2.47), TNFRSF11B (2.96), and VEGFA (29.2) were significantly higher in the vitreous than in the blood of the same patient (p < 0.05, Table 3). This indicates that the proteins showing higher concentrations in the vitreous were, at least partially, produced within the eye or actively transported there. In control patients, only HGF showed significantly higher concentrations in the vitreous than in blood. Statistical analysis of the correlations among proteins or patients was not conclusive, most probably because of the
Dependence of the protein concentrations on the time after occlusion
The time between the onset of symptoms due to CRVO and the time point at which the vitreous specimen was taken was different for each patient (time after occlusion, mean: 9.4 ± 5.9 weeks). As surgery was performed once in every patient, specimens could not be taken at different time points which limits the interpretation of the results. We compared the concentrations of the proteins measured to the time after occlusion (Table 3). PPBP (-0.73, p<0.05) and CCL2 (-0.55, p<0.05) showed a negative correlation (Fig 1) that may reflect an increased selective permeability for certain small proteins or an increased inflammatory or angiogenic state shortly after CRVO that is repaired with time. Similar tendencies, though not statistically significant, were found for IGFBP2, IL6, MMP3, TIMP4, and VEGFA. In contrast, LEP showed a positive correlation (0.67, p<0.05).
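The time-course analysis above (Pearson product-moment correlation between vitreal concentration and time after occlusion) can be reproduced in a few lines. The data below are synthetic stand-ins for a declining marker such as PPBP or CCL2 in n = 13 patients, not the study's measurements.

```python
# Sketch: Pearson correlation between a vitreal marker and time after
# occlusion, as computed in the study.  The data are synthetic stand-ins
# for a declining marker (n = 13 patients), not measured values.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
weeks = rng.uniform(2.0, 20.0, size=13)                  # time since onset
conc = 500.0 - 18.0 * weeks + rng.normal(0.0, 40.0, 13)  # declining marker

r, p = pearsonr(weeks, conc)
print(f"r = {r:.2f}, p = {p:.4f}")  # a clearly negative correlation
```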
Localization of selected proteins in the human eye
Ocular localization of the proteins that showed significantly higher expression in the vitreous than in blood (CCL2, IGFBP2, MMP10, HGF, TNFRSF11B) was investigated in histological specimens of eyes from patients with painful blindness due to secondary glaucoma after CRVO (Fig 2). Staining for all these factors was found preferentially in the retina and in the retinal pigment epithelium (RPE) but to a much lesser extent in the optic nerve head or extrascleral nerves. Staining intensity was higher in ocular areas affected with inflammation. HGF was additionally found in the endothelium and media of some but not all extrascleral vessels. Staining for GFAP (glial marker), IBA1 (microglial and macrophage marker), and COL IV (marker for basement membranes of vessels) was used for comparison.
Discussion
Retinal vein occlusion and subsequent ischemia are followed by the release of cytokines, growth factors and enzymes which contribute to severe vision loss due to retinal edema and neovascularization. Previous studies in patients suffering from retinal vein occlusions detected a range of proteins in the vitreous [20,21]. Vitreal VEGFA concentrations were also determined earlier [11,22,23] and used as a reference in this study. In contrast to previous reports, we had the opportunity to assess distinct protein levels in the vitreous as well as in blood samples of patients following ischemic CRVO and compare them to unrelated controls. This allowed us to further characterize CRVO-specific changes in vitreal protein expression patterns and to gather evidence for ischemia-induced localized expression of distinct proteins as opposed to a release from blood. Most of the proteins that were found to be more prevalent in the vitreous of CRVO patients than in controls appear to be strongly related to hypoxia, inflammation or angiogenesis. VEGFA, ADIPOQ, ANGPT2, CCL2, IGFBP1, and LEP share a common hypoxia-response element (HRE) at their promoter or intron [24][25][26][27], indicating that they are regulated by HIF1A or HIF2A. In addition, IGFBP2 and IGFBP3 are known to be up-regulated upon hypoxia [28,29], but it is currently unclear if they are upregulated by HIF or by one of its target genes. Three of these factors (VEGFA, CCL2 and IGFBP2) showed significantly higher levels in the vitreous than in the blood of CRVO patients. These data are clearly consistent with the activation of hypoxia-induced gene networks and a localized intraocular expression of specific proteins due to a hypoxic state. The transcription factor NFKB is upregulated by hypoxia. It has a central role in inflammation as it induces IFNG [30] and IL6 [31]. Both were detected in the vitreous of CRVO patients and significantly increased compared to controls.
In atherosclerosis, CCL2 (also named MCP-1) is involved in initial steps of inflammation by attracting monocytes, T-cells and dendritic cells [32,33]. These data strongly support the notion that CRVO is inducing an inflammatory response in the vitreous.
VEGFA, HGF, MMP3 and MMP9 share common ETS1 binding sites in their promoter region [34][35][36]. The transcription factor ETS1 is expressed in endothelial cells and upregulates genes involved in angiogenesis. ETS1 itself is induced by angiogenic factors like VEGF, HGF, or FGF2 resulting in a positive feed-back loop [37,38]. Moreover, expression of ETS1 is induced by HIF1A [39] linking angiogenesis to hypoxia in addition to the up-regulation of VEGFA by HIF. The metalloprotease MMP10 is induced by the transcription factor MEF2 in response to VEGFA [40,41]. This indicates that significant vitreal levels of angiogenic signaling factors are present in ischemic CRVO before neovascular changes are clinically apparent. In addition, these findings point towards a set of angiogenic target proteins including HGF and selected MMPs which may be amenable to pharmacological intervention.
HGF was found to be increased in vitreous samples of patients with proliferative diabetic retinopathy and was higher in vitreous than in blood similar to our results [42]. The intraocular expression of CCL2, HGF, IGFBP2, MMP10, and TNFRSF11B was confirmed by immunohistochemistry in eyes of patients with secondary glaucoma after RVO. This validates some of our earlier results and provides strong evidence that these proteins are expressed in ocular tissues. For most of them, expression within the eye has been reported earlier: CCL2, HGF [43,44], IGFBP2, MMP10, TNFRSF11B, and VEGFA [45] demonstrating that at least one cell type in the eye can produce these proteins under certain conditions. Current ocular treatment is focused on anti-VEGF agents and anti-inflammatory steroids. Our data may add therapeutic targets to improve current anti-VEGF therapy in ischemic CRVO. Further investigation in the factors associated with hypoxia, inflammation and angiogenesis in ischemic CRVO may also lead to new therapeutic approaches to prevent conversion from non-ischemic to ischemic CRVO.
We also asked whether our limited sample reflects known risk factors for retinal vein occlusion such as metabolic syndrome (diabetes, hypertension, hyperlipidemia (> 1 factor)), atherosclerosis of the central retinal artery, history of stroke, and peripheral artery disease. More CRVO patients presented with one or more risk factors compared to controls (most pronounced differences in hyperlipidemia, smoking and use of anticoagulation). This is in line with previous findings: analysis of plasma levels of atherosclerotic and thrombophilic risk factors demonstrated that arterial hypertension, hypercholesterolemia, hyperhomocysteinemia, and elevated factor VIII were associated with an increased risk for ischemic versus non-ischemic CRVO [15]. Our findings stress the need for careful work-up of ischemic CRVO patients to detect risk factors and adequately treat all the patient's diseases.
Pathophysiology of CRVO is not yet completely clear, but it is agreed that atherosclerotic changes of the retinal arteries contribute to the disease [46]. With regard to the vitreal proteins detected in CRVO patients, TNFRSF11B is a marker of atherosclerosis [47], though its pathophysiological role is yet unclear [48]. Similarly, serum concentration of TIMP4 is increased in systemic sclerosis [49]. TIMP4 is the major MMP inhibitor in platelets and is released upon platelet aggregation induced by collagen and thrombin [50,51]. MMP10 is upregulated by thrombin in endothelial cells and enhances fibrinolysis [52,53]. PPBP is expressed upon platelet activation during thrombus formation [54]. PPBP expression is induced by MMP3 [55]. Both MMP9 and CCL2 are associated with atherosclerosis [56], where CCL2 attracts monocytes that mature into macrophages and produce MMP9. This cleaves components of the extracellular matrix within the atherosclerotic plaques. Thus, several of the vitreal proteins we detected are consistent with an atherosclerotic phenotype. Since data on the vitreal protein expression patterns preceding the retinal vein occlusion are not available, it remains challenging to dissect which vitreal proteins reflect an underlying chronic condition rather than an acute occlusion response.

Fig 2. Immunohistochemical staining (alkaline phosphatase, red; blue counter staining: hematoxylin) for CCL2, HGF, IGFBP2, MMP10, and TNFRSF11B in ocular samples of various patients. These factors were found preferentially in the retina and additionally in nerves and in the RPE. TNFRSF11B was found in the axons and some nuclei of extrascleral nerves. HGF was additionally found in the endothelium and media of some but not all extrascleral vessels. GFAP (glia cell marker), IBA1 (microglia and macrophage marker), and COL IV (basement membrane marker, e.g. in the basement membrane of vessels and in the membrana limitans interna) are shown for comparison. Note that GFAP is not expressed in the outer segments of the photoreceptors as is the case for CCL2, HGF, IGFBP2, MMP10, and TNFRSF11B. neg: negative control without primary antibody, a: artery, n: nerve, ONL: outer nuclear layer, green arrow: RPE. doi:10.1371/journal.pone.0126859.g002
In summary, ischemic CRVO is characterized by increased vitreal levels of a distinct set of proteins, some of them locally expressed, which may serve as targets for novel therapeutic approaches to augment current anti-inflammatory and anti-angiogenic treatments.
Supporting Information

S1 Table. Complete list of factors tested, with gene symbol, gene ID and gene name.
Recent Trends in Covalent and Metal Organic Frameworks for Biomedical Applications
Materials science has seen a great deal of advancement and development. The discovery of new types of materials sparked the study of their properties, followed by applications ranging from separation, catalysis, optoelectronics, sensing, drug delivery and biomedicine, to many other uses in different fields of science. Metal organic frameworks (MOFs) and covalent organic frameworks (COFs) are a relatively new type of materials with high surface areas and permanent porosity that show great promise for such applications. The current study aims to present the recent work achieved on COFs and MOFs for biomedical applications, and to examine some challenges and future directions which the field may take. The paper herein surveys their synthesis, and their use as Drug Delivery Systems (DDS), in non-drug-delivery therapeutics, and for biosensing and diagnostics.
Introduction
Supramolecular chemistry, or "chemistry beyond the molecule", has presented a new paradigm for molecular engineers [1,2]-an achievement that merited the 1987 Nobel Prize [3]. Supramolecular chemistry has allowed researchers to design molecules with custom properties, like chirality [4]. The development of supramolecular chemistry has opened new subfields of nanoscience. Nanoparticles such as liposomes, micelles, and other small polymers have already been designed for biomedical applications, such as drug delivery [5][6][7]. Covalent organic frameworks (COFs) and their metal organic framework (MOF) cousins are exciting, medically relevant nanomaterials made possible by this new chemistry.
The first reported COFs were the boron-ring-based COF-1 and COF-5, synthesized by Côté and coworkers in 2005 [4]. A COF is a two-dimensional (2D) or three-dimensional (3D) structure with a backbone of light atoms [8], which are held together by strong covalent bonds [9]. Desirable properties of COFs include regular porosity, crystallinity, and high Brunauer-Emmett-Teller surface area (S_BET), which were demonstrated in Côté's groundbreaking work [4,10]; other properties include well-defined pore aperture, ordered channel structure, low density, stability, mechanical strength, and a wide band gap [8,11,12].
Numerous applications of COFs have been proposed, including separation, catalysis, optoelectronics, sensing, as well as drug delivery and other biomedical uses. COFs have been demonstrated as agents for gas and small-molecule separation, adsorption, and detection [8]. They can be made selective to particular gases, such as CO2, CH4, and H2; and they may be mounted on a solid support like ceramic or Al2O3 [8]. They have been designed for selectivity towards trace amounts of different analytes such as NH3 [13,14], food dyes, and uranium [8]. Catalytic COFs have also been reported.

Much of this recent work has been characterized by genuine scientific progress coupled with some emerging drawbacks [34][35][36][37]. Recent advances include the use of modulators to improve crystallinity of frameworks; permanent porosity; and the linking of proteins inside COFs and MOFs [9]. Progress has been made toward solving the "crystallization problem" [38], which historically has hindered the construction of crystalline extended structures with metal-charged-ligand or nonmetal-nonmetal bonds. On the other hand, disadvantages, particularly for MOFs, abound. These frameworks are frequently synthesized using toxic metals, linkers, and/or solvents [26], and may be chemically or thermally unstable [27]; certain first-generation MOFs have been known to collapse upon removal of their guest molecules [27].
Nevertheless, nanocarriers like COFs and MOFs hold exceptional promise with respect to four basic facets of biomedicine, according to Horcajada [26]: 1. Cell-or tissue-specific targeted drug delivery, 2. Transport of drugs across barriers, 3. Delivery to intracellular sites, and 4. Visualization of drug delivery sites, e.g., theranostics, the fusion of diagnosis and treatment [39].
Furthermore, properties particular to COFs and MOFs make them especially suitable for biomedicine, including their large surface areas; biodegradable and biocompatible structures [40][41][42]; newly described functionalization of COF and MOF scaffolds [43]; and the amenity of both the inner and outer surfaces of these frameworks to functionalization, thus uniting diagnosis and treatment [36,37,44,45].
In view of the absence of a recent review of COFs and MOFs for biomedical applications, this study aims to survey current work, and to examine some challenges and future directions the field may take.
Progress in MOF-based drug delivery has had the advantage of time; research on MOF-DDS was ongoing even as COFs were being discovered. MOFs have been engineered to deliver both endogenous substances and synthetic drugs. One such endogenous substance is nitric oxide (NO), a biologically active signaling molecule that is commonly employed in surgery and dialysis; it is critical for clotting, the nervous system, and the immune system. Highly crystalline
COFs as Drug Delivery Systems
Despite being a young field, the implementation of COFs in drug delivery has already begun bearing fruit. The first report of COF-DDS came in 2015; this pioneering work by Fang demonstrated effective IBU release from a polyimide covalent organic framework (PI-COF) with drug loading as high as 24 wt % [22]. The system showed a good release profile, tied to the COF's pore size and geometry. Another early attempt employed PAF-6, an aromatic framework, for IBU delivery [11]. This nontoxic, biodegradable PAF was produced under mild synthetic conditions, without a metal catalyst, and it showed regular, 2D sheets with π-π stacking. Compared to MOFs already studied in the context of drug delivery, such as MIL-53 and MIL-101, PAF-6 showed high release rate and outperformed inorganic nanoparticles such as MCM-41 [11].
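For context, the "wt %" loading figures quoted here and throughout follow the usual convention: drug mass over total (drug plus carrier) mass; encapsulation efficiency relates the loaded drug to the amount offered during loading. The masses in this sketch are hypothetical.

```python
# Sketch: the conventional definitions behind the "wt %" drug-loading and
# encapsulation-efficiency figures quoted for COF/MOF carriers.  All masses
# below are hypothetical.
def loading_wt_percent(m_drug: float, m_carrier: float) -> float:
    """Drug loading: drug mass over total (drug + carrier) mass, in %."""
    return 100.0 * m_drug / (m_drug + m_carrier)

def encapsulation_efficiency(m_loaded: float, m_offered: float) -> float:
    """Share of the drug offered during loading that ends up in the carrier."""
    return 100.0 * m_loaded / m_offered

print(loading_wt_percent(24.0, 76.0))       # 24.0 wt %
print(encapsulation_efficiency(24.0, 40.0)) # 60.0 %
```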
More recent attempts have improved the performance and versatility of COF-DDS-even against multidrug-resistant cancers [64]. Tian synthesized a framework in situ for delivery of pemetrexed, a chemotherapeutic drug, in a pH-sensitive fashion [65]. This approach was shown to be effective against MCF-7 breast cancer in vivo and in vitro, and the new DDS overcame multidrug resistance in these cells by leveraging the EPR effect. The framework, taken up by endocytosis, showed better load efficiency than liposomes and had low inherent cytotoxicity. Bai used PI-COFs (endowed with amine groups to hook drug guests) to deliver 5-FU, IBU, and captopril in vitro [23]. This approach showed high drug loading (up to 30 wt %) and good release (days), and the COF pores expanded upon drug loading. Kandambeth used hollow spherical COFs to deliver DOX, a chemotherapeutic agent, with good release profile [57]. Quercetin, another anti-cancer drug, was released by an imine-based COF DDS, showing efficacy against breast cancer cells in vitro (MDA-MB-231) [66]. Liu produced a single-layer, photoresponsive COF, capable of being destroyed under ultraviolet light but recovering upon mild annealing; this technique demonstrated controlled capture and release of a guest molecule (copper phthalocyanine) [21].
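Release profiles like those summarized above are often condensed into an empirical kinetic model. One common choice, used here purely as an illustration and not necessarily by the cited studies, is the Korsmeyer-Peppas power law, Mt/Minf = k * t^n, fitted to the early part of the cumulative-release curve.

```python
# Sketch: condensing a cumulative-release curve into the empirical
# Korsmeyer-Peppas power law, Mt/Minf = k * t**n.  Model choice and data
# are illustrative; the cited studies do not necessarily use this model.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * t ** n

t_h = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])  # hours
released = 0.18 * t_h ** 0.45                    # fraction of drug released

(k, n), _ = curve_fit(korsmeyer_peppas, t_h, released, p0=[0.1, 0.5])
# On this synthetic data the fit recovers k ~ 0.18, n ~ 0.45; a low
# exponent n is commonly read as diffusion-controlled (Fickian) release.
```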
Several COFs have been designed to selectively target drug-delivery sites. Rengaraj synthesized a nano-covalent triazine polymer (CTP) to release DOX in a pH-sensitive manner [24]. After synthesis, the CTP was subjected to ultrasound and filtration to yield nano-CTP. Inherently fluorescent (allowing the researchers to track DDS movement in vitro), this material released DOX at the low pH (~4.8) commonly associated with cancer cells, as opposed to physiological pH (~7.4). When loaded with DOX, the DDS showed higher cytotoxicity against adenocarcinoma cells (the Henrietta Lacks, or HeLa, cell line) than free DOX. The material was shown to promote cell senescence, substantiated by upregulation of the genes p53 and p21, which are implicated in tumor suppression, response to deoxyribonucleic acid (DNA) damage, and apoptosis [67,68]. Mitra synthesized COFs for targeted delivery of 5-FU, another chemotherapeutic agent [56]. The COFs were produced using Schiff-base synthesis, followed by exfoliation and ultimately a series of post-synthetic modifications to add targeting ligands. Using amine groups as anchors, the investigators attached folic acid to the COFs to facilitate targeting of breast cancer cells in vitro (MDA-MB-231). This key targeting step led to receptor-mediated endocytosis of the DDS and, ultimately, apoptosis (programmed cell death). Apoptosis must not be confused with senescence (another goal of anti-cancer drugs), which refers to a cell in a non-dividing inert state.
Attempts have even been made to combine COFs with metal moieties for medicinal purposes. Luo produced a porphyrin-based covalent triazine framework, with and without manganese, to effectively deliver IBU in vitro, although the material was shown to be amorphous [69]. Still, the covalent nature of the framework, combined with porphyrin's good metal coordination, allowed for high drug loading (23 wt %), a good release profile, porosity, and thermal stability.
MOFs as Drug Delivery Systems
Several properties of MOFs make these nanocarriers ideal for drug delivery. First, the interaction between MOFs and guest molecules is tunable [70,71]. The relationship between MOF hosts and their guests is dynamic and selective; interactions can be predicted using simulation; and some MOFs even retain "memory" of their guests [72][73][74][75]. Second, some MOFs may be loaded with multiple drugs [76,77]. Third, the external surface of MOFs can be functionalized to promote coordinative binding [78], ligand exchange [79], and covalent binding to linking groups [80,81]. Fourth, MOFs can be designed for stimulus-responsive intracellular drug release [82]; both MOF polymers [82][83][84] and MOFs coated with lipid bilayers [85,86] have been reported.
Progress in MOF-based drug delivery has had the advantage of time; research on MOF-DDS was ongoing even as COFs were being discovered. MOFs have been engineered to deliver both endogenous substances and synthetic drugs. One such endogenous substance is nitric oxide (NO), a biologically active signaling molecule that is commonly employed in surgery and dialysis; it is critical for clotting, the nervous system, and the immune system. The highly crystalline copper-tricarboxylate-based MOF HKUST-1 has been shown to absorb and release NO upon contact with water vapor; NO has antibacterial, antiplatelet, and vasodilatory effects [87]. Unfortunately, HKUST-1 showed instability in water (a problem endemic to MOFs) and turned the aqueous solution green within hours of exposure to plasma. Nitric oxide was shown to bind directly to the Cu core in HKUST-1; in fact, Horcajada [26] affirms that "...every single MOF that has open metal sites seems to bind NO to a significant degree". This suggests potential for MOF-based NO delivery systems, perhaps by coating artificial surfaces (dialysis tubes, artificial valves, stents) with such materials.
Various other MOFs have been exploited for intriguing ends. Iron(III)-based MOFs such as MIL-53, MIL-88A, MIL-88Bt, MIL-89, MIL-100, and MIL-101-NH2 have been tested for effective in vivo delivery of anticancer and antiretroviral (HIV) drugs, such as DOX, azidothymidine, and busulfan [40]. Uptake and release of caffeine by MOFs have been studied [88], and platinum-based MOFs have effectively killed human colon carcinoma cells in vitro [27]. The first biodegradable MOF, Fe-based BioMIL-1, was made with organic linkers that themselves functioned as the drug [2,89]. The linker in this pioneering work was nicotinic acid, or niacin, a form of vitamin B, with vasodilating, anti-lipemia, and anti-pellagra properties. This linker, coupled with relatively nontoxic iron, should serve as a model for other MOF-DDS. MOFs with various mixed ligands have been studied for the controlled delivery of IBU and DOX [90].
MOFs have been the subject of theoretical study as well; multiple papers combine in silico studies with traditional wet chemistry. Babarao and coworkers conducted a complete computational analysis of IBU loading and release in a range of mesoporous MOFs and empirically confirmed stronger binding and slower release in MIL-101, as predicted by calculations [91]. The investigators accurately predicted high loading capacities (MIL-101: 1.38 g·g−1; MIL-53: 0.22 g·g−1, independent of the central metal, e.g., Cr or Fe) using Monte Carlo, molecular dynamics, and density functional theory techniques. They showed that guest molecules administered via a nanoporous vehicle behave differently than under normal bulk administration, and they predicted a carboxyl group rearrangement in MIL-101. A newer study by Li analyzed a 3D Cu(II)-carboxylate MOF carrying 5-FU and effective against spinal cord cancer in vitro; the authors accurately predicted the drug loading capacity (17.3 wt %). They also investigated the MOF's success at avoiding the undesirable burst effect, the rapid release of much of the drug payload [92]. A recent study led by Rojas analyzed the adsorption/desorption kinetics of various drugs (IBU, aspirin) and MOFs (MIL-100, UiO-66, MIL-127) [29]. The authors concluded that drug loading and delivery are influenced primarily by two factors: (1) structure, e.g., framework accessibility and drug volume; and (2) the MOF/drug hydrophobicity/hydrophilicity balance.
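Loadings quoted in g(drug)·g(carrier)−1, as for MIL-101 and MIL-53 above, and loadings quoted in wt % of the loaded composite, as elsewhere in this review, are two different conventions; converting between them is one line of arithmetic. A small sketch (the function name is ours):

```python
def gg_to_wt_percent(loading_g_per_g: float) -> float:
    """Convert loading in g(drug)/g(bare carrier) to wt % of the loaded solid."""
    return 100.0 * loading_g_per_g / (1.0 + loading_g_per_g)


# The predicted capacities quoted above:
print(round(gg_to_wt_percent(1.38), 1))  # MIL-101 -> 58.0 wt %
print(round(gg_to_wt_percent(0.22), 1))  # MIL-53  -> 18.0 wt %
```

Keeping the conventions straight avoids apparent discrepancies when comparing, say, the 1.38 g·g−1 value for MIL-101 with the 17.3 wt % value quoted for the Cu(II) MOF.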
Recent studies have yielded MOFs both sensitive to stimuli and capable of targeted drug release. Many syntheses of targeted MOF-DDS rely on post-synthetic modification, e.g., attaching ligands (such as folic acid or special substrate peptides that can bind cancer cells) to the MOF [93,94]. Chen designed a DOX-loaded nanoMOF coated with a nucleic acid hydrogel, sensitive to adenosine triphosphate (ATP), so that the drug was released in the presence of the high concentrations of ATP commonly found in cancer cells [63]. Advantages of hydrogel-coated MOFs over traditional MOFs, exemplified in this study, include higher drug loading, lower leakage, and greater cytotoxicity toward cancer cells. Remarkably, this DDS was selective to ATP alone; that is, other nucleotide triphosphates (GTP, CTP, TTP) did not trigger drug release. This UiO-68-type, Zr-based MOF was effective against breast cancer (MDA-MB-231) in vitro and employed a DNA switch as a release trigger. (A DNA switch, seen in DNA machines, hydrogels, and sensors, is a supramolecular structure that reversibly reconfigures itself in the presence of signals such as pH, enzymes, or light [95].) DNA was linked to the MOF via click chemistry, and the overall DDS showed minimal cytotoxicity toward healthy cells. In a subsequent study, Chen constructed another Zr-based, DOX-loaded nanoMOF, modified with a nucleic acid sequence complementary to a vascular endothelial growth factor (VEGF) aptamer [95]. VEGF is over-expressed in conditions like macular degeneration, diabetic retinopathy, rheumatoid arthritis, bronchial asthma, and diabetes mellitus; an aptamer is a nucleotide or peptide molecule that binds to a specific target molecule. As in the previous study by the same investigator, a DNA switch was used to trigger the release of DOX in the presence of VEGF. The aptamer was used to target specific receptors (e.g., nucleolin) on cancer cells. This nanoMOF exhibited performance superior to that of SiO2, with higher drug loading, better dispersion, and comparable background release.
Hybrid MOFs have begun to appear in drug delivery contexts. Liang [96] produced a core-shell protein@ZIF entity that released DOX at low pH, similar in concept to the pH-triggered drug release investigated by Rengaraj [24] and Tian [65]. This protein@ZIF hybrid, effective against breast cancer (MCF-7) in vitro, consisted of a DOX/Bovine Serum Albumin (BSA) core surrounded by a ZIF-8 shell. In environments with low pH (5.0-6.0), dissociation of the ZIF was triggered, promoting drug release [96]. Impressively, this hybrid showed crystallinity, effectively no leaching of drug, and better biocompatibility than bare ZIF-8. The investigators demonstrated their ability to adjust the hybrid particle's size by varying the concentration of NaCl during synthesis. MOFs have also been combined with graphene oxide (GO) to form MOF/GO composites for efficient, controlled, and tunable delivery of drugs [97].
MOF-based DDS have generated much interest in the chemical community at large, and MOFs' utility in other areas of interest to biomedicine has been demonstrated as well.
Photothermal Therapy
Photothermal therapy is a modern antitumor treatment whereby targeted radiation excites a photosensitizer molecule, which in turn generates heat and kills cancer cells by thermal ablation. This phenomenon has been demonstrated with some success in graphene [60]. Covalent photosensitizers are believed to be advantageous for this technique, as they tend to be biocompatible, efficient photoconverting agents. This technique is outlined in Figure 3 (along with other non-drug delivery therapeutic techniques that involve COFs and MOFs).
Some COFs have been designed with photothermal therapy in mind. Tan used a self-sacrificial template to construct a COF with an integrated Fe3O4 core and efficient photoconversion ability, allowing for rapid killing of HeLa cells in vitro [103]. The researchers were able to modulate shell thickness and sphere cavity size, and the COF showed low inherent cytotoxicity. Irradiation by near-infrared (NIR) laser caused rapid heating (up to 24 °C within six minutes), and the authors suggested targeted delivery using a magnetic field in vivo, owing to the magnetic nature of the Fe3O4 cores. A subsequent study by the same author [60] was the first reported demonstration of an imine-based COF with photoconversion ability (again associated with an Fe3O4 core), thanks in part to the COF's layered π-π stacking. Although a large, quick temperature change was observed and the COF shell enhanced light absorption, several drawbacks were encountered: the investigators reported difficulty in modulating the COF during growth, due to inconsistent Ostwald ripening.

MOFs for photothermal therapy have been described as well. Photosensitizers can be incorporated into the MOFs' structure, as reported for fourth-generation MOFs [104]. Wang reported a polymer-MOF (UiO-66) composite for photothermal therapy that was shown to be effective against colon cancer cells in vivo [105]. A recent investigation [39] reported a core-shell structure for synergistic photothermal therapy and chemotherapy, with efficacy both in vitro (breast carcinoma line 4T1) and in vivo (90% tumor suppression in mice). A mesoporous ZIF-8 MOF shell was deposited around a single gold nanorod core and loaded with DOX, yielding a hybrid material with dual sensitivity to low pH and NIR irradiation. The synergistic approach produced much more thorough tumor suppression (~90%) than irradiation alone (~58%) or drug delivery without NIR irradiation (~30%). This new approach presents substantial advantages: high drug loading, stability in physiologic-like media, and biocompatible components with efficient light-to-heat conversion. The chemical-radiative synergy helps overcome drawbacks inherent to other approaches: some organics may release drugs inefficiently, and some inorganics are difficult to synthesize and may not be biocompatible.
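Reported temperature rises like the one above can be put in rough perspective with an idealized lumped heat balance. Every number below is hypothetical (laser power, conversion efficiency, and sample mass are illustrative assumptions, not values from the cited studies), and real experiments lose heat to conduction and convection, so this is an upper-bound sketch only:

```python
def temperature_rise(power_w: float, efficiency: float, t_s: float,
                     mass_g: float, c_j_per_g_k: float = 4.18) -> float:
    """Idealized lumped-capacitance estimate of heating (in K).

    Assumes all absorbed laser energy (power * efficiency * time) stays in
    the sample, which has the heat capacity of water by default; no losses.
    """
    return power_w * efficiency * t_s / (mass_g * c_j_per_g_k)


# Hypothetical: a 1 W NIR laser, 30% photothermal conversion efficiency,
# 6 min (360 s) exposure, 1 g of aqueous suspension.
print(round(temperature_rise(1.0, 0.30, 360.0, 1.0), 1))  # ~26 K
```

Even this loss-free estimate shows why conversion efficiency and sample volume dominate photothermal performance: halving the mass or doubling the efficiency doubles the predicted rise.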
Photodynamic Therapy
Another modern anti-tumor treatment is minimally invasive photodynamic therapy: a photosensitizer generates reactive oxygen species (ROS, such as singlet oxygen, ¹O₂) in response to irradiation, and the ROS go on to kill cancer cells [106]. Photosensitizers must have high quantum yields, long lifetimes, and good Stokes shifts. Typical photosensitizers, like porphyrin, present certain problems related to their hydrophobic nature and π-π interaction, ultimately leading to aggregation and inefficient ROS generation.
COFs, on the other hand, are theorized to be excellent photosensitizers. They are covalent networks with periodicity-a trait not shared with CMP, CTF, and POP molecules. In addition, COFs possess desirable photosensitizer characteristics that MOFs do not: large, accessible pores, exceptionally well-ordered structure, low density, and thermal stability [107].
A few COFs have been designed with photodynamic therapy in mind.
Using an imine-condensation reaction, Lin synthesized a porphyrin-based COF (3D-Por-COF) to effectively generate ROS [108]. While the integration of photoelectric moieties into COFs had thus far proven elusive, the investigators nevertheless reported effective ¹O₂ generation under photoirradiation and were able to modulate the COF's properties by metalating the porphyrin rings. ROS generation was effective over multiple cycles, and the framework structure was maintained throughout. Bhanja created a novel N-containing COF (EDTFP-1) for photodynamic therapy [107]. The ROS-generating ability of this material was tested with a number of cell lines in vitro over a range of pH values, effectively triggering apoptosis in cancer cells via a p53-mediated pathway.
MOFs, too, have found some success as photodynamic therapy agents. A composite MOF/imine-based organic polymer (UNM) with a core-shell structure was designed by Zheng for ROS generation [106]. UNM boasted porosity and high surface area, but the covalent shell was shown to be amorphous. ROS generation was inhibited by vitamin C, a recognized ROS scavenger. The material was shown to be effective against HeLa cancer cells in vitro with uptake via ATP-mediated endocytosis; interestingly, the apoptosis was concluded to be dose-dependent, i.e., governed by radiation power and time. Other MOFs, mostly based on porphyrin, have been designed for photodynamic therapy. A Zr-porphyrin MOF known as PCN-222 was originally designed for general biocatalysis [109] and has recently been adapted for anticancer photodynamic therapy [110]. One group was able to modulate the size of its porphyrin-based MOF for effective, targeted photodynamic therapy [111]; another group decorated a MOF's surface to improve stability and photodynamic activity against cervical cancer cells [112]. Other groups have designed chlorin-based MOFs for photodynamic therapy [113] as well as photodynamically-active MOFs that target mitochondria [114].
Two groups have devised novel ways to employ MOFs for photodynamic therapy in hypoxic environments; using two distinct methods, they equipped MOFs with chemical machinery capable of generating oxygen. Li embedded catalase (an enzyme capable of converting hydrogen peroxide into oxygen) into a photoactive MOF, thus ensuring that photodynamic activity could continue in oxygen-poor tissues in vivo [115]. Impressively, another enzyme was also included to effectively starve cancer cells. Recently, Lan reported the inclusion of Fe3O clusters in a MOF to convert endogenous H2O2 into O2 for photodynamic therapy [116].
Adsorption of Heavy Metals
Heavy metals are implicated in a variety of health problems, especially neurologic diseases. Although many studies of heavy metal uptake in COFs and MOFs have focused on environmental pollution and water remediation, it is possible that these materials will one day serve as antidotes to toxic metal poisoning in humans as well. COFs offer some tantalizing characteristics needed for metal adsorption, like the ability to graft coordination groups to their porous base; fast, selective, high-capacity adsorption; and regular pores, making them preferable to molecular sieves [117]. However, not all coordination groups are compatible with current synthetic methods.
COFs promise brilliant performance in mercury removal from aqueous media. Mercury is a common pollutant, implicated in neurologic afflictions like Minamata disease, which leads to convulsions and death [118]. The use of porous adsorbents is cheaper and simpler than chemical methods of trapping Hg. COF-LZU8, a pioneering material, combined detection of mercury (by fluorescence) with removal. This material, developed by Ding [119], employed a thioether group as an Hg2+ receptor and fluorescence quencher. Sensitive to very low concentrations (25.0 ppb), recyclable, and selective to Hg2+, COF-LZU8 set a high bar for this subfield despite some problems with dispersion. More recently, Huang made TAPB-MBTTPA-COF to selectively trap Hg(II) from aqueous media via thioester ligation [118]. This imine-linked COF contained high sulfur content (15.5 wt %) and remained stable in both acidic and basic media. It captured Hg (up to 734 mg·g−1) more efficiently than COF-LZU8, and was recyclable, sensitive (10 ppm), and selective to Hg2+. Another paper, by Sun [117], described COF-V's effective removal of both Hg2+ (up to 1350 mg·g−1) and Hg0 (up to 863 mg·g−1) from gaseous and aqueous media via exploitation of Hg-π interaction. This recyclable COF contained a vinyl moiety to allow for straightforward PSM.
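Capacities like 734 mg·g−1 are saturation values; at lower solution concentrations the equilibrium uptake is smaller. A common (though by no means the only) way to model this is a Langmuir isotherm; the affinity constant below is purely hypothetical and not taken from the cited studies, so the sketch illustrates the shape of the curve rather than any reported fit:

```python
def langmuir_uptake(c_mg_l: float, q_max_mg_g: float, k_l_per_mg: float) -> float:
    """Langmuir equilibrium uptake q (mg adsorbed per g adsorbent).

    q = q_max * K * C / (1 + K * C): uptake approaches the saturation
    capacity q_max as the equilibrium concentration C grows.
    """
    return q_max_mg_g * k_l_per_mg * c_mg_l / (1.0 + k_l_per_mg * c_mg_l)


# With the ~734 mg/g saturation capacity quoted for the Hg(II) COF and a
# hypothetical affinity constant K = 0.1 L/mg:
for c in (1.0, 10.0, 100.0, 1000.0):
    print(c, round(langmuir_uptake(c, 734.0, 0.1), 1))
```

The hyperbolic saturation explains why adsorbents with similar q_max can differ sharply in practice: a higher affinity constant means the material approaches capacity at much lower contaminant concentrations.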
Antimicrobial Activity
In an age of increasing antibiotic resistance, the development of alternative antibacterial agents is critical. Some groups have leveraged the sensitivity of microbes to reactive oxygen species, an approach known as photodynamic inactivation; other groups have used different approaches. Many groups have reported greater success against gram-positive bacteria than gram-negative bacteria; the latter are covered by a thick glycocalyx of negatively charged (and toxic) lipopolysaccharide, which enhances their resistance to photodynamic inactivation [123][124][125]. Much work has been reported on nanomaterials other than COFs [123,126,127] and MOFs [128][129][130][131][132][133] for photodynamic inactivation.
A few examples of COFs for photodynamic inactivation of microbes have been reported. Triazine-based COFs have been designed by Liu [134] for this goal. These COFs (COF-SDU1 and COFs-Trif-Benz) were effective against Escherichia coli (gram-negative) and Staphylococcus aureus (gram-positive). The most prominent example of COF-based photodynamic inactivation was recently reported by Hynek and coworkers [135]. They designed a 3D porphyrin-based COF that effectively generated singlet oxygen under visible-light irradiation at intensities as low as 1 mW·cm−2, and killed resistant Enterococcus faecalis (a gram-positive facultative anaerobe) as well as Pseudomonas aeruginosa (a gram-negative aerobe). Such bacteria are challenging: infectious-disease specialists must balance, on one hand, the urgency of treating infected patients and keeping hospitals clean against, on the other, the looming threat of antibiotic resistance. It is remarkable that this COF was effective against both gram-positive and gram-negative bugs under visible-light wavelengths. Antimicrobial surface coatings based on this COF may prove quite useful when incorporated into medical devices, bandages, and so on.
So far, not many MOFs have been designed specifically for photodynamic inactivation. A new surface-anchored, porphyrin-based MOF, dubbed "SURMOF", demonstrated antimicrobial activity via ROS against E. coli under visible light [136]. Potentially, this MOF could be deployed as a thin film and guest molecules such as antibiotics could be incorporated.
Other approaches do not involve photodynamic inactivation. One solution might be 4,4′-bipyrazolyl-Ag discs, which are effective against three different strains of bacteria, including Staphylococcus aureus [27]. In 2016, Mitra [59] demonstrated a COF with incorporated guanidinium ions for antimicrobial activity. This COF was synthesized via exfoliation without external stimuli, and was effective against a variety of bacteria, both gram-positive and gram-negative. The authors surmised that the intrinsic charge on the COF backbone facilitated the entry of the COF into negatively charged bacterial cell membranes, causing membrane rupture. As a side note, some researchers have reported success with photodynamic inactivation based on modified silica gel [137].
Other Uses
COFs and MOFs have been prepared for various other biomedical applications. Lohse synthesized an imine-based 2D COF for lactic acid absorption, employing PSM to create appropriate hooks without interfering with COF formation or π-π stacking [55]. Kandambeth made a hollow spherical COF to immobilize the protein trypsin, with potential applications in industry and the health sciences, possibly as a biosensor [57]. Targeting of nanomaterials was explored by ligating nanoMOFs to cyclodextrin [27], thus allowing these MOFs to bind specific receptors or even circumvent the immune system. MOFs have been synthesized with the use of cell walls (fungal and bacterial) as supports [138], facilitating size-selective, slow release of guests. Finally, a recent study employed a UiO-66 type Zr-based MOF for absorption of the nonsteroidal anti-inflammatory drug (NSAID) ketorolac tromethamine from water [139]. This reusable MOF, interestingly, used linkers sourced from recycled polyethylene terephthalate plastic bottles, paving the way for future green synthesis of frameworks.
A Brief Survey of Current Techniques
Modern imaging is an integral part of 21st-century medicine. Common advanced imaging techniques include computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound, and optical imaging. Several of these techniques require contrast agents, which may be ingested by the patient [140]. Approaches used when applying nanotechnology to diagnosis fall into two broad categories: electrochemical biosensing/imaging, and optical biosensing/imaging [12].
Considerations in Designing Materials for Biosensing and Diagnosis
Materials scientists must be mindful of certain constraints when designing contrast agents and other chemicals for imaging. Some fluorescent frameworks employ "turn-on" fluorescence, the emission of light upon reception of stimuli; others are of the "turn-off" type [140]. The path a nanoparticle takes through the body is dictated by its design: primarily its size (ideally 60-100 nm), shape (rods are preferable to spheres), and surface chemistry. Additional considerations apply for imaging and diagnosis of cancer. Scientists must consider a tumor's microenvironment-vascular, deep tissue, or circulating-which affects imaging options. In addition, the EPR effect desired for DDS opposes the design requirements for contrast agents. Whereas the EPR effect relies on accumulation of chemical agents near tumors, accurate imaging requires even distribution [140].
The ideal contrast agent for modern imaging should possess three fundamental properties. First, the agent should have a high rate of margination, that is, the ability of a nanoparticle to escape the blood flow and move toward the blood vessel wall. Larger particles tend to rely on convection; smaller particles, on diffusion. Second, such agents should have strong binding avidity to tumors for accurate visualization. Third, they should be rapidly internalized into cells and tissues.
Reliable, accurate diagnosis is critical, particularly for tumors. It is widely known that early detection and treatment of tumors, generally when they are less than a few millimeters in diameter, correlates with better survival. Blood vessels in young tumors differ greatly from those in metastasized, advanced tumors; younger tumors respond better to targeting, suffer less fluid leakage, and overexpress certain receptors, hence facilitating targeting by imaging and antitumor agents [140].
COFs for Biosensing and Diagnostics
To date, relatively few studies on COFs as biosensing agents have been conducted. Nevertheless, trends in the literature have begun to emerge [12]: luminescence is stronger in 2D COFs than in bulk COFs, and exfoliation from 3D to 2D is a valuable synthetic tool in this area. Two-dimensional porous frameworks show good potential [49] thanks to their large, rigid structure with π-π conjugation, strong fluorescence, spatial selectivity, and well-defined, regular pores. Wan reported the first luminescent, semiconducting COF in 2008 [18]. This mesoporous, arene-based, belt-shaped COF set a high bar for subsequent investigations with its high quantum yield (~60%), good on/off switching performance, and characterization as a p-type semiconductor.
Newer work has set new standards. Wang designed an imine-linked COF, layered on a silicon substrate, for DNA detection by measuring the change in impedance [141]. Peng made a 2D imine-linked COF as a selective and sensitive DNA detector [58]. Produced via exfoliation of bulk COFs, this material fluoresced upon encountering specific target DNA, which triggered hybridization with hairpin DNA ligated to the COF. Interestingly, this was the first reported observation of COF building blocks under the transmission electron microscope. More recently, Kong reported the in situ synthesis of imine-based COF LZU1 for electrochemical detection and separation of amino acids and NSAIDs [142]. The COF was layered in a column for open-tubular capillary electrochromatography to improve its performance. Wang [49] demonstrated PI-COFs for Fe3+ detection via metal-ion-induced fluorescence quenching. Lin used a relatively rare 3D COF containing photoelectric units [143]. This material effectively detected picric acid, an explosive, via turn-off fluorescence. This approach leveraged the advantages of 3D frameworks in such an application, e.g., high specific surface area, low density, and multitude of open sites. Wan [144] developed an imine-linked COF for photocurrent generation, similar to the 2008 endeavor mentioned above [18]. Finally, Li designed an imine-linked COF to detect the biomolecules DNA and ATP via turn-on fluorescence [145]. The investigators designed this COF, stable in human serum, as a multi-function sensor. In addition, they were able to distinguish single-base-pair mismatches in target DNA, potentially allowing for detection of mutated DNA (a root cause of cancer). The COF's sensitivity to ATP may yet prove useful for tumor detection.
MOFs for Biosensing and Diagnostics
MOFs, like their COF relatives, offer unprecedented flexibility in the design of targeted biosensing/diagnostic agents [146]. MOFs may be used to position catalysts and magnets [9]; a 2009 study [147] demonstrated homogenous inclusion of iron into a graphitic nitride network for generation of H 2 and activation of CO 2 . It is possible to engineer MOFs based on enzymes, such as ferritin, and MOFs have demonstrated utility as films (the likes of which are essential for biosensor design) [9]. MOFs have been studied for use with MRI [140,148] and PET [149]. A recent study reported the synthesis of graphene-MOF composites for enantioselective capture of drug intermediates in a magnetic field [150]. In light of the relatively scant quantity of work performed on COFs or MOFs as films, this area merits increased attention.
Theranostics
Theranostics means combining diagnosis and treatment; this is an area in which nanocarriers, especially MOFs (to a greater extent than COFs), have shown exceptional promise [151]. This approach allows clinicians to accurately target tumors or other areas of interest, thereby ensuring thorough treatment and lessening the required effective dose. For example, Zhao reported a combined magnetic resonance-contrast and DOX-carrying MOF that led to better therapeutic outcomes than free DOX alone [152]. Some groups have created core-shell theranostic MOFs [153], fluorescent, trackable MOF DDS [154], and targeted, MRI-trackable MOF DDS [93]. Other advances include imaging-trackable MOFs that can be employed for photodynamic therapy [155,156] as well as photothermal therapy [157]. One group even combined controllable drug delivery with MRI and photothermal therapy in a single MOF system [158]. However, it appears that not much work on COF-based theranostics has been reported.
Future Advances and Obstacles
Despite COFs' and MOFs' remarkable performance in the laboratory, turning these frameworks into safe and economical therapeutic and diagnostic agents is expensive, fraught with regulatory hurdles, and difficult from a technical perspective. In general, these problems characterize many attempts to bring new drugs and medical tools to the market.
Regulatory Difficulty
Any substance designed for use in medicine must demonstrate safety and efficacy in rigorous clinical trials before being approved for use on patients. In the United States, the Food and Drug Administration (FDA) is the regulatory body responsible for such approvals.
The FDA upholds rigorous standards and screens new products for safety. A higher margin of safety is required for imaging substances than for drugs, since the former may be administered to healthy individuals [140]. The components of nanotechnology (the various linkers and monomers discussed previously) must conform to published FDA lists, including the Generally Recognized as Safe (GRAS) list and the Everything Added to Food list [2]. Unfortunately, there is inconsistency in regulatory practices and risk tolerance between different countries; as early as 2015, for example, cocrystal treatment was available and approved in Japan, well before European or American approval [2].
Problems in Translating In Vitro/Small Animal Results into Clinical Results
The papers mentioned in this review largely reported in vitro studies and, in some cases, small animal trials. Unfortunately, it is inherently difficult to translate such studies into real-patient results. In vitro studies neglect hemodynamics and tumor microenvironments, for example [140]. Many experiments are not carried out in physiological media [91].
New technology, like COFs and MOFs, must demonstrate profound effectiveness, not mere sophistication and complexity; indeed, some new nanotechnologies perform worse than traditional treatments [5]. In the early 2010s, for example, thermo-sensitive liposomes found success in mouse trials but failed to demonstrate effectiveness at the subsequent clinical trial stage. Ultimately, medicinal outcomes depend on individual physicians' and patients' goals [5].
Existing Nanotechnology Based on Old Principles
Two examples of nanotechnology, based on classic chemical principles, have found resounding success in the clinic: Doxil and Abraxane. Doxil, a PEGylated liposome, delivers DOX as effectively as the free drug itself but with a higher margin of safety for the patient [5]. Liposomes have been known to the chemical community for some 60 years; PEG, for 40. Work on PEGylated liposomes began in the 1980s, but the product was not granted FDA approval until 1995, an unfortunate but typical lag time. Abraxane is an oil-water emulsion designed to deliver paclitaxel, a chemotherapeutic agent [5]. This was the first nanomedicine FDA-approved for metastatic breast cancer [96].
Nanotechnology is Not Necessarily Progress
In light of decreasing funding for research around the world, pressure is on scientists to publish (sometimes) unsubstantiated claims to drum up public support, and thus attract more research funding. Hence there is some misinformation and uncalled-for hype surrounding nanomaterials, including COFs and MOFs. The much-lauded EPR effect, while fundamental to the action of many frameworks discussed in this review, is not unique to nanoparticles, for example [5]. An instance of similar hype in the 20th century was glucose-dependent insulin delivery, a type of 2nd-generation DDS, which ultimately proved ineffective [5].
Drug Delivery Systems: Targeting Problems
Key to effective nanoparticle therapeutics and diagnostics is proper targeting. Nanoparticles can increase drug concentration around a tumor by as much as 100% to 400% [5]. However, most of the administered drug migrates to nontarget sites, and some new DDS do not offer high enough clearance (removal from the target area). Despite these drawbacks, compensation arrives in the form of progressively higher drug loading in new materials, and up to five-fold more effective delivery, as illustrated by taxol, abraxane, genexol, and the like. Perhaps future work on COFs and MOFs for drug delivery will more often incorporate advanced targeting, such as specific antigen-based targeting.
Toxicity
Some nanomaterials-particularly MOFs-are inherently toxic. This poses a problem for nanomaterials intended for medicinal use. However, MOFs may be designed with "biologically benevolent" metals with lower toxicity such as Ca, Mg, Zn, Fe, Ti, and Zr [26], and can even be designed using endogenous linkers that the body can metabolize.
Economics and Looking Forward
Paramount to drug companies is the continuous improvement of technology that has already earned FDA approval [39]. Such technologies represent low risk and good financial return in the high-stakes world of drug discovery. Therefore, drug manufacturers may prefer spending their research, legal, and lobbying resources on modifications of existing technology instead of novel therapies. Hence medical applications of COFs and MOFs might take decades, if ever, to attain commercial viability.
Critical Assessment of the Field and Conclusions
Covalent and metal organic frameworks have considerable potential in biomedicine. These nanocarriers could herald a new era of targeted, stimulus-sensitive drug delivery, safe and sensitive imaging, and combined imaging and treatment. The ability to predict COF/MOF stability, drug loading, and other properties in silico is significant. The in vitro and in vivo effectiveness of COFs and MOFs that combine drug delivery with other disease-fighting techniques like photodynamic therapy and photothermal therapy is particularly impressive. However, at this stage in the development of COFs and MOFs, it would be unrealistic to pronounce them the next great advance in medicine. It is far too early to begin clinical trials on any COF or MOF-based treatment system, and pharmaceutical companies wisely choose to fund high-return research and trials; there is no shortage of nanomaterials research to be funded, for that matter. These companies may choose to pour their research and marketing resources into variations on old technology instead of gambling on unproven nanotechnologies. COFs and MOFs have inherent drawbacks that might preclude researchers from receiving well-deserved funding-COFs are often designed with toxic linkers, and MOFs with toxic metals. Still, significant progress has been made with the biologically-friendly MOFs mentioned in this article.

This review of Covalent Organic Frameworks and Metal Organic Frameworks hints at their encouraging future in biomedicine. Both types are crystalline and porous, they can be designed with custom properties, and they may be functionalized with post-synthetic modification-advantages not typically shared with other nanomaterials. COFs and MOFs have been demonstrated as effective drug delivery systems, agents for photothermal and photodynamic therapy, heavy metal adsorbents, antimicrobial agents, contrast and diagnostics agents, and more.
They have shown promising results against various cancers and may be designed to specifically target cancer cells. However, creating crystalline, covalent extended structures like COFs remains fundamentally difficult, as does scaling the production of frameworks for commercialization. The exciting performance of MOFs in biomedical settings is tempered by the instability of some MOFs and the toxicity of many metal centers. Ultimately, any MOFs or COFs destined for the clinic must first meet stringent safety and efficacy standards set by regulatory agencies. Nevertheless, COFs and MOFs merit continued research attention if tomorrow's physicians are to be given the tools they deserve.
Author Contributions: G.C. reviewed the literature and did the analysis, handled methodology, and prepared and wrote the original draft; A.Y. provided conceptualization and supervision, and did writing-review and final editing.
|
2018-11-15T17:45:15.101Z
|
2018-11-01T00:00:00.000
|
{
"year": 2018,
"sha1": "94e39f61b3ada9940db4cc1a23cd3ca2f53fbdfe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/8/11/916/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94e39f61b3ada9940db4cc1a23cd3ca2f53fbdfe",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
211296304
|
pes2o/s2orc
|
v3-fos-license
|
Evidence of Neutrino Flux effect on Alpha Emission Radioactive Half-Life
Radioactive sources have presented annual periodic half-life changes in several accurate measurements, although customary practice holds that radioactive decay should be a physical constant for each radionuclide. Besides that, the Purdue measurements of Mn-54 decay rates indicated a response to solar X-ray flare events in 2006. The Mn-54 source emits neutrinos from the nucleus and therefore allows the interpretation that solar neutrinos can interact with this radiation source. In order to track more radiation count-rate responses to solar flare events, we built an experimental detector system for gamma radiation count-rate measurements, facing an Am-241 source. The system was placed in an underground laboratory, permanently locked to avoid any influence from unexpected radiation perturbations, and environmentally controlled in terms of temperature and clean-air flow in order to maintain detector stabilization. The detectors consist of NaI(Tl) scintillators for gamma radiation and total-counting reader devices for remote counting. Each radiation counting system was shielded by 5 cm of lead. One month prior to flare events from the Sun, all three detectors showed reasonably stable count-rates, which were tallied every 15 minutes. Five solar flares occurred and were reported by the SpaceWeatherLive website on the 12th and 13th of October 2018. The Am-241 system response to the solar flares was found to occur with a delay of around 20 days. We conclude that for alpha-emitting radioactive sources as well, the half-life is altered by changes of the neutrino flux from the Sun. Our measurements indicated that an alpha emitter was affected by the neutrino flux change from the Sun.
In previously published research, measurements of radioactive source half-lives presented an annual periodic change, despite the customary notion that radioactive decay should be considered a physical constant for each radionuclide [1]. The most significant publication is the Alburger et al. (1986) experiment [2], in which decay rates of Si-32 and Cl-36 were simultaneously measured using the same detector system, and annual variations of count-rates were observed to differ in both amplitude and phase. These two radioisotopes are beta emitters, thus the detector's internal response could not be the reason for the counting change. Hence Alburger et al. concluded that half-life varies due to an annual periodic effect. Since then, further experiments have revealed these annual periodic half-life variations; however, all radioisotopes involved were beta and/or gamma emitters [3][4]. Yet one recent publication by Sturrock, Steinitz and Fischbach [5] presents long-term (i.e., 10 years with 15-minute intervals) measurements of Rn-222 decay data analyzed using spectrograms of the measured gamma radiation that follows the Rn-222 alpha particle emission. Their work showed that Rn-222 alpha particle emission can present an annual periodic count-rate change. Solar X-ray flares occur when the Sun's activity rises, and it is evident that the 11-year sunspot cycle is related to solar activity; therefore there is a higher probability of solar X-ray flares occurring in the higher solar activity phase of the cycle [6-7]. We are currently at the lowest phase of the solar activity cycle, and although solar flare appearance cannot be accurately predicted, maximal solar activity should appear during the years 2024-2025.
The solar X-ray flare phenomenon is thought to be related to the particle transfer loop from the Sun to the corona [8]; in addition, since these flares can interact with Earth's ionosphere, several satellites have been launched in order to measure them and to report their appearance time and magnitude. The series of GOES (Geostationary Operational Environmental Satellites) satellites operated by the Space Weather Prediction Center of the National Oceanic and Atmospheric Administration provides measured solar X-ray flux daily data, which is reported in units of W/m^2 for each minute. This X-ray flux daily data is classified as A, B, C, M, or X according to peak flux magnitude, where class A, the lowest flux, is less than 10^-7 W/m^2, X is above 10^-4 W/m^2, and the difference from class to class is 10-fold.
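The decade-wide letter classes described above can be expressed as a simple threshold mapping. The sketch below is illustrative only, using just the class boundaries quoted in the text:

```python
def classify_flare(peak_flux_w_m2):
    """Map a GOES peak X-ray flux (W/m^2) to its letter class.

    Per the text: class A is below 1e-7 W/m^2, class X is at or above
    1e-4 W/m^2, and each class spans one decade (10-fold steps).
    """
    if peak_flux_w_m2 < 1e-7:
        return "A"
    if peak_flux_w_m2 < 1e-6:
        return "B"
    if peak_flux_w_m2 < 1e-5:
        return "C"
    if peak_flux_w_m2 < 1e-4:
        return "M"
    return "X"

# The flares observed in this study fell between classes B and C,
# i.e. peak fluxes of roughly 1e-7 to 1e-5 W/m^2.
```

For example, `classify_flare(2e-4)` returns "X", the class of the 2006 high-flux events, while `classify_flare(5e-7)` returns "B".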
Only one report has been published regarding the influence of solar flares on radioactive half-life [9]. In December of 2006, for the first time, high-flux X-ray flares (classes X-M) were found to be correlated with measured Mn-54 gamma radiation count-rate discrepancies. Mn-54 is an electron-capture radionuclide with a 312-day half-life [10]; its decay produces excited Cr-54, which emits gamma rays. The hypothesis that solar neutrino flux variations cause these count-rate discrepancies was presented by Jenkins and Fischbach [9]; although assumed to cause these decay-rate variations, the involvement of neutrinos in radioactive decay is not included in nuclear physics models. One difficulty of the hypothesis is the minor portion of neutrino flux change that occurs due to annual or semi-annual Earth orbital motion. Another difficulty is that neutrino involvement in nuclear decay belongs only to beta decay, since the neutrino is a lepton that interacts under the weak nuclear field. In addition, Sturrock, Steinitz, and Fischbach [5] found that the alpha decay rate is dependent on neutrino flux change; therefore neutrinos should also be assumed to interact with the strong nuclear field.
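The count-rate discrepancies at issue correspond to tiny fractional shifts in the decay constant. A short sketch (illustrative numbers only, not data from any of the cited experiments) of how a 0.1% perturbation of the decay constant propagates into the expected Mn-54 count rate over a month:

```python
import math

T_HALF_DAYS = 312.0                 # Mn-54 half-life quoted above
LAMBDA = math.log(2) / T_HALF_DAYS  # nominal decay constant (1/day)

def activity(a0, lam, t_days):
    """Exponential decay law: A(t) = A0 * exp(-lam * t)."""
    return a0 * math.exp(-lam * t_days)

a0 = 1.0e6   # counts per tally at t = 0 (illustrative)
t = 30.0     # days after the reference point

nominal = activity(a0, LAMBDA, t)
perturbed = activity(a0, LAMBDA * 1.001, t)  # lambda increased by 0.1%

# The fractional count-rate shift after 30 days is about lambda*t*1e-3,
# here ~7e-5, i.e. far below 0.1% -- hence the need for long, highly
# stable counting runs to resolve such effects.
shift = (nominal - perturbed) / nominal
```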
Two setups of radiation measurement systems were integrated in an underground laboratory: one facing Rn-222 produced by Ra-226 (100 kBq), with two NaI(Tl) detectors (2" diameter by 2" length), and the second consisting of an Am-241 (37 kBq) source with one NaI(Tl) detector. All three detectors were shielded with 5 cm of lead. The lab walls and ceiling were made of concrete 30 cm thick, and the whole system was surrounded by 5 cm of lead. Both Det. A and Det. B are NaI(Tl) detectors of 2"x2" (PM-11, manufactured by Rotem Industries). Fig. 1 shows a schematic description of the system. All detectors were connected to a data logger (DL), a CR800 manufactured by Campbell Scientific, in order to remotely collect and submit data to a computer, which itself is remotely controlled for access to the DL and the collected data.
Every 15 minutes, gamma counts from each detector were integrated and tallied. The lab was permanently locked to avoid any influence from unexpected radiation perturbations, and was environmentally controlled in terms of temperature and clean-air flow in order to reduce detector efficiency dependence.
A sealed container was used to contain the outgoing Rn gas during measurements.
The Ra-226 was positioned at the base of the sealed container, so the detectors could be exposed only to the Rn-222 gas. Rn-222 is a Ra-226 progeny; therefore, as recommended, we waited 12 days in order to achieve equilibrium between Rn-222 production and its decay. Before setting up the system as illustrated in Fig. 1, spectral gamma detection of the radiation source container, in which the radon gas accumulated, was performed using a NaI(Tl) 3"x3" spectrometry detection system. Laboratory temperature, measured throughout the experiment, was found to be stable at 18°C (±1°). Careful temperature stability is required for such delicate changes, since scintillation efficiency can be affected by temperature differences [13], even though peak counts are much more sensitive to these temperature changes compared to total-counts [14]. PM-11 background counts, measured over a two-day period, remained around the level of 700 cpm.
Am-241 counting stability was recorded for 150 hours and showed 0.09% counting uncertainty, which we then set as the limit-of-detection level for the second system; it was not dependent on temperature variation. The solar flares observed during the measurement period are listed in Table 1. Three count-rate dips are shown in Figure 2 (see red arrows); in the upper-most part of the figure is a graph of these solar flares after adaptation to the UTC+3 time-zone was made. Furthermore, the wide shape variations in the Rn-222 counting, when fit to a sinusoidal wave-envelope, showed a 9-hour repetition rate. Further investigation confirmed that, since radon is a heavy gas, its transport from the source to the counting system introduces exhalation beats. Full images from SpaceWeatherLive [12] of the solar flares observed during this study are presented (with permission) in Figure 3. The horizontal axis is UTC time; vertically, solar flux is represented by a log-scale classification (A0 - X). The highest flares were measured between classes B and C. The Am-241 count-rates fluctuated over the fourteen-day measurement period; these fluctuations are an established fact owing not only to the statistical nature of the alpha particle decay process, but also to the counting route, which might include scintillation, photocathode electron emission, and photomultiplier erraticism. A smaller valley taking place on October 24-25, whose minimum is about 6.5% below the average cpm, is also shown, leading us to presume that a physical signal was detected. Furthermore, our measurements show that the Am-241 count-rate response is much more delayed compared to the Rn-222 system response. The first dip in Figure 4 corresponds to the first, relatively short solar flare that occurred on October 12th, and the next valley is the response to the remainder of the solar flares listed in Table 1.
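The 0.09% stability figure quoted above is consistent with plain Poisson counting statistics, where the relative uncertainty of a tally of N counts is 1/sqrt(N). The count numbers below are illustrative assumptions, not values from the measurement:

```python
import math

def relative_uncertainty(counts):
    """Poisson relative uncertainty of a single tally: sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(counts)

def counts_for_uncertainty(rel):
    """Counts per tally needed to reach a given relative uncertainty."""
    return (1.0 / rel) ** 2

# Reaching 0.09% purely from counting statistics would require roughly
# 1.2 million counts in each 15-minute tally.
n_needed = counts_for_uncertainty(0.0009)
```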
The flares we observed are orders of magnitude lower than the first observed flare in 2006.
Rn-222 is part of a decay chain of Ra-226 characterized by alpha particle emissions as well as gamma rays emitted due to nuclear energy level population change. Rn-222 radioactivity consists of four alpha emissions and four beta(-) emissions; it is this Rn-222 complexity that prohibits us from concluding that its flare response is due exclusively to alpha emission. Regardless, this study provides the first evidence that solar flares affect the alpha emission type of nuclear decay.
Am-241 decay has a direct alpha particle emission accompanied by a single gamma-ray emission.
Because the produced Np-237 decay rate is very small (i.e., a half-life of 2.144 million years [10]), we chose to measure the Am-241 gamma rays that follow the alpha emission. Our measurements show changes in Am-241 decay due to solar flares, leading us to conclude that neutrinos from the Sun can interfere with the alpha emission process. Furthermore, we found that different half-lives give different delay times from the solar-flare event time-point: Rn-222 gives a few hours, Mn-54 gives around 7 days [9], and Am-241 gives around 20 days. These different delay times appear related to the decay rates; we therefore recommend investigating additional radioactive sources with different decay rates.
From the Sun, only neutrinos can actually penetrate to our experimental setup, as it is shielded by the Earth's atmosphere, the Earth's magnetic field and the lab shielding; it can thus be assumed that unstable nuclei interact with these neutrinos. Since Am-241 is an alpha emitter with which, theoretically, neutrinos should not be involved, our findings are unique and unexpected. This phenomenon indicates that new nuclear models should be considered for the alpha decay process, and that the neutrino should be included in models of the internal nucleon structure.
|
2019-02-26T07:36:28.000Z
|
2019-02-26T00:00:00.000
|
{
"year": 2019,
"sha1": "8b44812cfba36bc90d8a4443d6b1bba4419ea74e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c934a53d0d9f9a1901579163c58802caddf265e2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
4011260
|
pes2o/s2orc
|
v3-fos-license
|
[18F]FE@SNAP—a specific PET tracer for melanin-concentrating hormone receptor 1 imaging?
Background The melanin-concentrating hormone receptor 1 (MCHR1), which is highly expressed in the lateral hypothalamus, plays a key role in energy homeostasis, obesity and other endocrine diseases. Hence, there is a major interest in in vivo imaging of this receptor. A PET tracer would allow non-invasive in vivo visualization and quantification of the MCHR1. The aim of the study was the ex vivo evaluation of the MCHR1 ligand [18F]FE@SNAP as a potential PET tracer for the MCHR1. Methods [18F]FE@SNAP was injected directly into the jugular vein of awake naïve rats for ex vivo brain autoradiography, biodistribution and additional blood metabolite analysis. Blocking experiments were conducted using the unlabeled MCHR1 ligand SNAP-7941. Results A high uptake of [18F]FE@SNAP was observed in the lateral hypothalamus and the ventricular system. Both regions were significantly blocked by SNAP-7941. Biodistribution evinced the highest uptake in the kidneys, adrenals, lung and duodenum. Specific blocking with SNAP-7941 led to a significant tracer reduction in the heart and adrenals. In plasma samples, 47.73 ± 6.1 % of a hydrophilic radioactive metabolite was found 45 min after tracer injection. Conclusions Since [18F]FE@SNAP uptake was significantly blocked in the lateral hypothalamus, there is strong evidence that [18F]FE@SNAP is a highly suitable agent for specific MCHR1 imaging in the central nervous system. Additionally, this finding is supported by the specific blocking in the ventricular system, where the MCHR1 is expressed in the ependymal cells. These findings suggest that [18F]FE@SNAP could serve as a useful imaging and therapy monitoring tool for MCHR1-related pathologies.
Background
The melanin-concentrating hormone receptor 1 (MCHR1) plays a key role in energy homeostasis and obesity [1,2]. Furthermore, it has been implicated to be involved in the pathogenesis of diabetes [3,4] and inflammatory processes in the gut [5]. Since obesity affects over 600 million individuals worldwide (as estimated by the World Health Organization in 2014 [6]), there is extensive pharmaceutical interest in the development of anti-obesity drugs. It has been shown that MCHR1 antagonists reduce body weight in rodents [7]. Nevertheless, none of these molecules reached market authorization so far. A MCHR1-positron emission tomography (PET) ligand could support dose selection of MCHR1 antagonists [7] and, therefore, would be a valuable tool for drug development. PET allows noninvasive in vivo visualization and quantification of receptor systems, as well as monitoring and following hormone receptor status and related pathologies in vivo. Besides the application of a MCHR1-PET tracer for compound dose selection of potential MCHR1targeting drugs, another potential implication for obesity patients could be the in vivo quantification of the MCHR1-which is predominantly expressed in the lateral hypothalamus [8]-as a risk factor and early diagnostic tool for insulin resistance. Furthermore, a MCHR1-PET ligand could help to better understand the endocrine status and guide pharmacological intervention via the MCHR1.
So far, based on the specific MCHR1 antagonist SNAP-7941 [9], [11C]SNAP-7941 was developed as the first PET tracer for the MCHR1 and was evaluated in a preclinical study [10,11]. Furthermore, we introduced the 18F-fluoroethylated analogue [18F]FE@SNAP (Fig. 1) as an alternative potential PET tracer for the MCHR1 [12,13]. [18F]FE@SNAP showed a high affinity (Kd = 2.9 nM, evaluated on CHO cells expressing the human MCHR1) and selectivity (Ki > 1000 nM on the second MCH receptor, MCHR2) towards the MCHR1 [13].
After successful in vitro evaluation, the next logical step in the preclinical evaluation process was the performance of ex vivo experiments. Hence, the purpose of the present study was to confirm the potential of [18F]FE@SNAP for specific MCHR1 brain imaging in healthy rats. Therefore, [18F]FE@SNAP was administered IV for ex vivo brain autoradiography and additionally to study biodistribution and to search for potential circulating metabolites. It is noteworthy that IV application was performed through the jugular vein, allowing animals to be awake and conscious, hence excluding the well-known significant anaesthetic influence on imaging results [14][15][16].
Animals
Sixteen-week-old male Sprague-Dawley rats (436 ± 79 g, mean ± SD) were kept under controlled environmental conditions on a 12-h light/12-h dark cycle, with free access to tap water and a standard laboratory animal diet (Alleinfutter für Ratten und Maeuse sniff R/M-H, sniff Spezialdiaeten GmbH; Soest, Germany).
For implantation of indwelling catheters into the right jugular vein, rats were anaesthetized by an intraperitoneal injection of ketamine-xylazine, supplemented if necessary with inhalative sevoflurane [17]. Tracer experiments were performed no earlier than 7 days after surgery, when all rats were within ±10 % of their pre-surgical body weight. Since these catheters allowed IV injections into conscious, freely moving rats, any influence of anaesthesia was excluded in these experiments. All procedures and protocols using animals were approved by the Institutional Animal Care and Use Committee of the Medical University of Vienna, Austria, as well as by the Austrian Ministry of Science, Research and Economy (BMWF-66.009/0268-II/3b/2012).
IV study including biodistribution and metabolite analysis
Conscious and freely moving rats of the baseline and blocking groups were always examined simultaneously: rats of the baseline group (n = 3) received vehicle (400 μL), and rats of the blocking group (n = 3) received SNAP-7941 (15 mg/kg; freshly dissolved in 400 μL) via the jugular vein 30 min prior to tracer application. Then, 51.33 ± 26.2 MBq of [18F]FE@SNAP (specific activity 12.3-43.1 GBq/μmol; radiochemical purity ≥95 %; 30-100 μL) was administered to all rats via the jugular vein. After 45 min, rats were sacrificed by IV ketamine injection and decapitated, and the brains were removed and immediately quick-frozen in isopentane (−45°C) for ex vivo autoradiography. Other organs including the eyes, tongue, muscle, epidermal white adipose tissue (WATep), heart, lung, stomach, pancreas, liver, duodenum, colon, spleen, kidneys, adrenals, testis, bladder and bone, as well as blood and urine, were removed, weighed and measured in a gamma counter (2480 WIZARD2, PerkinElmer). Radioactivity concentrations were normalized to dose and weight and expressed as percent injected dose per gram (%ID/g). To determine significant differences, a two-tailed t test (α = 0.05) was performed using the statistics add-on in Microsoft Excel® 2013. A value of P < 0.05 was considered significant.
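The %ID/g normalization described above can be sketched as follows; the organ numbers are hypothetical illustrations, and counts are assumed to be decay-corrected to a common reference time:

```python
def percent_id_per_gram(organ_counts, organ_weight_g, injected_dose_counts):
    """Percent injected dose per gram:
    (organ activity / injected activity) / organ weight * 100.
    """
    return (organ_counts / injected_dose_counts) / organ_weight_g * 100.0

# Hypothetical kidney sample: 2% of the injected activity in a 1.6 g organ.
kidney = percent_id_per_gram(organ_counts=20_000,
                             organ_weight_g=1.6,
                             injected_dose_counts=1_000_000)
# kidney -> 1.25 %ID/g
```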
For analysis of potential circulating metabolites, blood samples from the baseline group were collected into heparinized tubes and immediately stored on ice before processing. The blood was centrifuged (Hettich Rotanta/TRC; 3400×g, 4 min) to separate cellular components. Sample cleanup was performed by vortexing plasma with the equivalent amount of acetonitrile and by subsequent centrifugation (Hettich Universal 30RF; 23,000×g, 3 min) to remove precipitated proteins. The obtained supernatant was applied to radio-thin-layer chromatography (radio-TLC silica gel plates, mobile phase acetonitrile/water 70/30 v/v, application volume 2 μL on origin) and analysed via a Canberra-Packard Instant Imager.
Ex vivo autoradiography
Whole rat brains (n = 6) were cut in a cryo-microtome (Microm HM 560, Thermo Scientific) into 50-μm-thick slices and thaw-mounted onto superfrost slides (Menzel-Gläser SUPERFROST® PLUS, Thermo Scientific). Samples were placed on Phosphor Imager plates (Multisensitive Phosphor Screens Long Type MS, PPN 7001724, PerkinElmer) for an exposure period of 18 h and then analysed with a Cyclone Phosphor Imager (Cyclone Plus Storage Phosphor System, PerkinElmer). Data analysis was performed with OptiQuant® data processing software Version 5.0. Due to high MCHR1 expression, special emphasis was laid on the hypothalamus [8] and the ventricular system [18,19]. Hence, regions of interest (ROIs) were drawn for the hypothalamic region and the ventricle, and additionally, a non-target region and the whole tissue were selected (Fig. 2). To facilitate comparison, manually defined template ROIs fitting all slices (30 slices/brain; bregma −1.0 to −2.5 mm) were applied. ROIs resulted in normalized digital light units per square millimeter (DLU/mm^2); values were expressed as the ratio of the MCHR1-rich region to the reference region. In detail, ratios of hypothalamus/non-target and ventricle/non-target were calculated. Significant differences were calculated as described before.
Preliminary small-animal imaging
An anaesthetized rat (1.5-2.5 % isoflurane) was immobilized in a multimodal animal carrier unit (MACU; medres®-medical research GmbH, Cologne, Germany) and maintained at a body temperature of 37°C throughout the whole experiment. [18F]FE@SNAP (47.64 ± 1.23 MBq) was injected as a bolus via the lateral tail vein, and dynamic PET imaging (Siemens Inveon preclinical μPET/SPECT/CT system) was performed over 60 min. Immediately afterwards, T1-weighted high-resolution axial, coronal and sagittal brain MRI scans were performed using a Bruker BioSpec 94/30 USR small-animal MR system (Bruker BioSpin GmbH, Karlsruhe, Germany).
Results
Ex vivo autoradiography after IV application of [ 18 F]FE@SNAP (baseline group, n = 3) showed tracer uptake into the rat brains, with increased accumulation in the lateral hypothalamus and the ventricular system. In the blocking group (n = 3), whole-brain uptake was significantly higher than in baseline animals. Specific blocking with 15 mg/kg SNAP-7941 evinced a significant reduction of tracer uptake in the lateral hypothalamus and in the ventricular system (Fig. 3). The ratios are shown in Table 1.
The results of the biodistribution (n = 3 each for the "baseline" and the "blocking" group; 45 min after injection) are shown in Fig. 4. Specific blocking with 15 mg/kg SNAP-7941 led to a significant tracer reduction in the heart and adrenals. Other organs known to express MCHR1 to some extent (eye, tongue, soleus muscle, pancreas and colon) showed a trend towards reduced uptake under blocking conditions. Analysis of metabolites in the blood (n = 3) evinced 51.50 ± 5.5 % of the parent compound and 47.73 ± 6.1 % of a hydrophilic radioactive metabolite (probably [18F]fluoroethanol) 45 min after tracer application.
Fig. 2 Scheme of the ROIs for calculation of ratios. 1 (yellow): ventricular system; 2 (green): hypothalamic area; 3 (pink): non-target region; and 4 (orange): whole brain. It is noteworthy that ROIs 1 and 2 include the target areas as well as some background areas, due to varying target size throughout the different brain levels (bregma).
Preliminary small-animal PET measurements in a healthy rat under baseline conditions showed a high tracer uptake in the ventricular system (Fig. 5).
Discussion
Specific MCHR1 imaging is of high clinical interest for status monitoring in endocrine pathologies like obesity and diabetes. A PET tracer for MCHR1 comprises several advantages for clinicians and patients as the in vivo monitoring and following of the hormone receptor status and related pathologies. Moreover, it could support dose selection of MCHR1 antagonists in drug development [7].
The focus of this study was to investigate the potential of [18F]FE@SNAP to specifically label MCHR1-rich regions like the lateral hypothalamus [8]. Therefore, [18F]FE@SNAP was injected IV into healthy and conscious rats, followed by ex vivo brain autoradiography, biodistribution and analysis of potential blood metabolites.
In order to avoid the well-known effects of anaesthesia [14][15][16], [18F]FE@SNAP was administered directly into the jugular vein of awake and conscious rats. IV application of [18F]FE@SNAP showed a high and specific uptake in the ventricular system-where the ependymal cells have recently been shown to express the MCHR1 [18,19]-and in the hypothalamic region, suggesting specific MCHR1 targeting and visualization. Furthermore, blocking of the MCHR1 with SNAP-7941 led to a significantly increased overall tracer uptake into the brain, whilst significantly reducing tracer uptake in the presumably MCHR1-rich regions. Since MCHR1-rich regions were blocked with the unlabeled SNAP-7941 and specific MCHR1-targeted binding was therefore inhibited, an unspecific tracer uptake in the whole brain was observed.
High specific blocking of the tracer in the brain was not hampered by observed hydrophilic blood metabolites.
Apart from a high specific central MCHR1 uptake, the blocking experiments hinted at an MCHR1-related uptake of [18F]FE@SNAP also in the adrenals, eyes, tongue, muscle, pancreas and colon, which is in line with the literature [3,20,21]. However, statistical analysis of the %ID/g of these organs only revealed specific blocking in the adrenals. The specific blocking in the heart raises the question of whether MCHR1 might be expressed there too.
With regard to future application of the tracer, it is promising that no defluorination and insignificant uptake of 18 F-fluoride into the bone were observed.
It is noteworthy that throughout the jugular vein injection, rats were conscious during tracer application and distribution; hence, potential influence of anaesthesia was completely excluded. Therefore, specific binding and blocking resulted directly from the investigated compounds.
The high resolution of ex vivo autoradiography using a Phosphor Imager allowed identification of specific
Conclusions
Since the MCHR1 is predominantly expressed in the lateral hypothalamus as well as in the ependymal cells of the third ventricle epithelium, a tracer for the MCHR1 should show specific uptake and be significantly blocked by an unlabeled ligand in these areas.
[ 18 F]FE@SNAP proved these characteristics, which provides strong evidence that it is a highly specific agent for MCHR1 imaging. Involvement of MCHR1 was reported in diabetes and obesity, and MCHR1 has also been related to asthmatic seizures, colitis, depression, anxiety and promotion of sleep. Against this background, [ 18 F]FE@SNAP could serve as a useful tool for imaging and therapy monitoring for MCHR1-related pathologies.
|
2016-05-18T09:47:14.549Z
|
2016-04-01T00:00:00.000
|
{
"year": 2016,
"sha1": "d48a781808ec2b2a3e321dad83c9c4b9f35909e6",
"oa_license": "CCBY",
"oa_url": "https://ejnmmires.springeropen.com/track/pdf/10.1186/s13550-016-0186-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d48a781808ec2b2a3e321dad83c9c4b9f35909e6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
}
|
197347750
|
pes2o/s2orc
|
v3-fos-license
|
ONE-POT SYNTHESIS OF TETRA-SUBSTITUTED IMIDAZOLES ON SILICA GEL UNDER MICROWAVE IRRADIATION
– A new procedure for synthesis of tetra-substituted imidazoles was developed. A series of imidazole derivatives including six new compounds were synthesized by this procedure via condensation of benzoin, aromatic aldehyde, amine and ammonium acetate in the presence of silica gel under solvent-free microwave irradiation.
INTRODUCTION
The imidazole nucleus is the main component of some important biological molecules, such as histidine, vitamin B12, purines, histamine and biotin. It is also present in many natural and synthetic drug molecules, for example cimetidine, azomycin and metronidazole. 1 Compounds with imidazole ring systems have many pharmaceutical activities and play important roles in biochemical processes. 2 Because imidazoles are versatile intermediates in the manufacture of pharmacologically active compounds, 3 many methods have been developed for the synthesis of substituted imidazoles. Typical methods include: condensation of diones, aldehydes, primary amines, and ammonia; 4 condensation of benzoin or benzoin acetate with aldehydes, primary amines and ammonia in the presence of copper acetate; 5 four-component condensation of diones, aldehydes, primary amines, and ammonium acetate in acetic acid under reflux conditions; 6 cyclization of sulfonamides with mesoionic 1,3-oxazolium-5-olates; 7 condensation of β-carbonyl-N-acyl-N-alkylamines with ammonium acetate in refluxing HOAc; 8 conversion of N-(2-oxo)amides with ammonium trifluoroacetate under neutral conditions; 9 and iron- and copper-catalyzed reaction of benzylamine with carbon tetrachloride. 10 Recently, microwave-assisted synthesis in organic chemistry has been growing quickly. 11
Many organic reactions proceed much faster and give higher yields under microwave irradiation compared to conventional heating. Supported reagents on solid surfaces have been widely employed in organic synthesis. Reagents impregnated on solid materials present advantages over conventional solution-phase reactions, owing to the good dispersion of active sites, which leads to improved reactivity and milder reaction conditions. A solvent-free reaction with microwave irradiation reduces reaction time and provides easier work-up procedures. In contrast to the traditional application of a solid phase in synthesis, microwave technology does not involve linking the solid phase to the reactants and cleaving it from the products. The recycling of the inorganic solid support makes the procedure more environmentally benign.
Two research groups have recently reported a one-pot condensation of benzil, aldehyde, amine and ammonium acetate on alumina or silica solid support under microwave irradiation. 3,12 It was found that microwave irradiation and solid support in the solventless reaction considerably shortened the reaction time and greatly reduced waste production. 13 In our laboratory, some imidazole derivatives have been synthesized with microwave assistance. 14 It was interesting to find that the use of benzoin instead of benzil as starting material in the condensation also produced the desired products efficiently (Scheme 1). To our knowledge, benzils are usually prepared from benzoins using various toxic oxidants, such as thallium nitrate, ytterbium(III) nitrate, ammonium nitrate-copper acetate, clayfen, ammonium chlorochromate-alumina, nickel acetate, iron(III) chloride and bismuth(III) nitrate-copper(II) acetate. 15 Obviously, the direct use of benzoin rather than benzil in the synthesis of imidazoles represents a significant step toward greener chemistry.
RESULTS AND DISCUSSION
Tetra-substituted imidazoles were synthesized under microwave irradiation in good yield (Table 1). In order to avoid overheating, two 10-minute irradiations were performed. All products, six of which (b, c, e, f, g, o) were synthesized for the first time, were characterized by melting points, elemental analyses, IR, MS, 1 H NMR and 13 C NMR spectroscopy.
From Table 1, it can be seen that this procedure can be applied to a broad range of substituted aromatic and aliphatic amines and aromatic aldehydes. The results indicated that good yields were obtained when p-methoxy- and p-methylbenzaldehyde were used as starting materials (Entries a, c, l, n); however, when p-dimethylaminobenzaldehyde was used, the yield was lower (Entry o) but still acceptable.
Interestingly, it was found that benzoin could be used in the condensation to yield the imidazole in the absence of any oxidizing reagent. A control experiment of the condensation of benzoin, aldehyde, amine and ammonium acetate under conventional acetic acid reflux conditions was run, and no corresponding imidazole was isolated from the reaction mixture. This is in agreement with the previous finding that an oxidizing reagent such as Cu(II) is needed in the conventional condensation. 5 This finding demonstrates that the microwave irradiation and the silica gel support play important roles in the reaction. Interestingly, Balalaie et al. 16 recently reported that benzoins were oxidized on zeolite A using microwave irradiation under solvent-free conditions, with air functioning as the oxidant in the conversion.
It was proposed that zeolite A was essential in Balalaie's benzoin oxidation. In our study, simple silica gel was adequate for a rapid and clean oxidation of the condensation mixture to the imidazole, although at the present stage we cannot conclude whether benzoin is oxidized before condensation or after partial condensation.
Despite the fact that one additional chemical transformation, an oxidation, is involved in the condensation to the imidazole, the yields of the reaction reported herein are as good as those previously reported (see Table 1). 3,12 Usyatinsky and Khmelnitsky 12 reported that a small amount of acetic acid was needed to accelerate the condensation; in our work, however, acetic acid was not necessary.
Based on the results described above, we conclude that a facile and environmentally benign one-pot synthesis of tetra-substituted imidazole derivatives from benzoin, aldehydes, amines and ammonium acetate has been developed.
EXPERIMENTAL
All reported yields are isolated yields after column chromatography. MS spectra were run on a GCT-CA064 spectrograph. All melting points are uncorrected and were measured on a WRS-1A melting point apparatus. IR spectra were run on a Bruker spectrophotometer and are expressed in cm -1 (KBr). 1 H NMR and 13 C NMR spectra were recorded on an FT-NMR Bruker AV-400 (400 MHz) or Bruker AV-300 (300 MHz) in DMSO-d 6 with TMS as internal reference. Elemental analysis was performed on an Elementar Vario EL-III. All the reactions were conducted in a commercial microwave oven (Galanz WD800B, 2450 MHz, output power 800 W).
Typical procedure for the synthesis of tetra-substituted imidazoles: A mixture of silica gel (15.4 g) and ammonium acetate (7.7 g, 100 mmol) was ground fully. A solution of benzoin (1.06 g, 5 mmol), aldehyde (5 mmol) and amine (5 mmol) in 20 mL of ethyl acetate was added to the mixture and mixed thoroughly. The solvent was allowed to evaporate under reduced pressure and the dry residue was irradiated at 160 W (20% power) at 120 °C for 10 min. The mixture was cooled, stirred and irradiated for another 10 min. The mixture was then cooled to room temperature and extracted with ethyl acetate (3 × 50 mL). The combined organic solution was filtered and the solvent was evaporated on a rotary evaporator. The product was purified by column chromatography.
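As a quick arithmetic check of the quoted stoichiometry, the masses and molar amounts in the procedure can be cross-verified; the molar masses below are standard rounded values, not figures taken from the paper.

```python
# Cross-check: mmol = mass (g) / molar mass (g/mol) * 1000.

MOLAR_MASS = {
    "benzoin (C14H12O2)": 212.25,        # g/mol, rounded
    "ammonium acetate (C2H7NO2)": 77.08, # g/mol, rounded
}

def mmol(mass_g, molar_mass_g_per_mol):
    """Convert a mass in grams to millimoles."""
    return mass_g / molar_mass_g_per_mol * 1000.0

print(round(mmol(1.06, MOLAR_MASS["benzoin (C14H12O2)"]), 1))        # ~5.0 mmol
print(round(mmol(7.7, MOLAR_MASS["ammonium acetate (C2H7NO2)"]), 1)) # ~99.9 mmol
```

Both results agree with the 5 mmol of benzoin and 100 mmol of ammonium acetate stated in the procedure.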
|
2019-04-06T13:06:08.063Z
|
2004-05-25T00:00:00.000
|
{
"year": 2004,
"sha1": "b14eb49e8503ffcecda6b1f967d3cdab980e2824",
"oa_license": null,
"oa_url": "https://doi.org/10.3987/com-03-9896",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ca8d99e3a4258f2a51313d87ae20b66b4a117d10",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
23030471
|
pes2o/s2orc
|
v3-fos-license
|
Exposing medical students to expanding populations
Physicians are required to advocate for and counsel patients based on the best science and the interests of the individual while avoiding discrimination, ensuring equal access to health and mental services. Nonetheless, the communication gap between physician and patients has long been observed. To this end, the Institute for the Public Understanding of Health and Medicine of the Rutgers University New Jersey Medical School has expanded its efforts. This report describes two new programs: a legacy lecture series for medical students and an international “experience”, in Huancayo, Peru, for medical students and faculty. The MiniMed outreach program, now in its ninth year and first described in this journal in 2012, was designed to empower the powerless to communicate more effectively with clinicians, thus improving both the effectiveness of the physician–patient relationship and health care outcomes. The approach of the two new programs and their effects on patients, particularly the underserved, and medical students and faculty, are outlined in the following article.
Background
The communication gap between patients and their physicians has long been observed, 1-3 as has its negative consequences. 4 Sociopolitical changes in the last several decades have effected significant shifts in the ethical imperatives associated with the physician-patient relationship. Moral and value decisions can no longer be made solely by physicians on behalf of their patients. Today's physician must be adept at dealing with ethical conflicts and reconciling his or her own beliefs with those of the patient. Pellegrino notes that "… the ancient precept, primum non nocere, must now apply to the integrity of the patient's value system as well as his body." 3 Furthermore, physicians must be open to discussing moral choices with patients. Medicine's traditional moral authority is no longer universally accepted. Thus, it is imperative that medical educators help succeeding generations of physicians adapt to these changes, and promote the interests of the patient regardless of financial circumstances, socioeconomic status, or the health care setting. 5 Physicians must continue to advocate for and counsel each patient based on the best scientific knowledge available and the interests of the individual, and work to eliminate discrimination, thus ensuring patients' equal access to health and mental health services in the community. 5,6 As an example, physicians practicing in prison settings have certain ethical imperatives, especially in view of their role as agents of both the prisoner and the correctional system, but should nevertheless strive to base their medical judgments on providing appropriate care for the individual. 5 Allowing medical students to teach the underserved can help reinforce these values. Medical students often invest time and effort in activities that will offer them the highest yield in terms of income and prestige. Devoting time to courses on medical ethics does not often fit their perceived requirements. 7
Disparities in health care can begin to be addressed by having students work with minority and disadvantaged populations. The American College of Physicians is on record as advocating "…increased resources for identifying and implementing educational approaches and behavior change strategies designed for minority audiences and the providers who treat them." 8 Affording medical students the opportunity to teach and interact with culturally, economically, and socially removed social groups in nonthreatening contexts can promote empathy, a critical variable in the ethical delivery of health care.
This report describes the expansion of the efforts of the Institute for the Public Understanding of Health and Medicine of the Rutgers University New Jersey Medical School (NJMS), first reviewed in this journal by Lindenthal and DeLisa. 9 Since the publication of the original paper, two programs have been added: a legacy lecture series for medical students and an international "work experience", in Huancayo, Peru, for medical students and members of the faculty. This report further describes our outreach programs, now in their ninth year.
Rutgers NJMS approach
To enable our students to fulfill the aforementioned ethical imperatives, we expanded our MiniMed program. It now includes educational programs for groups outside of our school who would otherwise be unable to participate in the traditional MiniMed school. Medical students now prepare and deliver lectures during the academic year to male and female inmates lodged at Kintock Group facilities and to residents of the Newark Renaissance House. A portion of the final session is devoted to a discussion of community health facilities, and a copy of each PowerPoint presentation is provided to attendees, as is a list of area health clinics. Administrative tasks, including arrangements for the attendance of residents and inmates, are the responsibility of Kintock Group and Newark Renaissance House administrators. Medical students and administrators determine lecture topics.
Outreach programs Newark Renaissance house
The Newark Renaissance House was established in 1975 as a nonprofit residential therapeutic community with a focus on chemically dependent women and children, who are among the least empowered members of our society. The programs offered at the Newark Renaissance House serve the drug therapy needs of infants, children, adolescents, women, and men, and fall under the jurisdiction of the New Jersey Division of Child Protection and Permanency Service, as well as the Division of Mental Health and Addiction Services. Since its founding, programs have been added to address the spread of human immunodeficiency virus (HIV) infection and acquired immune deficiency syndrome (AIDS). Clients usually remain in residence between 6 and 8 months. Medical students working in the Rutgers NJMS outreach program provide 16 60-minute lectures to the 38 clients enrolled in the adolescent residential program, for young men between the ages of 15 and 17, as well as to the 23 clients in the women and children's residential program devoted to the substance abuse needs of pregnant women and their children.
the Kintock group
With the United States harboring more prisoners than any other nation, affording medical students the opportunity to instruct inmates in this setting is very useful. In operation since 1985, the Kintock Group is a nonprofit organization under contract with the Federal Bureau of Prisons, the New Jersey State Parole Board, and the Department of Corrections. In addition to the program in Newark, NJ, there are two more Kintock Group programs operating in Bridgeton and Paterson, NJ, and another in Philadelphia, PA. While each establishment has its own set of programs, the mission of all is the same: to serve as a conduit between prison and release into the community by preparing individuals to care for themselves, to practice responsible behavior, and to enter the workforce. Medical students are involved in educating 100 of the 400 residents, referred to as "parolees", in the final phase of their incarceration. About 10% of the resident population is female. The average length of stay varies between 3 and 6 months. See Figure 1 for a sample of the lectures given to inmates.
MiniMed International
Increasing immigration from Latin American countries has motivated us to join other medical schools in providing learning experiences for our medical students in the southern hemisphere. Within the last 3 years, we formed a novel program involving both students and faculty members. The program was launched in collaboration with medical colleagues in Huancayo, Peru, the capital of the Junin province with a population of 38,000, and with a well-established civic organization known as Chusi Wanka. The program affords Rutgers NJMS students a 4-week health-related work experience between their first and second year of medical school. This effort provides an outstanding opportunity for medical students to learn the rudiments of health care delivery in an emerging country burdened with many preventable diseases. This is accomplished by having our medical students shadow clinicians and participate in "rounding" at the Daniel Alcides Carrión and EsSalud Hospitals, and by attending lectures at the Escuela de Medicina de la Universidad Nacional del Centro del Peru. The medical students also participate in other activities: they teach young children in "HIV orphanages" about the basics of good health and disease prevention, assist their Peruvian medical student peers in learning English, and provide health-related lectures to the citizens of Huancayo.
Plans are underway to introduce a research component into the medical student experience in Peru, with faculty participation from both Peru and the United States. Paterson, NJ, about 18 miles from Newark, has a large Peruvian population, potentially allowing for comparative analyses.
With the second phase of this program, members of our faculty travel to Huancayo during the academic year to perform medical and surgical rounds, provide a week-long series of lectures to residents, and offer medical consultations. This program has proved refreshing to both our Peruvian colleagues as well as to our seasoned clinicians. Traveling with us is a librarian, whose lectures address accessing evidence-based medical information from the Internet. Participating faculty have found the experience valuable. Figure 2 provides a list of recent lectures presented by faculty members in Huancayo.
Rationale
By providing medical students, early in their training, with opportunities to communicate with diverse underserved populations in the role of instructor, we are attempting to raise perceptions while imparting the rudiments of health education. The classroom provides a milieu conducive to breaking down barriers to communication. Medical students learn to appreciate "where their audience is coming from" and, more specifically, lay students' attributes and challenges, and their own mistaken perceptions and projections, while the route to becoming a physician is gradually elucidated. One commonly hears medical students reflecting on the intelligence, fund of knowledge, and sophistication regarding health-related matters of the homeless, inmates, and wayward adolescents. This experience can help dispel many misconceptions, for example, about how the inmates became estranged from society. Medical students also learn about the strategies employed by drug abusers as they seek to satisfy their addiction: stunned silence followed after one medical student asked a prisoner how she was able to afford her alcohol addiction, and she revealed that she would crack open and drink the alcohol contents of cans of hairspray purchased for $0.99. Medical students were astounded to learn that a woman incarcerated for "vagrancy" came from an advantaged social class and that prisoners can have siblings who are professors in medical schools.
A century ago, sociologist Charles Horton Cooley described the "looking glass self," arguing that individuals' self-conception derives from how they believe others view them. 10 Interacting in nonthreatening academic settings on the "home turf " of the individual challenges all concerned to reevaluate their self-concepts and, hopefully, encourages empathy, a significant ingredient in enhancing the efficacy of clinical care.
Our experience with these outreach programs has demonstrated their importance in exposing medical students to diverse populations as well as in instructing them in the art of teaching. The thought of being charged with educating these groups is daunting at first to students, but after several weeks, comfort and confidence levels rise greatly in the medical students and their students. Over time, medical students begin to empathize with their audiences by virtue of the latter's eagerness to learn and the personal experience they bring to the sessions. By the end of the semester, members of each group view one another with heightened respect and with the appreciation that they still have much to learn from one another. We suspect that this experience will help fortify future clinicians who will be charged with patients from increasingly diverse backgrounds.
Role modeling with legacy lectures
Role modeling is an inherent component of education and of the design of the Institute for the Public Understanding of Health and Medicine. The 50 preceptors in our in-house programs, their 18 peers in the outreach programs, and the ten new medical students per year involved in MiniMed International are required to attend several legacy lectures annually, provided by senior faculty members. Legacy lecturers are drawn by the medical students from among senior members of the Rutgers NJMS faculty whose careers span 25 years; the objective is to deliver an informal discourse describing some of the many challenges posed by a lifetime career in health and medicine. The central theme of the legacy lectures focuses on both personal and professional life experiences. Lecturers are given an opportunity to reflect candidly on missed opportunities and those pursued with success, mistakes that should have been avoided and mistakes from which they learned, recognition deemed appropriate and other activities unrequited, and the management of failure, as well as additional experiences of the lecturer's choosing. The legacy lecturer is provided with a standardized series of questions at least 2 months in advance to facilitate adequate reflection (Figure 3); a sample question asks lecturers to identify someone in their life (family, mentors, professionals in the medical field) who has inspired them and how. A medical student moderates the 75-minute session.
Conclusion
The mantra of our MiniMed School is that an educated patient is the doctor's best friend. Medical ethics demand an inclusive orientation directed to all citizens. Medical students are empowered by the outreach programs, as they are nonthreatening in nature. These programs are not meant to supplant health delivery experiences, such as screening clinics, but rather to augment them. Recent empirical evidence suggests that physicians are more likely to adopt a patient-centered style of communication when they have had increased interactions with patients who ask questions, seek information, and express their concerns. 11 How beneficial this approach will be in terms of health care outcomes is yet to be determined. Patients will need to know more about health owing to the changing health care landscape in the country. Empowering the powerless to communicate more effectively with clinicians thus remains the central aim of these programs.
Disclosure
The authors report no conflicts of interest in this work.
|
2017-06-29T15:39:38.671Z
|
2015-03-19T00:00:00.000
|
{
"year": 2015,
"sha1": "8a04ebdd3511e7bf62cc69b994e44620f29d3d8d",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=24216",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0415b1cb08652bcd1290d450adb227e96bc39e2a",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14454696
|
pes2o/s2orc
|
v3-fos-license
|
BanditRepair: Speculative Exploration of Runtime Patches
We propose, BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded on so-called bandit algorithms, that are online machine learning algorithms, designed for constantly balancing exploitation and exploration. BanditRepair's runtime patches are based on modifying the execution state for repairing null dereferences. BanditRepair constantly trades the ratio of automatically handled failures for searching for new runtime patches and vice versa. We evaluate the system with 16 null dereference field bugs, where BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row. We are the first to finely characterize the search space and the outcomes of runtime repair based on execution modification.
Introduction
Field failures happen in production for any software system of sufficient complexity. For instance, it is common to observe error pages on the Internet while ordering a laser pointer, registering to a conference, or installing a new blogging platform. Many of them have an economic cost and the most dramatic software failures lead to loss of lives. To overcome software failures at runtime, runtime repair techniques modify the execution so that failures become less critical: instead of crashing the whole system, only the current task fails and the system remains available [8,19,24]. The literature refers to those modifications as "runtime patches" [2,21,28]. For instance, with failure oblivious computing [24], a runtime patch consists of skipping erroneous writes out of an array's bounds. In probabilistic memory safety [3], the runtime patches are controlled blank padding added around allocated memory. In the latter case, the runtime patch is a preventive measure and the execution is equivalent with or without the runtime modification. However, in the failure-oblivious case, the runtime patch has modified the system state or execution flow in an irreversible way.
In this paper, we consider the case where multiple runtime patches exist to repair the same failure, which is a scenario that has been very little studied [14,19]. For instance, for repairing a null dereference, one can skip the execution of the statement or craft an arbitrary value before dereferencing [19]. We propose, BanditRepair, a system that systematically explores and assesses a set of possible runtime patches. The system is grounded on so-called "bandit algorithms" [27], that are machine learning algorithms developed for A/B testing, which is the art of identifying the best strategies in production with in-the-field controlled experiments.
Bandit Algorithm for Repair
A bandit algorithm has two goals: 1) to systematically explore the search space of alternatives (e.g. all possible orderings of products) and 2) to maximize the sum of rewards earned by successively trying alternatives (e.g. selling as many products as possible). In a runtime repair context, we instantiate the "bandit view" as follows: BanditRepair 1) systematically explores the search space of runtime patches and 2) increases the number of failures handled by the application of a runtime patch, where "handled" is defined as follows. A runtime patch is considered to handle a failure if the failure is replaced by the absence of exceptions within the scope of a given task, such as a web request; if domain-specific post-recovery assertions exist, they are considered as well.
The key of bandit algorithms is to constantly balance exploitation (e.g. choosing the best product ordering seen so far) and exploration (e.g. finding an even better product ordering). BanditRepair is configured by an exploitation coefficient ζ (∈]0, 1]), that constantly steers the trade-off between the ratio of handled failures and the search for new runtime patches. In BanditRepair, the exploration of runtime patches is speculative in the sense that one never knows in advance whether a state or flow modification is successful to handle a failure.
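The exploitation/exploration trade-off steered by ζ can be illustrated with an epsilon-greedy-style sketch. The class, decision names, and outcome model below are hypothetical illustrations, not BanditRepair's actual implementation.

```python
import random

# Illustrative bandit: with probability zeta, replay the best-known runtime
# patch (exploitation); otherwise, try an unexplored candidate (exploration).

class PatchBandit:
    def __init__(self, candidates, zeta=0.7, seed=0):
        self.candidates = list(candidates)  # unexplored decision sequences
        self.stats = {}                     # patch -> (successes, trials)
        self.zeta = zeta
        self.rng = random.Random(seed)

    def choose(self):
        known = {p: s / t for p, (s, t) in self.stats.items() if t > 0}
        if known and (self.rng.random() < self.zeta or not self.candidates):
            return max(known, key=known.get)  # exploit best success rate
        return self.candidates.pop(0)         # explore a new candidate

    def record(self, patch, handled):
        s, t = self.stats.get(patch, (0, 0))
        self.stats[patch] = (s + int(handled), t + 1)

bandit = PatchBandit(["skip-stmt", "new-object", "return-early"], zeta=0.7)
for _ in range(200):                 # simulate 200 recurring field failures
    patch = bandit.choose()
    handled = patch != "skip-stmt"   # toy outcome: one patch never works
    bandit.record(patch, handled)
```

A higher ζ concentrates trials on already-validated patches (more failures handled), while a lower ζ uncovers more distinct patches, mirroring the trade-off the paper describes.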
We implement BanditRepair for Java, in a version that is dedicated to null dereferences. To handle null dereferences, BanditRepair takes a decision from a pool of possible ones related to null pointer exceptions (creating objects, reusing objects, skipping execution). Sometimes, a failure requires multiple decisions to be taken in a row, which results in what we call a decision sequence. A runtime patch is a decision sequence that handles a failure. The simplest runtime patches are unary: they are composed of a single dereference repair decision, such as skipping the statement or returning from the method. An example of a more complex runtime patch in BanditRepair is the following: at line 24, a new object is crafted to overcome the null dereference; later on in the execution, at line 42, a statement is skipped to handle a second, subsequent null pointer exception. In such composite runtime patches, one decision in isolation may not be enough to overcome the failure; only the sequence is a solution. More generally, a runtime patch contains decisions taken according to a "runtime patch model". To this extent, BanditRepair is realized with a runtime patch model for null pointer exceptions.
Pareto Front of Runtime Repair
We evaluate BanditRepair on 16 field failures reported for Java software. We run those field failures in a virtual endless "while(true)" loop that simulates the same failure happening again and again for different users and different requests. By doing this, we can systematically study the search space of runtime repair, in terms of how many runtime patches exist, and how the exploration of alternative runtime patches can be balanced with the exploitation of already known ones.
For instance, let us consider a field failure of the Java open source package Apache Commons Collections, reported as issue #360, which is about a null pointer exception. We simulate 200 field failures by reproducing the failure 200 times in a row. If one configures BanditRepair's exploitation coefficient to explore more than exploit, BanditRepair tries 38 decision sequences and finds that 15 of them overcome the null dereference. On the other hand, by configuring BanditRepair to exploit known valid runtime patches more, BanditRepair tries 27 decision sequences and finds that 13 of them are valid solutions. In the former case, 62/200 failures are handled by 15 different runtime patches; in the latter case, 154/200 failures are handled by 13 runtime patches. That is, we can construct the Pareto front of runtime repair, along two axes that are the number of different runtime patches identified (exploration) and the proportion of handled failures (exploitation).
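The two-axis trade-off just described can be summarized as a Pareto front over (patches found, failures handled) points, one point per configuration. A toy sketch, seeded with the two configurations quoted for issue #360 (the function itself is a generic illustration, not the paper's tooling):

```python
# Keep the points that are not dominated on both axes (higher is better
# on each axis: more distinct patches found, more failures handled).

def pareto_front(points):
    front = []
    for p in points:
        dominated = any(q != p and q[0] >= p[0] and q[1] >= p[1]
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# (distinct runtime patches found, failures handled out of 200)
points = [(15, 62), (13, 154)]
print(pareto_front(points))   # both survive: neither dominates the other
```

Neither configuration dominates the other, which is exactly why the exploitation coefficient exposes a genuine trade-off rather than a single best setting.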
In our experiment, by summing over the 16 considered null dereference field bugs, BanditRepair identifies a total of 8460 different runtime patches, which are composed of 1 up to 8 decisions (execution modifications) taken in a row.
Reasons for Success
BanditRepair is capable of constructing the Pareto front of runtime repair for the following reasons. First, it introduces a key concept for reasoning on modified execution states or flows, the one of "execution laps". An execution laps is a time-bounded logical unit of computation such as a web request or a command-line execution of a program. The execution laps is essential because: 1) it delimits the start and the end of a runtime patch: a runtime patch is composed of all execution modification decisions that happen within a laps.
2) It comes with a predicate at the end of the laps that assesses the viability of the runtime patch. For instance, a web request may end with a HTTP success (200) or an internal server error (500), and a predicate may be "return code == 200". We call such a predicate a "laps oracle". By structuring speculative execution with those novel and original concepts, BanditRepair is able to systematically reason on the search space of runtime patches.
Second, the fact that multiple runtime patches exist for the same failure is related to the deep nature of computation. In a single program execution, there are parts of the computation that are optional with respect to the task at hand. If a failure happens in this optional part, it impedes the whole task [1,24]. If given enough time to explore the search space, BanditRepair automatically finds the execution shortcuts that exist to skip a failure. The other fundamental characteristic of software exploited by BanditRepair is that there exist multiple execution paths to achieve the same computational effect. When those multiple paths are identified and a failure happens, one alternative path may succeed. BanditRepair automatically builds a portfolio of alternative execution paths in a systematic manner.
Contributions
To sum up, the contributions of this paper are: • A runtime repair algorithm, BanditRepair, that balances exploitation and exploration of runtime patches. The algorithm is implemented in Java, for repairing null pointer dereferences, and is made publicly available for the sake of open science.
• The characterization and systematic empirical study of the repair search space for null dereferences with respect to: (Size) how many different repair decisions can be tried to handle a given failure?; (Fertility) how many valid sequences of repair decisions exist in the search space?; (Disparity) are all repair decisions equal?; (Trade-off) what is the impact of the exploitation/exploration balance on search space exploration?
• An evaluation over 16 null dereference failures, reported in the field on a public issue tracker, on widely used Java libraries. By simulating the perpetual occurrences of those field failures, the evaluation identifies 8460 valid runtime patches, in an experiment that represents more than 10 days of computation in a distributed grid.
The remainder of this paper is organized as follows. Section 2 presents our approach for repairing null pointer exceptions at runtime. Section 3 details our BanditRepair algorithm. Section 4 details the evaluation on 16 field null dereferences. Section 5 presents the related works and Section 6 concludes.
Motivating Example
Let us consider the example of Listing 1. It is an excerpt of server code that retrieves the last connection date of a user and prints it to an HTML page. Method getLastConnectionDate first gets the user session, then pulls the last connection date from the session object. This snippet can trigger two failures that can crash the request: 1) if the session does not exist and getUserSession returns null, then there is a null pointer exception at line 3 (NPE1); 2) for the first connection, getLastConnection returns null, and another null pointer exception can be thrown at line 6 (NPE2). Now let us consider a runtime repair system such as [18]. It would insert hooks in code such that, instead of a null dereference, a viable object is crafted upon failure. In Listing 1, to overcome NPE1 at line 3, such a system could modify the execution state and flow in three ways: 1) it creates a new session object on the fly; 2) it returns an arbitrary Date object such as the current date; 3) it returns null. As the example suggests, there are multiple possible state modifications for the same failure. However, not all such modifications are equivalent. For instance, if modification #3 is applied, it triggers another failure NPE2, whereas solutions #1 and #2 do not further break the system state. This indicates that not all state modifications are equivalent, some are better than others. In this paper, we devise a system that speculatively explores execution modifications in order to identify the better ones.

Listing 1: A code snippet with two possible null dereferences.
1 Date getLastConnectionDate() {
2   Session session = getUserSession();
3   return session.getLastConnection(); // NPE1
4 }
5 ...
6 HTML.write(getLastConnectionDate().toString()); // NPE2
Problem statement
In this paper, we consider the problem of production failures. Since programs run in heterogeneous environments, responding to a large number of unpredictable events and inputs, errors happen, and manually written error handlers fail to handle them in many situations [9,26]. Consequently, failures in production happen on a daily basis, and crash reporting systems routinely collect enormous amounts of failure information: for instance the publicly available crash reporting system of Mozilla collects more than 30,000 crash reports per day for its Firefox browser [13]. We note that the same failure often happens multiple times, again and again, for different users, on different servers, etc.
One solution to this problem is to change the program state or flow such that the failure does not happen or is mitigated and the program is able to proceed with execution. This is known as runtime repair [15] and state repair [20]. One example of such a runtime repair approach is by Demsky and Rinard [7], who have proposed automatic restoration of invariants for coping with certain errors that are specific to data structures. The ideal repair system would transform a failure into a correct result. However, this idealized vision is not realistic and in practice, the goal of runtime repair is to corral failure propagation, to replace crashed systems with continued execution, i.e. to increase overall availability.
Preliminary work on runtime repair suggests that for a single crashing failure, there are several possible repair decisions to be made [6,14,19]. For instance, let us consider a division by zero: there are several possible repair decisions that could be made: one can divide by 1, or the result of the division can be an arbitrary value (0, 1, MAX_VALUE or Not-a-number among others). In other words, upon failure, there are multiple and different alterations of program state that can be made to handle it. One alteration may not be enough to continue the execution after a failure, and after the first alteration of the program state, another failure may happen, triggering another alteration and so on. This is known as "cascaded errors" [19]. The principled handling and systematic study of "cascaded errors" and the corresponding cascaded repair decisions is an open and unexplored research field.
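To make the space of alternative repair decisions concrete, here is a sketch, in plain Java, of the divide-by-zero example just discussed; the decision names are illustrative, not part of any tool's API.

```java
public class DivideByZeroDecisions {
    // Alternative repair decisions for a division by zero, as discussed in
    // the text: divide by 1, or force an arbitrary result value.
    static double repairDivide(double a, double b, String decision) {
        if (b != 0) return a / b;            // no failure: normal path
        switch (decision) {                  // illustrative decision names
            case "divideByOne": return a / 1;
            case "zero":        return 0;
            case "one":         return 1;
            case "maxValue":    return Double.MAX_VALUE;
            default:            return Double.NaN;  // Not-a-number
        }
    }

    public static void main(String[] args) {
        System.out.println(repairDivide(8, 0, "divideByOne")); // 8.0
        System.out.println(repairDivide(8, 0, "zero"));        // 0.0
    }
}
```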
Problem: we aim at devising an architecture and a system that handles failures, by exploring cascaded runtime repair decisions (called "decision sequences") in a principled and effective way.
Exploration of Runtime Repair Decisions
Our key insight is that we can exploit the fact that the same failure happens again and again to explore alternative repair decisions. For a given failure, our idea is to record the execution modification decision and its eventual effectiveness and then to steer the new ones according to the past decisions.
We propose a conceptual framework for this; it seeks to balance the reuse of past runtime decisions that have been shown successful (this is called exploitation) and the exploration of new runtime decisions. The terminology of exploitation/exploration, as well as the core algorithm, is inspired from bandit algorithms.
Bandit algorithms are machine learning algorithms that aim at maximizing the profit of a player in front of multiple slot machines (aka one-armed bandit machines). To maximize his profit, the player aims at identifying the machine that has the largest probability of winning. However, these probabilities are unknown, hence the player has to estimate them. After an initial number of trials, he has an estimation of all winning probabilities, with one being higher than the others. However, it may only be due to the variance of the estimation process. Consequently, he has to balance playing the slot machine with the highest probability (exploitation) and playing the other machines to gain knowledge (exploration).
We think that the two opposing yet complementary concepts of exploitation and exploration perfectly fit the problem of runtime repair. When a failure happens, from which one knows that a solution exists (based on previous executions), one can either reuse the existing knowledge (apply the same repair decision) or explore a new solution, which may prove to be better. As shown in Listing 1, a runtime patch that does not trigger new failures is better than one that creates invalid program states. The trade-off between exploration and exploitation of runtime repair is the essence of the system we present in this paper, a system called BanditRepair. BanditRepair is inspired from the so-called epsilon-greedy bandit algorithm [27].
BanditRepair: runtime repair of failures
In this paper, a failure is defined as an unacceptable interruption of service of a program for a given input. For instance, when a web server fails to serve a file, it is a failure. When a command line program crashes, it is a failure. Failures are considered deterministic: for the same input and the same system state, the failure always or never happens. We propose BanditRepair, a conceptual framework and an algorithm for automatically handling failures at runtime.
BanditRepair Inputs
BanditRepair requires five inputs: 1) a program in which runtime repair support will be injected; 2) a failure model expressing the failures targeted by the system; 3) a laps model defining the boundaries between which runtime repair will take place; 4) a laps oracle specifying the viability of the computation; 5) a runtime patch model listing the possible modifications on the program state or execution flow.
Program
The first input of BanditRepair is a program P. BanditRepair uses meta-programming to inject in the program a set of monitoring and runtime intercession hooks. The monitoring hooks include failure detection according to the failure model (see Section 3.1.2) and runtime contract checking according to the laps oracle (see Section 3.1.4). An example of failure detection is the detection of null dereference by automatically adding null check ("if (x!=null) ...") before all field accesses and method calls. The runtime intercession hooks enable BanditRepair to change the program execution if appropriate according to the runtime patch model (see Section 3.1.5).
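As an illustration, the injected null-check monitoring and intercession hooks could look like the following sketch. The helper onNullDereference, its signature, and its always-create-an-object behavior are assumptions for illustration only, not BanditRepair's actual API.

```java
import java.util.Date;

public class HookSketch {
    static class Session {
        Date getLastConnection() { return new Date(0); }
    }

    // Original code would be: return session.getLastConnection();
    // The instrumented version guards the dereference (failure predicate)
    // and routes nulls to a runtime intercession hook.
    static Date instrumentedGetLastConnection(Session session) {
        if (session != null) {                          // "if (x!=null) ..." check
            return session.getLastConnection();
        }
        return (Date) onNullDereference("Demo.java:3"); // intercession hook
    }

    // Hypothetical decision point: here we always craft a fresh object,
    // one of several possible decisions in a real runtime patch model.
    static Object onNullDereference(String location) {
        return new Date(0);
    }

    public static void main(String[] args) {
        System.out.println(instrumentedGetLastConnection(null).getTime()); // 0
    }
}
```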
Failure Model
BanditRepair is parametrized by a failure model. A failure model is an abstraction to represent a family of production failures. An example of a failure model is divide-by-zero. In BanditRepair, failure models are intentional definitions, where the necessary and sufficient property is a failure predicate, which returns true if and only if the failure is about to happen. For detecting divide-by-zero errors, one can check upfront all denominators of divisions against zero. As explained in Section 3.4, in this paper, we realize BanditRepair for null dereferences (null pointer exceptions in Java).
Laps Model
A laps model defines a quantization of execution time. Program executions contain many natural quanta: for instance, a method execution is an execution quantum (a method laps), a request handling in a web server is a quantum (a request laps), and the full execution of a command line program is a quantum (a command-line laps). As we see in those examples, a laps model defines when laps start and end. BanditRepair works with any laps model that defines a laps start and end. Laps models can be domain-specific: for instance, in a scientific simulation software application, a good laps model is a simulation step.
Laps Oracle
A laps oracle is a predicate on the program state that is executed at the end of each laps. The goal of a laps oracle is to validate or invalidate state modifications that have happened during the laps. For instance, in a web-server with a request laps model, a laps oracle can be whether the HTTP request return code is OK ("assert response code == 200"). While this example predicate only refers to one variable, it can be arbitrarily complex and refer to many parts of the observable program state. If one considers method laps, a classical design-by-contract post-condition is a laps oracle.
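A laps oracle for the request laps model can be written as an ordinary predicate. This is a minimal sketch, assuming the HTTP return code is the only part of the observable state the oracle inspects:

```java
import java.util.function.IntPredicate;

public class LapsOracleSketch {
    public static void main(String[] args) {
        // Laps oracle for a web-request laps model: the laps succeeded
        // iff the HTTP return code is 200. Real oracles may inspect
        // arbitrary parts of the observable program state.
        IntPredicate lapsOracle = returnCode -> returnCode == 200;
        System.out.println(lapsOracle.test(200)); // success: true
        System.out.println(lapsOracle.test(500)); // internal server error: false
    }
}
```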
A laps oracle can serve two purposes. First, it can assess whether what happened in the laps has not failed, as in the examples we have given. This is an assertion on the past. Second, a laps oracle can assess whether the system state is viable for future actions and requests, this is then an assertion looking ahead, for asserting what Locasto et al. have called "life after self-healing" [17]. Whether it assesses success or viability, the result of the evaluation of the laps oracle always tells something about the success of state modifications that have happened in the laps. In our empirical experiment, the laps oracles assess the validity of the past computation.
Runtime Patch Model
When a failure of the failure model under consideration is about to happen, as detected by the failure predicate, the program or request (depending on the laps model) is about to crash. It would indeed have crashed without BanditRepair. However, with BanditRepair, when a failure is about to crash the program, BanditRepair replaces the crash by a modification of the program state or flow. The modifications are done according to a runtime patch model.
Definition. A runtime patch model describes how program state modifications are performed upon failures. Once a failure predicate evaluates to true, the program hits a decision point. A decision point is a point in a program where the program state may be modified. At a decision point, a decision must be taken: we call a decision an alteration of the execution state or flow. For instance, prematurely returning from the method is such a modification. When a decision is taken, one says that the decision point has been activated. In essence, a decision is speculative: BanditRepair never knows in advance whether the decision is correct. The only way to assess it is to proceed with execution and wait for the evaluation of the laps oracle.
Within a laps, several decision points may be activated, due to cascaded failures. A decision sequence is composed of consecutive decisions (runtime repair actions), where there is one decision per failure of the failure cascade. The decisions are instances of the runtime patch model, and the failures are instances of the failure model. At the end of a laps, the laps oracle is evaluated. If it evaluates to true, it means that a valid decision sequence has been found. A runtime patch is a decision sequence that has been validated by the laps oracle. For instance, let us consider again the example of Listing 1: returning null is a failed decision sequence because the request crashes with HTTP 500, internal server error, due to the subsequent NPE2. On the contrary, returning a fresh date object enables the request to succeed and the HTML to be generated; this is a valid unary decision sequence, i.e. a runtime patch.
Within a runtime patch, the first activated decision point has a special status. First, it is where the program was about to crash: consequently, we call the first activated decision point in a laps the failure point. Second, it is from there that the program execution will speculatively explore new runtime states. In this paper, we implement BanditRepair with the "NpeFix" runtime patch model [5], described in Section 3.4.
BanditRepair Effects
The core of BanditRepair is an engine that selects one decision when a failure is detected, that is, when a decision point is hit.
When a failure, instance of the failure model under consideration, is detected at a decision point, BanditRepair has to take one decision in order to handle the failure (instead of letting the program crash). This is done as follows.
Case 1: The decision point has never been activated, which means that the failure has never happened in this location of the program before. For instance, when a null dereference has never been seen up to now at line 3 of Listing 1. In this case, BanditRepair randomly selects a decision in the set of alternative possible decisions.
Case 2: The failure has already been seen before at this point in the program. For instance, in a server program, it means that another user has already encountered the same failure, by performing the same sequence of interactions with the program. When this happens, BanditRepair has to choose between exploitation and exploration as follows.
Case 2a exploitation: When the failure has already happened, it means that one or more decisions have already been taken during another laps (i.e. the program execution has already been altered in the past in another request, for another user). For each past decision, the laps oracle has been evaluated, and BanditRepair has stored whether the past failure was considered handled according to the laps oracle. When BanditRepair chooses exploitation, it selects the decision sequence which has been the most successful over the past laps. We use the term "exploitation" to refer to the fact that the system exploits its knowledge, by maximizing the likelihood of handling the failure.

Algorithm 1 BanditRepair (excerpt)
    ...
    if rand < ζ then
 8:     apply up-to-now best runtime patch from S    (Case 2a)
    ...
    proceed with laps execution
15: end while
16: L.end()
17: if laps oracle O is success then
18:     store runtime patch in S
19: end if
20: end while
Case 2b exploration: When the failure has already happened at a given decision point, BanditRepair may choose to take a decision that was never taken before at this point. In this case, BanditRepair speculatively explores new runtime patches.
To choose between exploitation (case 2a) and exploration (case 2b), BanditRepair draws a random variable from a uniform distribution. If it is lower than an exploitation coefficient ζ (zeta), it selects exploitation, otherwise it selects exploration. For instance, BanditRepair with ζ = 0.2 prefers exploitation 20% of the time, and exploration 80% of the time. For large exploitation coefficients ζ, BanditRepair often reuses known runtime patches; for low coefficients, BanditRepair explores the space of possible runtime patches faster. This effect will be empirically studied in Section 4.
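The choice rule can be sketched in a few lines. The seeded Random mirrors the controlled randomness used in the evaluation; the class and method names are illustrative, not the tool's implementation.

```java
import java.util.Random;

public class ZetaChoiceSketch {
    // Draw a uniform random variable; below zeta -> exploitation,
    // otherwise -> exploration.
    static String choose(Random rand, double zeta) {
        return rand.nextDouble() < zeta ? "exploit" : "explore";
    }

    public static void main(String[] args) {
        Random rand = new Random(42); // parametrized seed, as in the evaluation
        double zeta = 0.2;
        int exploit = 0;
        for (int i = 0; i < 100000; i++) {
            if (choose(rand, zeta).equals("exploit")) exploit++;
        }
        // With zeta = 0.2, roughly 20% of the choices are exploitation.
        System.out.println(Math.abs(exploit / 100000.0 - 0.2) < 0.01);
    }
}
```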
In the limit case of ζ = 1, BanditRepair only performs case 1 and case 2a, which means that as long as one runtime patch is found at a decision point, it is always applied over and over. We call this the full exploitation policy.
BanditRepair Algorithm
Algorithm 1 presents BanditRepair. It takes as input a program, a failure detector, a laps model, a laps oracle and a runtime patch model, as explained in Section 3.1. Then, for every laps, if a failure is detected, BanditRepair randomly selects between exploitation (line 8) and exploration (line 10). If the laps oracle validates the decision sequence, it becomes a runtime patch (according to our definition of Section 3.1.5) and is stored as such. Over time, BanditRepair builds a set of runtime patches for each failure location; we call it a portfolio of runtime patches: a portfolio of decision sequences that have proven successful at least once.
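A portfolio can be sketched as a map from failure locations to decision sequences with success counts; picking the most successful entry corresponds to exploitation (case 2a). This is an illustrative data structure, not BanditRepair's actual implementation.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class PortfolioSketch {
    // failure location -> (runtime patch -> number of successful laps)
    static Map<String, Map<String, Integer>> portfolio = new HashMap<>();

    // Called when the laps oracle validates a decision sequence.
    static void recordSuccess(String failurePoint, String patch) {
        portfolio.computeIfAbsent(failurePoint, k -> new HashMap<>())
                 .merge(patch, 1, Integer::sum);
    }

    // Exploitation: reuse the most successful runtime patch so far.
    static String bestPatch(String failurePoint) {
        return Collections.max(portfolio.get(failurePoint).entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        recordSuccess("line 3", "return null");
        recordSuccess("line 3", "new Date()");
        recordSuccess("line 3", "new Date()");
        System.out.println(bestPatch("line 3")); // new Date()
    }
}
```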
All runtime patches in a portfolio share the common property that they pass the laps oracle. However, they differ from two perspectives. First, they may involve different decision points, at different locations in the program under consideration. Second, they may have different sizes, where the size of a runtime patch is the number of decisions taken. A runtime patch can be considered better if it contains fewer decisions, because it is likely to change the execution state less, and hence to stay closer to the states created by the initial program and envisioned by the developer. In other words, a smaller runtime patch creates execution states that are less speculative than the ones created by a bigger runtime patch. This will be explored in Section 4.4.3.
Implementation
We implement BanditRepair for null dereferences, aka null pointer exceptions (this is the failure model), with a runtime patch model dedicated to them, called NpeFix [5]. In NpeFix, all object variable dereferences are decision points (field accesses, method calls on local variables, method parameters, implicit casts and fields). A decision has to be taken when the variable is null, which means that decision points are activated if and only if a null is going to be dereferenced. For each decision point, NpeFix defines 6 types of decision, grouped in two categories, shown in Table 1. The first category consists of replacing the null value by an alternative valid non-null object of a compatible type. This category is composed of two sub-categories: 1) when a variable is null, one can reuse an object from another variable in the scope instead, these are reuse-based decisions (on top of Table 1); 2) when a variable is null, one can also create a new object on the fly, these are creation-based decisions. Note that the number of possible decisions for reuse- and creation-based decisions is parametrized by the number of variables (resp. constructors) available, which means that for a single decision point, there are often dozens of different available decisions (and not only 6).
The second category is based on skipping the execution of the code affected by the null variable: one can either skip the line that uses the null variable, or skip the rest of the method. When skipping the rest of a method which returns a value, one can also either reuse an existing object or create one on the fly. To implement the NpeFix runtime patch model, we use source code transformation: we inject code at each decision point, and the injected code is responsible for activating the decision point and actually performing the state or flow modification if necessary. For the interested reader, this runtime patch model and its implementation are extensively described in [5].
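The six decision types of the two categories can be summarized as follows; the enum is our illustrative reading of Table 1 as described in the text (reuse, create, skip line, skip rest of method, and the two value-returning skip variants), and the exact naming is ours.

```java
public class NpeFixDecisionsSketch {
    // Illustrative enumeration of the NpeFix decision types, as our reading
    // of the two categories described in the text; names are ours.
    enum DecisionKind {
        REUSE_OBJECT,               // replace null by a compatible object in scope
        CREATE_OBJECT,              // replace null by a newly constructed object
        SKIP_LINE,                  // skip the statement using the null variable
        SKIP_METHOD,                // skip the rest of the method
        SKIP_METHOD_RETURN_REUSED,  // skip rest of method, return a reused object
        SKIP_METHOD_RETURN_NEW      // skip rest of method, return a new object
    }

    public static void main(String[] args) {
        System.out.println(DecisionKind.values().length); // 6
    }
}
```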
Evaluation
We now present the evaluation of BanditRepair. During this evaluation we focus on the following research questions.

RQ1. [Size] Does the core assumption of BanditRepair hold? How large is the runtime repair space? Bandit exploration of runtime patches only makes sense under the following conditions: 1) the repair space at runtime contains different alternatives; and 2) not all alternatives are valid. The answer to this research question will (in)validate the core assumption of BanditRepair.

RQ2. [Fertility] What is the proportion of valid decision sequences? In the context of runtime repair, there may exist different runtime patches that are all valid, that all fix the runtime failure. The proportion of valid repair decision sequences represents the fertility of the search space. When the goal is to find at least one valid decision sequence, it is much easier to do so if many points in the search space are valid. On the contrary, if there is a single valid point in the search space, in the worst case, it requires visiting the complete search space before finding it. The fertility of the search space is the opposite of what is called "hardness" or "constrainedness" in combinatorial optimization.

RQ3. [Disparity] To what extent does the search space contain composite runtime patches? BanditRepair builds a portfolio of runtime patches, which are disparate in the sense that they can have different sizes. In this paper, we consider that a smaller runtime patch, i.e. one containing fewer decisions, is better than a bigger composite one, because it creates less exotic execution states (see Section 3.3). We will observe in our dataset whether there exist such composite runtime patches.

RQ4. [Trade-off] What is the impact of the exploitation coefficient ζ on repair? The essence of bandit algorithms is to alternate exploitation of valid decisions and exploration of alternative ones. In the context of runtime repair, it means applying a runtime patch that has proven to be successful or searching for alternative runtime patches. We will explore the impact of the exploitation coefficient ζ on the time to find a first runtime patch and the overall proportion of avoided failures.
Dataset
In order to evaluate our runtime repair approach, we need real and reproducible production failures. Since we instantiate the bandit repair vision with null pointer exceptions, we collect null dereference failures.
To collect them, we look for null dereferences that are reported on a publicly-available forum (e.g. a bug tracker) and we assess that they are reproducible. In particular, we focus on failures in the Apache foundation projects because these projects are frequently used and have very good practices for bug reporting and field failure reproduction. In Apache, one guideline is to encode reproduced field bugs as test cases. Consequently, our dataset of field bugs is composed of test cases, written by the developers of each project under consideration, which reproduce field bugs. In addition to the triple criteria of being field, reproducible and encoded as test cases, we aim at 1) having bugs in different projects and 2) having bugs in large enough software (where "large" is defined as more than 10,000 lines of code).
As a result, the benchmark contains 16 field bugs (1 from Collections, 3 from Lang, 7 from Math, 3 from PDFBox, 1 from Sling and 1 from Felix). This dataset only contains real null dereference bugs and no artificial or toy bugs. To give the reader a feeling of how hard it is to reproduce field bugs, we note that it took us approximately 1 full month to build this dataset. As a comparison, [24] considers 5 field failures. For the sake of future work and comparative evaluations on this topic, this dataset is made publicly available on GitHub. 2

Table 2 presents our benchmark of 16 field bugs. The first column contains the Apache bug id. The second column contains the SVN revision of the global Apache SVN. The third column contains the number of lines of code. The fourth column contains the total number of method calls before the null pointer exception is triggered. For example, issue Collections-360, fixed at revision 1076034, is within an application of 21650 lines of code. The number of calls before the dereference gives an insight on the complexity of the setup required to reproduce the field failure. As shown in Table 2, there are between 2 and 342 application methods (not counting JDK methods) called for the reproduced field failures under consideration, with an average of 75.56. This indicates that the failures in our benchmark are not simple tests with a trivial call to a method with null parameters.

2 BanditRepair dataset: https://goo.gl/937Egi
Experimental Protocol
We perform two experiments. The first one is based on the exhaustive exploration of the search space of runtime patches, as defined by our runtime patch model for null pointer exceptions described in Section 3.4. The second experiment trades-off exploration and exploitation of the search space. Both are done on the benchmark of failures presented in Section 4.1.
Exhaustive exploration
To exhaustively explore the search space of runtime patches for a given failure, we simply recursively explore all possible alternative decisions. For the first decision taken at the failure point (the first decision point in a laps), we take all decisions one after the other. Then, for each new decision point activated by the first decision, we also explore all possible decisions. This is done recursively. In other words, we build the complete decision tree of repair decisions for a given failure.
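The recursive construction can be sketched on a toy decision tree shaped like Figure 1; the node names and outcomes below are made up for illustration.

```java
import java.util.List;
import java.util.Map;

public class ExhaustiveSketch {
    // Toy decision tree: each decision point (DPx) offers alternative
    // decisions; some lead to further decision points, others end the laps
    // with success (OK) or failure (FAIL). Names are illustrative.
    static Map<String, List<String>> tree = Map.of(
            "DP1",         List.of("newInstance", "returnNull", "skipLine"),
            "newInstance", List.of("OK"),
            "returnNull",  List.of("DP2"),     // cascaded failure
            "skipLine",    List.of("FAIL"),
            "DP2",         List.of("OK", "FAIL"));

    static int sequences = 0, patches = 0;

    // Depth-first enumeration of all decision sequences.
    static void explore(String node) {
        if (node.equals("OK"))   { sequences++; patches++; return; }
        if (node.equals("FAIL")) { sequences++; return; }
        for (String child : tree.get(node)) explore(child);
    }

    public static void main(String[] args) {
        explore("DP1");
        System.out.println(sequences + " sequences, " + patches + " runtime patches");
    }
}
```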
The time required to perform such an experiment has a lower bound of the size of the search space multiplied by the time for reproducing the failure. The alternative computation that comes after the first repair decision at the failure point is added on top of this. Overall, our experiment takes more than 10 days.
Bandit exploration
The study of exploration of runtime patches is done as follows:
1. We instrument each buggy program of our dataset with our repair framework.
2. We execute each instrumented program with the test case that encodes the field bug.
3. We collect all decisions taken at runtime.
4. We execute the runtime assertions at the end of the test cases.
We run steps #3 and #4 a large number of times; this simulates users that trigger a production failure again and again. Indeed, production bugs keep reappearing as long as they are not fixed. This is why crash reporting systems have large numbers of instances of the same crash [13]. We trigger all failures exactly 200 times. For instance, we run the crashing test case of bug LANG-304 200 times, simulating that the crash happens on 200 user machines spread over the world, communicating with one another or with a server about the crashes, in an application-community style [16]. In the following, a sequence of 200 runs is called a scenario (the scenario of having 200 users triggering the same failure).
In addition, BanditRepair is parametrized by an exploration/exploitation coefficient. We would like to understand the impact of this coefficient. Consequently, we apply the whole process (step #1 to step #4) for 11 different exploitation coefficients ζ from 0 to 1 with a 0.1 step. In the following, we use the term "run" to refer to one failure execution (one test case), for one given exploitation coefficient.
Finally, recall that our algorithm contains a random component. Our implementation fully controls this randomness by using a parametrized seed of the random number generator. However, it may happen that the system works accidentally well for a given seed. To mitigate this risk, for each exploitation coefficient, we repeat the process with 31 different random seeds.
In total, we execute 16 bugs × 200 executions × 11 ζ × 31 seeds = 1 091 200 executions. The raw data of this evaluation is publicly available on GitHub. 3 We answer all research questions based on this data.
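The total number of executions is a plain product of the four experimental factors:

```java
public class ExperimentSize {
    public static void main(String[] args) {
        int bugs = 16, executionsPerBug = 200, zetaValues = 11, seeds = 31;
        // 16 × 200 × 11 × 31 = 1 091 200 executions
        System.out.println(bugs * executionsPerBug * zetaValues * seeds); // 1091200
    }
}
```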
Validity of Repair Decision Sequences
For a given decision sequence taken in response to a failure, we assess its validity according to the laps oracle. In those experiments, the laps oracles are directly extracted from the test case reproducing the field failure. As such, a decision sequence is considered as valid if no null pointer exception is thrown and no other exception is thrown. A decision sequence is considered as invalid if the original null pointer exception is thrown (meaning that there is no possible decision at the failure point), or another exception is thrown and not caught. When the test case contains domain-specific assertions beyond the occurrence or not of exceptions, we keep them, and a decision sequence is considered valid if all assertions pass after the application of the runtime patch. This is the case for 14/16 failures.

3 The raw evaluation data of BanditRepair: https://goo.gl/TJezRr

Figure 1: Excerpt of the decision tree of Math-988A. One path in this tree is a "decision sequence"; one path resulting in a success (OK) is a "runtime patch". DPx refers to a decision point upon a null dereference failure.
Case Study
By applying the protocol of Section 4.2, we obtain runs for which runtime patches are identified. We now discuss the runtime patches of Math-988A, where the null pointer exception is thrown during the geometrical computation of the intersection of two lines when they have no intersection. The initial null pointer exception is triggered in the return statement of method "toSubSpace", which returns an object of type Vector1D. The null pointer exception appears when a Vector2D parameter is null and methods getX and getY are called on it. As shown in Table 3, the size of the decision sequences varies between 1 and 3 for this bug, meaning that, depending on the selected decisions, between 1 and 3 null dereferences happen, and a decision is taken for each of them.
Overall, BanditRepair identifies the following runtime patches:
1. initialize the null parameter with a new instance (1 decision point)
2. use a new and disposable instance of Vector2D at both places where the null parameter is used (between 2 decision points)
3. return null either at the first NPE location or at the second one, triggering another decision in the caller (between 2 and 3 decision points)
4. return a new Vector1D instance (1 decision point)
Recall that in our setup, the laps begins with the execution of the test that reproduces the field failure and stops at the end of the test execution. In this case, the test contains JUnit assertions checking the expected correct behavior (which is to return null when no intersection exists). The runtime patch passes those assertions, meaning that in this case, runtime repair achieves full correctness. Comparing the decision sequences to the human patch shows that they are indeed equivalent, yet different. This confirms that there sometimes exist multiple execution paths for achieving the same computational effect. Figure 1 shows an excerpt of the possible paths in the decision tree from laps start to laps oracle evaluation.
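The four runtime patches above can be modelled as alternative decisions available at a decision point. The sketch below is an illustrative rendering: the enum member names are ours, and the paper's actual patch model has more decision kinds than these four:

```python
from enum import Enum

# Illustrative model of the alternative decisions listed above for
# Math-988A; the names are ours, not BanditRepair's.
class Decision(Enum):
    INIT_NULL_PARAM = "initialize the null parameter with a new instance"
    LOCAL_NEW_INSTANCE = "use a disposable new instance at the use site"
    RETURN_NULL = "return null, deferring a decision to the caller"
    RETURN_NEW_INSTANCE = "return a new Vector1D instance"

# A decision sequence is an ordered list of such decisions, one per
# activated decision point; e.g. a size-2 sequence for patch kind 2:
seq = [Decision.RETURN_NULL, Decision.RETURN_NULL]
print(len(seq))  # 2
```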
RQ1. [Size]
Does the core assumption of BanditRepair hold? How large is the runtime repair space?
We analyze the data obtained with the experiment described in Section 4.2.1, consisting of exhaustively exploring the search space of runtime patches for null dereferences. We create a table that contains the core metrics we are interested in; it is reproduced in Table 3. Table 3 reads as follows. Each line corresponds to a failure of our dataset. Each column gives the value of a metric of interest. The first column contains the name of each bug. The third column contains the number of possible repair decision sequences for this failure. The fourth column contains the number of runtime patches (valid decision sequences for which the laps oracle has stated that the decision sequence has worked). The fifth column contains the minimum/median/maximum number of decisions taken for valid decision sequences.
For example, the first line of Table 3 details the bug Collections-360. To repair this failure at runtime, there are two possible decision points, which, when systematically unfolded, correspond to 45 possible decision sequences, 16 of which are valid according to the laps oracle. The size of the valid decision sequences is always equal to 2, which means that two decisions must be taken in a row to handle the failure.
The core assumption of BanditRepair is that there exist multiple alternative decisions to repair a failure at runtime. This assumption is reflected by the number of explored decision sequences, which is exactly the size of our search space since we conduct an exhaustive exploration. In this experiment, it ranges from 4 decision sequences (for Math-305 and PdfBox-2965) to 576 for Math-988A and 51785 for Math-1117.
Overall, we notice a great variance in the size of the repair space. To sum up, for all 16 failures of our benchmark, there exist alternative repair decisions to be taken at runtime. However, this was not at all an inclusion criterion for building the benchmark. It strongly suggests that alternative runtime repair decisions are prevalent for null dereference failures, and it validates our core assumption.
We also see in Table 3 that there is a correlation between the number of activated decision points for a given failure and the number of possible decision sequences. For instance, for Felix-4960, there is only one activated decision point (at the failure point where the null pointer exception is about to happen), and 10 possible decisions can be taken at this point. On the contrary, Math-1117 has the biggest number of activated decision points, resulting in a huge search space of 51 785 decision sequences. This correlation is expected and explained analytically as follows. Once a first decision is made at the failure point (where the null dereference is about to happen), many alternative execution paths are uncovered. Then, a combinatorial explosion of stacked decisions happens. If we assume that there are 5 alternative decisions at the first decision point, and that each of them triggers a different execution path and another decision point (all different) with 10 alternatives, it directly results in 5 × 10 = 50 possible decision sequences. One can easily extrapolate that for more than 2 stacked decision points, there is a combinatorial explosion. In general, the size of the repair space depends on:
1. the overall number of decision points activated for a given failure;
2. the number of possible decisions at each decision point;
3. the correlation between decision points, that is, the extent to which one decision influences the number of possible subsequent decisions to be taken.
For failures with a large number of explored decision sequences, it means that runtime repair unfolds a large number of diverse program states and their corresponding subsequent executions. In our experiment, there are 11/16 failures for which we observe more than 10 possible decision sequences (column "Nb decision seq.") for the same failure according to our runtime patch model, with a maximum value of 51785 (for Math-1117).
Answer to RQ1. The core assumption of BanditRepair holds: there are multiple alternative decision sequences to handle null dereferences. BanditRepair draws a precise picture of the runtime repair search space.
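The stacked-decision arithmetic (5 × 10 = 50 sequences) can be checked with a small path counter over a decision tree. The tree shape here is illustrative, not extracted from any benchmark bug:

```python
def count_sequences(tree):
    """Count root-to-leaf paths in a decision tree given as nested dicts.
    A leaf ({}) means the laps reached its oracle with no further decision."""
    if not tree:
        return 1
    return sum(count_sequences(child) for child in tree.values())

# 5 first decisions, each uncovering a second decision point with
# 10 alternatives, as in the example above:
tree = {f"d{i}": {f"d{i}.{j}": {} for j in range(10)} for i in range(5)}
print(count_sequences(tree))  # 50
```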
RQ2. [Fertility] What is the proportion of valid decision sequences?
Now that we have a clearer picture of the size of the search space, we are interested in knowing whether there exist multiple valid decision sequences in that space. To do so, we again consider the exhaustive study protocol described in Section 4.2.1, whose results are given in Table 3. We concentrate especially on the column showing the number of valid decision sequences, and compare it against the column representing the size of the search space, i.e. the total number of possible decision sequences. For instance, for Collections-360, the search space contains 45 possible decision sequences, of which 16 are valid according to the laps oracle (no null pointer exception is thrown and the two assertions at the end of the test case reproducing the failure pass). This makes a proportion of 16/45 = 36% valid decision sequences in the search space. We notice several interesting extremum cases in Table 3. First, there are two failures, Lang-587 and PdfBox-2995, for which only 1 valid decision sequence exists. Also, there is one failure for which all decisions remove the failure: Math-1115, for which 5 out of 5 possible decision sequences are valid. In general, there is a great diversity of fertility (the proportion of runtime patches in the search space), which can be explained by two factors. The first is the strength of the laps oracle, which is in our case the strength of the assertions at the end of the test case that reproduces the failure (beyond not throwing a null pointer exception). The second factor is related to the new program states that are explored once a first decision has been taken. If those speculative program states are too unrealistic, there is a great chance that the corresponding decision sequences are invalid (for instance because another exception is thrown). Along the same lines, if the first decision taken at the failure point yields exotic program states, it is unlikely that the subsequent decisions put the system back into a viable state.
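The fertility figures quoted above can be recomputed directly; the numbers are the ones from the text (16 valid out of 45 for Collections-360, 5 out of 5 for Math-1115):

```python
# Fertility = proportion of valid decision sequences in the search space.
def fertility(valid: int, total: int) -> float:
    return valid / total

print(f"{fertility(16, 45):.0%}")  # Collections-360: 36%
print(f"{fertility(5, 5):.0%}")    # Math-1115: 100%
```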
Answer to RQ2. In our benchmark, the proportion of valid decision sequences varies from 0/14% to 100%, from 0 to 7708/51785 valid runtime patches. This great variation is due to the varying complexity of the system state at the failure point, and the hardness of the laps oracle. When the proportion of valid repair decision sequences is high, it means that BanditRepair is able to quickly find a valid runtime patch, based on a small number of failure occurrences.
RQ3. [Disparity]
To what extent does the search space contain composite runtime patches?
We have shown in RQ2 that there are multiple valid runtime patches. Now, we aim to determine which runtime patches from our portfolio are better with respect to their size, as measured by the number of decisions. To do so, we study the results of the exhaustive study protocol described in Section 4.2.1, given in Table 3. We especially concentrate on the column showing the size of the valid decision sequences. This column gives the minimum, median and maximum size within the portfolio. For instance, for PDFBox-2812, the minimal size in number of decisions among all runtime patches is 1, the median size is 6 and the maximal size is 7. This data supports the following findings. First, one sees that there exist runtime patches composed of more than one decision. For instance, for Collections-360, all runtime patches contain 2 decisions. Since our failure and runtime repair model is specific to null pointer exceptions, this means that there exist decisions for which the null dereference problem is not definitely solved by the first decision, and that another null dereference happens later. This is indeed the case for Collections-360, where the null variable is used twice in two different methods.
Second, in 10/16 failures of our dataset, the runtime patches are always composed of a single decision. This is strongly correlated with the size of the search space (third column, # of decision sequences), indicating that the test case reproducing the production failure sets up a program state that is repairable in one shot.
Third, in 3/16 failures, there are runtime patches of different sizes (Math-988A, PDFBox-2812, Math-1117). For instance, for Math-988A, there exist runtime patches of 1, 2 and 3 decisions. This means that there are decisions at the failure point that definitely solve the problem according to the laps oracle (those of patches of size 1). Assuming that the search first finds a complex runtime patch with many decisions, it is indeed necessary to further explore the search space in order to identify a smaller, hence better, runtime patch. Fourth, in one case (Math-369), several decisions are taken, but none of them is valid and there is no runtime patch. All decision sequences are invalidated by the laps oracle (the assertions of the test case reproducing the field failure).
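The min/median/max column discussed above can be recomputed from a portfolio of patch sizes. The sizes below are illustrative, patterned after Math-988A, which has patches of 1, 2 and 3 decisions:

```python
from statistics import median

# Summarize a runtime-patch portfolio by decision-sequence size.
patch_sizes = [1, 1, 2, 2, 3]  # hypothetical portfolio
summary = (min(patch_sizes), median(patch_sizes), max(patch_sizes))
print(summary)  # (1, 2, 3)
```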
When a runtime patch of size 1 is found, it may be argued that it is the best one and that the speculative exploration could be stopped. This is not what BanditRepair does, because it aims at building a portfolio of runtime patches, and there may exist other runtime patches of size 1 in the search space. Answer to RQ3. For 5/16 failures of our benchmark, the search space contains composite runtime patches that have more than one decision. For 3/16 failures, the possible runtime patches have disparate sizes, and exploratory search of new runtime patches makes it possible to find smaller runtime patches.
RQ4. [Trade-off]
What is the impact of the exploitation coefficient ζ on repair?
Following the protocol based on bandit exploration described in Section 4.2.2, we vary the value of the exploitation coefficient ζ and explore the impact it has on the search process. The results are given in Table 4. Table 4 reads as follows. Each line corresponds to a reproduced failure of our dataset. Each column gives the value of a metric of interest. Each failure line is split into four, corresponding to four different exploitation coefficients ζ (0, 0.2, 0.8 and 1). The first column contains the name of each bug. The second column contains the value of the ζ parameter for a given line (as defined in Section 3.2). The third column is the number of encountered decision points over the 200 laps. The fourth column contains the number of explored decision sequences. The fifth column contains the number of valid decision sequences. The sixth column contains the number of laps before a decision sequence is valid and succeeds the laps oracle.
For example, let us consider Collections-360 with ζ set to 0.8 (second line of the four lines for this failure). During these runs, two locations in the code trigger a null dereference, which means that two decision points are activated. The combination of decisions over those two decision points results in 27 explored decision sequences for ζ = 1 (this means that the first 26 decision sequences are invalid, and the 27th is a runtime patch which is then exploited). Among those 26 decision sequences, 12 are considered valid according to the laps oracle. Among the 11 runs, it took a median of 3 runs before finding a valid decision sequence that fixes the failure.
The value of the exploitation coefficient ζ has an impact on the repair as follows. First, it has an impact on the number of activated decision points. For Math-1117, Math-369, Math-988A and Pdfbox-2812, if we exploit more and explore less (higher ζ), we explore less of the search space and hence activate fewer decision points. The number of activated decision points is a coarse-grain view of the explored search space; the fourth column, showing the number of explored decision sequences, better reflects what we are interested in. For all failures, if we increase the amount of exploration (lower ζ), this indeed results in trying out more decision sequences (corresponding to a larger figure in the 4th column). This validates the overall tradeoff architecture of BanditRepair for balancing exploration and exploitation. Now, let us consider the number of valid decision sequences. As expected, the proportion of valid sequences is roughly the same as the one found during exhaustive exploration. Also, when there is full exploitation (ζ = 1), the search stops when a valid decision sequence is found, which can be observed in the table (only one runtime patch is identified for the columns where ζ = 1); this is evidence of the correctness of our implementation. Interestingly, for Math-1117, there is no valid decision sequence found (while such a sequence exists in the search space, as shown in Table 3). The reason is that the number of runs of the experiment (200) is too small compared to the size of the search space, and the valid decision sequence has not yet been found after 200 runs. The rule is that the more one exploits (bigger ζ, such as 1 or 0.8), the more failures are handled. However, this comes at the price of not building a portfolio of alternative runtime patches. This is the essence of the balance between exploration and exploitation.
We now graphically depict the tradeoff between exploration and exploitation of runtime patches. Figure 2 is a scatter plot of evaluation runs for failure Lang-587. Each dot is a run for a given random seed and exploitation coefficient. Hence, there are 10 × 31 = 310 dots.
The dots are colored by the value of the exploitation coefficient ζ. For a given run (dot), the X axis is the number of laps before the exploration becomes unsuccessful (no new decision sequences are discovered after that); it corresponds to the 7th column of Table 4. The Y axis corresponds to the proportion of fixed failures shown in the 6th column of Table 4. For instance, the top-most blue point is an exploration with ζ = 0.9, for which 180/200 (90%) runs are successfully repaired at runtime. In this figure, one clearly sees the Pareto front of the tradeoff between exploration and exploitation. The more one exploits, the longer it takes to explore the repair search space, but the more failures are handled. On the contrary, if one explores a lot (low ζ), the search space is traversed really fast, building a large portfolio of runtime patches, but with a low proportion of handled failures. Interestingly, when using BanditRepair, there is an irreducible warm-up time before finding a valid repair decision sequence: this is the empty space between x = 0 and x = 28 at the right hand side of the figure. This is explained by the fact that we explore new decisions in a deterministic manner (only the choice between exploration and exploitation is random), exploring decisions one after the other in the same order during the exploration phase. According to this deterministic order, for Lang-587 shown in this figure, the 28th explored decision sequence is the first valid one. Answer to RQ4. The exploitation coefficient ζ has an impact on the size of the explored search space, the number of repaired failures, and the size of the portfolio of discovered runtime patches. The relation between ζ and those three core metrics draws a Pareto front of runtime repair.
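The explore/exploit coin flip governed by ζ can be rendered as an epsilon-greedy-style loop. This is a sketch under our own naming, not BanditRepair's implementation; as in the paper, exploration order is deterministic and only the coin flip is random:

```python
import random

def handle_failure(known_patches, untried, zeta, rng):
    """One laps: with probability zeta, exploit a known runtime patch;
    otherwise explore the next untried decision sequence in the fixed
    deterministic order."""
    if known_patches and rng.random() < zeta:
        return ("exploit", known_patches[0])
    return ("explore", untried.pop(0))

# Illustrative usage: before any runtime patch is known, the system
# must explore regardless of zeta.
rng = random.Random(0)
untried = ["seq-1", "seq-2", "seq-3"]
mode, seq = handle_failure([], untried, 0.8, rng)
print(mode, seq)  # explore seq-1
```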
Related work
There are several automatic repair techniques that handle failures at runtime.
One of the earliest techniques is Ammann and Knight's "data diversity" [1], which aims at enabling the computation of a program in the presence of failures. The idea of data diversity is that, when a failure occurs, the input data is changed so that the new input resulting from the change does not trigger the failure. The assumption is that the output based on this artificial input, through an inverse transformation, remains acceptable in the domain under consideration. The input transformations can be seen as a kind of runtime patch model. As such, the BanditRepair algorithm could be used to reason on the associated runtime search space.
Demsky et al. [7] presents a language for the specification of data structure invariants. The invariant specification is used to verify and repair the consistency of data structure instances at runtime. The key difference between their work and ours is that BanditRepair is more generic in scope, only requiring a laps model and a laps oracle, which go beyond data structure errors and invariant restoration only.
Rinard et al. [24] presents a technique to avoid illegal memory accesses by adding additional code around each memory operation during the compilation process. For example, the additional code verifies at runtime that the program only uses the allocated memory. If the memory access is outside the allocated memory, the access is ignored instead of crashing with a segmentation fault. The two differences between this work and BanditRepair are: first, BanditRepair can apply different decisions to handle a given failure (and not a single behavior hard-coded in the injected code), and second, BanditRepair uses an oracle to reason about the viability of the decision.
Perkins et al. [22] proposes ClearView, a system for automatically repairing errors in production. The system monitors the execution on low-level registers to learn invariants. Those invariants are then monitored, and if a violation of an invariant is detected, ClearView forces its restoration. From an engineering perspective, the difference is that BanditRepair reasons on decision sequences, while ClearView analyzes each decision in isolation. From a scientific perspective, our work finely characterizes the search space and the outcomes of runtime repair based on execution modification.
Kling et al. [14] propose Bolt, a system to detect and escape infinite and long-running loops. On user demand, Bolt is attached to a running application and tries different strategies to escape the infinite loop. If a strategy fails, Bolt uses rollback to restore the state of the application and then tries the next strategy. Like BanditRepair, Bolt considers multiple decisions for a given failure, but the main difference is that it does not perform and reason about decision sequences made to handle cascaded errors.
Long et al. [19] introduces the idea of "recovery shepherding" in a system called RCV. Upon certain errors (null dereferences and divisions by zero), recovery shepherding consists in returning a manufactured value, as in failure-oblivious computing. The key idea of recovery shepherding is to track the manufactured values so as to see 1) whether they are passed to system calls or files and 2) whether they disappear. In BanditRepair's runtime patch model, 2 of our 5 kinds of decisions also use manufactured values. However, the key difference is that RCV reasons on each manufactured value in isolation. On the contrary, if an injected manufactured value triggers the creation of another one, what is called "cascaded errors" in RCV, BanditRepair will reason on the effect of their combination (by storing and keeping information about the actual valid sequence of decisions).
Jula et al. [11] presents a system to defend against deadlocks at runtime. The system first detects synchronization patterns of deadlocks, and when a pattern is detected, the system avoids re-occurrences of the deadlock with additional locks. The pattern detection is related to the detection of instances of the fault model under consideration. However, Jula et al. do not explore and compare alternative locking strategies. We note that the core algorithm of BanditRepair may be plugged on top of their system to explore the search space of locking sequences.
Hosek and Cadar [10] switch between application versions when a bug is detected. This technique can handle failures because some bugs disappear while others appear between versions. We can also imagine to plug BanditRepair on top of their system to systematically explore the sequences of runtime jumps across versions.
Assure [25] is a self-healing system based on checkpointing and error virtualization. Error virtualization consists of handling an unknown and unrecoverable error with error handling code that is already present in the system, yet designed for handling other errors. While Assure does runtime repair by opportunistic reuse of already-present recovery code, BanditRepair handles failures by modifying the state or flow according to a runtime patch model. Carzaniga et al. [4] repair web applications at runtime with a set of manually written, API-specific alternative rules. This set can be seen as a hardcoded set of runtime patches. On the contrary, BanditRepair does not require a list of alternatives but instead relies on an abstract runtime patch model that is automatically instantiated at runtime.
Berger and Zorn [3] show that it is possible to effectively tolerate memory errors and provide probabilistic memory safety by randomizing the memory allocation and providing memory replication. The work by Qin et al. [23] exploits a specific hardware feature called ECC-memory for detecting illegal memory accesses at runtime. The idea of the paper is to use the consistency checks of the ECC-memory to detect illegal memory accesses (for instance due to buffer overflows). Both techniques are semantically equivalent in the normal case. On the contrary, BanditRepair is meant to reason about the search space of execution modifications that are not semantically equivalent, where one taken decision can impact the rest of the computation.
Dobolyi and Weimer [8] present a technique to tolerate null dereferences. Using code transformation, they introduce hooks to a recovery framework. This framework is responsible for forward recovery in the form of creating a default object of an appropriate type or skipping instructions. Kent [12] proposes alternatives to null pointer exceptions: skipping the failing line, or exiting the method with a return when a null pointer exception is detected. In those two contributions, there is no reasoning on the search space of runtime repair, as done in BanditRepair. We note that our runtime patch model is inspired by theirs, while being richer (method return, variable reuse).
Conclusion
In this paper, we have presented BanditRepair, a runtime repair system inspired by bandit algorithms in machine learning. The system explores the search space of runtime repair decisions in a systematic manner. As a result, the system controls the trade-off between exploiting known runtime patches that are able to handle a failure and exploring new alternative runtime patches. We have evaluated the system with a protocol based on 16 field failures of Java applications, showing that the system uncovers and indeed explores the runtime repair search space.
This novel and original approach opens new research directions. We are in particular interested in bridging BanditRepair with checkpoint & rollback in order to perform large-scale parallel speculative execution. Also, we will apply BanditRepair with other runtime patch models: more specific ones such as arithmetic patch models, and more generic ones such as catching arbitrary exceptions.
Our future work is to explore varying ζ over time, as done in sophisticated bandit algorithms, as well as contextual multi-armed bandits where the context is the system state at the initial failure point.
Demography and clinical outcome of pulmonary tuberculosis in Kashmir; 2 year prospective study
Keywords: Tuberculosis; Demography; Clinical outcome
Abstract. Introduction: Tuberculosis (TB) is caused by Mycobacterium tuberculosis, primarily affecting the lungs. One-third of the world's population is currently infected with the TB bacillus. Tuberculosis is one of the three primary diseases of poverty. The risk of developing tuberculosis is higher in immunocompromised persons, and it is a chronic debilitating disease. Aims and objectives: To study the demographic features and clinical outcome of pulmonary tuberculosis. Materials and methods: A prospective study involving 72 pulmonary tuberculosis patients above 18 years. Results: In our study 45 were below the age of 40 years, with a mean age of 47 years ± 12.39 and a male to female ratio of 1.4:1. 61 patients were from rural areas and 18 were labourers. Two were HIV positive; fever was the main presenting complaint. Mean haemoglobin was 11.2 ± 2.48. Mean ESR was 45.2 ± 12.55. Bronchoscopy was done in 13 patients and 4 had bronchoalveolar lavage positive for AFB. All patients received a daily regimen of ATT. 4 were treated as Cat II; the rest were treated as Cat I. 64 patients (88.8%) were cured, 8 (11.1%) are on follow-up. No resistance was documented in any of the patients. Treatment-related complications were seen in 43 (30.8%). Conclusion: Tuberculosis most commonly occurs in younger patients, especially from rural areas. Due to the low prevalence of HIV in Kashmir, association with HIV was low. The commonest presentation was fever. Most patients had a good response to the daily regimen and the most common drug-related side effect was hepatitis. © 2016 The Egyptian Society of Chest Diseases and Tuberculosis. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Introduction
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis. The disease primarily affects the lungs. Overall, one-third of the world's population is currently infected with the TB bacillus. 5-10% of the people who are infected with TB bacilli (but who are not infected with HIV) become sick or infectious at some time during their life. People with both HIV and tuberculosis infection are much more likely to develop active tuberculosis. The risk of developing TB disease is also higher in persons with diabetes or other chronic debilitating diseases leading to an immunocompromised state, poor living conditions, tobacco smoking, pre-existing structural lung disease, etc. [1].
It can also affect other tissues of the body. The disease is usually chronic, with cardinal features like persistent cough with or without expectoration, intermittent fever, loss of appetite, weight loss, chest pain and haemoptysis [2]. In healthy people, infection with M. tuberculosis often causes no symptoms, since the person's immune system acts to "wall off" the bacteria. Tuberculosis is one of the three primary diseases of poverty, along with AIDS and malaria [3].
The following profile of patients should be screened for tuberculosis:
- Persistent cough of 2 weeks or more (or of any duration in HIV-positive patients).
- Fever for more than 2 weeks.
- Unexplained night sweats.
- Unexplained weight loss (more than 1.5 kg in a month).
Aims and objectives
To study the demographic features, clinical presentation and treatment outcome of pulmonary tuberculosis patients at SKIMS, a tertiary care hospital in Kashmir valley.
Materials and methods
A prospective study of tuberculosis patients was conducted in the Infectious Disease Department, Division of Internal Medicine, Sher-I-Kashmir Institute of Medical Sciences, Soura, Srinagar from June 2013 to May 2015. Tuberculosis patients who visited the infectious disease clinic on an OPD basis and patients who were admitted to the general medicine ward were taken up in this study. For each patient, the clinical presentation, socio-demographic profile and outcome of treatment were recorded, interpreted and analysed.
Inclusion criteria
All patients >18 years of age suspected of pulmonary tuberculosis were included.
Pulmonary tuberculosis was diagnosed on the basis of: breathlessness, cervical swelling, decreased appetite and abdominal pain occurred less frequently (Table 3).
With regard to CECT chest/abdomen findings, mediastinal lymphadenopathy was seen in 13 (32. Table 5. The treatment outcome of the patients in our study was as follows: 64 (88.8%) were cured and 8 (11.1%) are on treatment. 4 patients were treated as defaulters (Cat II WHO); the rest were treated as Cat I WHO with a daily regimen of ATT. In our study there was no relapse and there were no defaulters, as shown in Table 6.
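The reported proportions can be recomputed directly from the counts above; note that 64/72 is 88.9% when rounded, which the paper reports as 88.8% (truncated):

```python
# Recomputing the treatment-outcome proportions (64 of 72 cured,
# 8 of 72 still on treatment).
def pct(part: int, whole: int) -> float:
    return round(100 * part / whole, 1)

print(pct(64, 72))  # 88.9
print(pct(8, 72))   # 11.1
```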
Bronchoscopy was done in a total of 13 patients, out of which bronchoalveolar lavage was positive for AFB in 4 patients, and TBLB was done as described in Table 9.
Discussion
The present study was conducted in the Infectious Disease Division of the General Medicine Department, Sher-I-Kashmir Institute of Medical Sciences, Jammu and Kashmir. In this study 72 patients were enrolled; it was a prospective hospital-based study. The present study comprised 72 patients, of which 42 (58.3%) were males and 30 (41.6%) were females, a male to female ratio of 1.4:1. Most of our study group patients (62%) belonged to the age group of 18-40 years, with a mean age of 47 years ± 12.39 (Table 1), and most of the patients were labourers, 18 (25%), followed by students, 14 (19.5%). 14 (19.5%) were elderly and not educated, 13 (18%) were housewives, (14%) were employees and 3 (4%) were businessmen. A study conducted by Ogboi et al. [4] in Nigeria found that out of 694 cases, 200 (28.9%) were unemployed, 79 (11.4%) were students, 154 (22.1%) were artisans, and 168 (24.2%) were not educated. The results were also comparable with a study conducted by Gebretsadik Berhe et al. [5].
In our study the majority of the patients were from rural areas. A majority of patients from rural areas (64.7%) was also seen by Mengistu Endris et al. [6]. In our study, 26 (36%) were smokers and 46 (63.8%) were non-smokers (Table 1). In a study conducted by Jianming Wang et al. [7], the proportion of cigarette smoking was 54.6%, and in a study conducted by L. Burnet et al. in South Africa, 56% and 60% of patients with active and latent TB infection, respectively, were smokers [8].
In our study, 45 (63%) were married and 27 (36.0%) were unmarried, compared to a study done by Onyebuchi Stepphanie Ofoegbu et al. in Nigeria, where they found that 57% of the study population were married [9]. In our study, 2 (2.77%) were HIV positive. The seroprevalence of HIV among TB patients in a study conducted by Bahl, Singh et al. in Jammu and Kashmir was 1.6% [10]. A study conducted by Mubarik et al. [11] found that out of 1141 patients tested, 26 proved to have HIV-1 infection, with no case of HIV-2 detected. More than 42% were non-Kashmiris. Heterosexual transmission was the commonest, with married outnumbering unmarried. However, a study conducted by Acharyal [12] found that out of 250 cases of TB admitted, 25 cases (10%) were diagnosed as HIV positive.
Baseline investigations revealed a mean haemoglobin of 11.2 (range 2.3-17.6), a mean ESR of 45.2 (range 10-75), and a mean total leucocyte count of 7.03 (range 4.7-18.74). The neutrophil count ranged from 40.00% to 98.00% with a mean of 71.11 ± 11.63, while the lymphocyte count ranged from 2.00% to 58.00% with a mean of 19.98 ± 10.71. Our results are comparable with those of Singh, Ahluwalia et al. [15] and with a study conducted by Olaniyi et al. [16] in Nigeria.
The treatment outcomes of our study revealed that 64 (88.8%) cases were cured and 8 (11.1%) were still on treatment and under follow-up; there was no relapse and no defaulter in our study (Table 6). Mengistu Endris et al. found an overall five-year treatment success rate of 94.8%, slightly higher than ours, possibly because their study enrolled 400 patients [6]; the result is also comparable to the study conducted by Gebretsadik Berhe et al. [5].
In our study, 36 (50%) of the cases had an underlying comorbidity, of which hypertension, 15 (20.8%), and diabetes, 7 (9.7%), were the most common (Table 8). The study conducted by Nissapatorn et al. [18] found that the lungs were the most common location (91.4%) of tuberculosis in patients with diabetes.
Conclusion
A disturbing finding of this study is that tuberculosis haunts the younger, productive age group of 20-40 years (62%), with most patients coming from rural areas where poverty, ignorance and a lack of adequate health facilities still prevail. Owing to the low prevalence of HIV in Kashmir, the association of TB with HIV was low. The commonest presentations were fever, cough and hemoptysis. Most patients responded well to treatment; the most common drug-related side effect was hepatitis, which is quite prevalent in our community and needs to be assessed further in larger studies, especially with regard to genetic aspects.
The present study also revealed that noncompliance is not a pressing issue and that the level of drug resistance in our community is low. The response rate of the daily regimen appears better, although this needs confirmation in larger studies.
Mixed Lineage Kinase Phosphorylates Transcription Factor E47 and Inhibits TrkB Expression to Link Neuronal Death and Survival Pathways*
E47 is a basic helix-loop-helix transcription factor involved in neuronal differentiation and survival. We had previously shown that the basic helix-loop-helix protein E47 binds to E-box sequences within the promoter of the TrkB gene and activates its transcription. Proper expression of the TrkB receptor plays a key role in development and function of the vertebrate nervous system, and altered levels of TrkB have been associated with important human diseases. Here we show that E47 interacts with MLK2, a mixed lineage kinase (MLK) involved in JNK-mediated activation of programmed cell death. MLK2 enhances phosphorylation of the AD2 activation domain of E47 in vivo in a JNK-independent manner and phosphorylates in vitro defined serine and threonine residues within a loop-helix structure of AD2 that also contains a putative MLK docking site. Although these residues are essential for MLK2-mediated inactivation of E47, inhibition of MLKs by CEP11004 causes up-regulation of TrkB at a transcriptional level in cerebellar granule neurons and differentiating neuroblastoma cells. These findings allow us to propose a novel mechanism by which MLK regulates TrkB expression through phosphorylation of an activation domain of E47. This molecular link would explain why MLK inhibitors not only prevent activation of cell death processes but also enhance cell survival signaling as a key aspect of their neuroprotective potential.
Basic helix-loop-helix (bHLH) proteins are transcription factors that regulate gene expression to promote cell differentiation and tissue-specific cellular functions (1). For instance, NeuroD and MyoD are tissue-specific bHLH proteins involved in neurogenesis and myogenesis, respectively (2,3). These tissue-specific proteins form dimers with other ubiquitously expressed bHLH transcription factors called E proteins, which bind to the canonical E-box sequence CANNTG and include HEB, E2-2, and the E2A gene products E12 and E47 (1). The formation of active heterodimers can be inhibited by overexpression of Id family members, which bind to and sequester E2A proteins into transcriptionally nonfunctional complexes (4). There are two activation domains within E12 and E47 proteins: AD1, which is found between amino acids 1 and 99 (5,6), and AD2, which is between amino acids 325 and 432 (7). The AD2 domain contains a region between amino acids 345 and 408 consisting of a loop adjacent to an amphipathic α-helix, the loop-helix (LH) motif, which is conserved in yeast, Drosophila, and mammalian cells (7). c-Jun specifically represses the activity of the LH domain, and the selective recognition by c-Jun of this activation domain in pancreatic β cells suggests an important function of E2A proteins in regulating insulin control element-mediated expression (8). In addition, E2A transcriptional activity can be modulated by phosphorylation at different levels. It has been reported that the phosphorylation status of E47 alters the DNA binding ability of E47 as homodimers or heterodimers in different cellular contexts (9-11). Thus, p38 MAPK has been described to phosphorylate E47 at Ser-140 and promote MyoD/E47 association and muscle-specific gene transcription (12). On the other hand, it has also been observed that MEKK1 (MAPK/ERK kinase kinase 1) signaling through p38 leads to transcriptional inactivation of E47 (13).
In addition, Ser/Thr kinases 3pK and MAPK-activated protein kinase 2 have been shown to interact with E47 and repress its transcriptional activity (14). Finally, E47 proteasomal degradation is associated with increased ERK MAPK activity in aged B cell precursors (15,16).
MLKs are a family of serine/threonine kinases, all of which act as mitogen-activated protein kinase kinase kinases (17). The name "mixed lineage kinases" derives from the fact that of the 11 conserved subdomains found in all protein kinases, domains 1-8 of the MLK proteins resemble serine/threonine kinases, whereas regions 9-11 share sequence similarity with those of tyrosine kinases, such as the fibroblast growth factor receptor and Src (17)(18)(19). Eight mammalian MLKs have been identified and categorized into three subfamilies on the basis of domain organization and sequence similarity: MLKs, dual leucine zipper-bearing kinases (DLKs), and zipper sterile α-motif kinases (ZAKs). However, MLK2, MLK3, DLK, and leucine zipper kinase (LZK) are the only MLKs that have been studied in any detail at a biochemical level (20-22). Although MLK3 and LZK are expressed widely, MLK2 and DLK are restricted to brain. MLK1-4 share a high degree of homology in the N-terminal 500 amino acids, which incorporate several functional domains: an SH3 domain, a kinase catalytic domain, two leucine zippers, a proline-rich region, and the Cdc42/Rac interactive binding (CRIB) domain, which mediates binding to GTP-bound Cdc42 and Rac. In contrast to the N-terminal portion of these proteins, the C termini share little homology and vary greatly in size, possibly to mediate specificity and orchestrate interactions with various other proteins. MLKs are considered primarily as kinases that act upstream of JNKs. Nonetheless, mammalian MLK3 is localized to the centrosome, and its activity is enhanced during G2/M phase transition, when the JNK pathway remains inactive (23). It has been reported that silencing MLK3 can block mitogen-stimulated B-Raf activation and prevent serum- or Ki-Ras-stimulated cell proliferation (24). In addition, MLK2 has been involved in modulation of NeuroD and Alien transcription factors (25,26).
Thus, MLKs may be involved in very different processes depending on the particular molecular context.
Little is known about the molecular effectors of MLK activity in response to external or internal inputs, but the available evidence suggests that the activity of MLK proteins is controlled by phosphorylation (27). The most striking feature of all MLK proteins is their ability to dimerize, and MLK3 autophosphorylates following homodimerization (27) via a mechanism analogous to that of receptor tyrosine kinases (with which MLKs share some sequence homology). Activation of MLK3 by Cdc42 and Rac has been shown to be a pathway for JNK activation by these G-proteins (28). Two Cdc42-inducible autophosphorylation sites in MLK3 have been identified, but it is unclear how Cdc42 induces MLK3 phosphorylation. One speculation was that the association of the CRIB domain with a GTP-bound G-protein would displace an intramolecular SH3-mediated interaction and permit homodimerization via the leucine zipper and autophosphorylation.
The activation of the JNK pathway is critical for the naturally occurring neuronal cell death in development and may be important in the pathological neuronal cell death of neurodegenerative diseases. The small molecule MLK inhibitors CEP1347 and CEP11004 prevent the activation of the JNK pathway and consequently reduce neuronal cell death in many cell culture and animal models (29). Thus, the cell death program induced by nerve growth factor deprivation in sympathetic neurons is inhibited by MLK inhibitors as a direct consequence of inhibiting downstream kinases of the JNK pathway such as MKK4 and JNK and JNK-activated transcription factors such as c-Jun (30,31). Intriguingly, it has been reported that MLK inhibition by CEP11004 increases TrkB protein levels in cultures of cerebellar granule neurons (32). Because TrkB is a key tyrosine receptor for brain-derived neurotrophic factor involved in differentiation and survival in the nervous system (33), MLK inhibitors would also up-regulate cell survival circuits. Previous work in our laboratory has demonstrated that E47 and NeuroD bHLH proteins bind to p21 CIP1 and TrkB promoters linking differentiation and cell cycle arrest in SH-SY5Y neuroblastoma cells (34). In this study we show that MLK2 binds to and phosphorylates E47 at the AD2 transactivation domain and, as a consequence, represses TrkB promoter activity. Our results provide a molecular link that explains why MLK inhibition not only prevents activation of cell death processes but also enhances cell survival signaling.
Cell Culture-SH-SY5Y neuroblastoma cells, mouse embryonic fibroblasts (MEFs), HEK293T cells, and GP2-293T cells were grown at 37°C in a humidified atmosphere of 5% CO2 in Dulbecco's modified Eagle's medium (Invitrogen) supplemented with 20 units/ml penicillin, 20 μg/ml streptomycin, and 10% fetal bovine serum (Invitrogen). To induce cell cycle arrest and differentiation in SH-SY5Y cells, all-trans-retinoic acid (RA; Sigma) was added to a final concentration of 10 μM for 5 days. The medium was changed every 3 days. MEFs and SH-SY5Y cells were transfected with Lipofectamine 2000 (Invitrogen), and HEK293T and GP2-293T were transfected with polyethylenimine (Sigma-Aldrich), following standard protocols. Primary cultures of cerebellar granule neurons were prepared from postnatal day 7 Sprague-Dawley rat pups as described previously (32). The cells were dissociated with 1 mg/ml trypsin for 15 min prior to mechanical trituration, plated in poly-L-lysine-coated dishes at a density of 3 × 10^5 cells/ml, and maintained in basal Eagle's medium with 25 mM KCl, 10% dialyzed fetal bovine serum, 2 mM glutamine, 100 units/ml penicillin, and 100 μg/ml streptomycin. Aphidicolin (3.3 μg/ml) was added 1 day later to reduce the number of non-neuronal cells. The MLK inhibitor CEP-11004 was kindly provided by Cephalon, Inc. (West Chester, PA). Stock solutions of CEP-11004 (4 mM) were prepared in dimethyl sulfoxide, and a working 40 μM solution was prepared in 1% bovine serum albumin/basal Eagle's medium on the day of the experiment.
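The stock-to-working dilution described above is a simple two-step 1:100 series; a small arithmetic sketch (assuming the 400 nM final concentration that is applied to neurons later in the study):

```python
# CEP-11004 dilution series: 4 mM DMSO stock -> 40 uM working solution
# -> 400 nM final (the concentration used on cerebellar granule neurons
# elsewhere in the study).
stock_uM = 4000.0    # 4 mM stock in dimethyl sulfoxide
working_uM = 40.0    # working solution in 1% BSA/basal Eagle's medium
final_uM = 0.4       # 400 nM in the culture

stock_to_working = round(stock_uM / working_uM)   # 1:100 dilution
working_to_final = round(working_uM / final_uM)   # 1:100 dilution

print(stock_to_working, working_to_final)  # 100 100
```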
Tandem Affinity Purification-TAP-tagged N-terminal (1-220 aa) or C-terminal (100-651 aa) human E47 fragments or TAP control cDNAs were cloned into a retroviral expression vector (pBABE-puro) to obtain pCYC313, pCYC315, and pCYC316, respectively. Retroviruses were produced in GP2-293T packaging cells by transfection of pBABE-puro derivatives and helper pVSV-G. The cells were maintained at 32°C in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum. Supernatant containing viruses was recovered after 48, 72, and 96 h, filtered (0.45 μm), and used for SH-SY5Y infection with the addition of 5 μg/ml polybrene. After 2 days, the medium was removed, and infected cells were selected with 0.25 μg/ml puromycin. A pBABE-puro derivative expressing green fluorescent protein (pCYC186) was used to estimate infection efficiencies. Ten 150-mm plates (8 × 10^6 cells seeded) and twenty 150-mm plates (5 × 10^6 cells seeded) were used for TAP experiments with proliferating or RA-differentiated SH-SY5Y cells, respectively. The cells were lysed in TNT buffer (10 mM Tris-HCl, pH 8, 150 mM NaCl, 0.1% Triton X-100, 1 mM DTT) with 2 mM MgCl2 and protease and phosphatase inhibitors and centrifuged for 3 min at 200 × g. Triton X-100 was raised to 1% whenever the large C-terminal fragment of E47 was used. The obtained supernatants were first used for immunoprecipitation with 100 μl of IgG Sepharose 6 Fast Flow beads (Amersham Biosciences) for 2 h. After five washes, 300 units/ml AcTEV protease (Invitrogen) was added to the column, which was incubated for 2 h at room temperature to elute E47 in TNT buffer with 0.5 mM EDTA. The second immunoprecipitation was carried out with calmodulin beads (Stratagene) in TNT buffer with 1 mM magnesium acetate, 1 mM imidazole, and 2 mM CaCl2.
After five washes, calmodulin beads were suspended in 50 mM NH4HCO3, pH 8, and proteins were analyzed by nano-liquid chromatography coupled to nano-electrospray ionization and tandem mass spectrometry analysis.
Protein Phosphatase Treatment-HEK293T cells were co-transfected with E47 and MLK2 expression plasmids in 60-mm plates. One day after transfection, the cells were lysed in 200 μl of 20 mM HEPES-KOH, pH 7.9, 125 mM NaCl, 0.1% Nonidet P-40, 1 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, and protease inhibitors; sonicated on ice; and spun for 10 min at 12,000 × g. The supernatant was divided into 50-μl aliquots and treated with 4 units of SAP (shrimp alkaline phosphatase; Roche Applied Science) or 400 units of λ protein phosphatase (NEB) as directed by the supplier. Phosphatase inhibitors (1 mM sodium fluoride, 1 mM β-glycerophosphate, 5 mM sodium pyrophosphate, 1 mM EGTA) were added when indicated. The samples were incubated for 90 min at 37°C (SAP) or 30°C (λ phosphatase). To stop the reaction, 2× SDS-PAGE loading buffer was added, and the samples were analyzed by Western blot.
Immunoprecipitation and Western Blot Analysis-For immunoprecipitation experiments, HEK293T cells were harvested 24 h after transfection. The cells were resuspended in lysis buffer (20 mM HEPES-KOH, pH 7.9, 125 mM NaCl, 0.1% Nonidet P-40, 1 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, protease, and phosphatase inhibitors), sonicated on ice, and spun for 10 min at 12,000 × g. The supernatants were incubated with 50 μl of anti-FLAG M2-agarose beads (Sigma). The samples were rocked for 2 h at 4°C, and the beads were collected by centrifugation (1 min at 2000 × g), washed three times with 1 ml of cold lysis buffer, and finally resuspended in loading buffer, boiled, and loaded onto SDS-PAGE gels. Western blot analysis was performed as previously described (34) with anti-E47 (SC-763), anti-FLAG (Sigma), horseradish peroxidase-anti-horseradish peroxidase (Sigma), anti-phospho-Ser-63 c-Jun (Cell Signaling), and anti-JNK (Cell Signaling) antibodies used as recommended by the suppliers.
Luciferase Assays-Luciferase assays were performed essentially as described (34) with a dual-luciferase reporter assay system (Promega). Routinely, 1 μg of firefly luciferase reporter plasmid and 0.05 μg of Renilla luciferase pRL-TK control plasmid (Promega) were used to determine relative expression values. To assess effects caused by E47 or MLK2, 0.8 or 0.2 μg of each expression plasmid (or empty vector) was added to each transfection assay. Firefly/Renilla luciferase ratios obtained from a promoterless vector were subtracted as background, and the resulting values were normalized to control conditions.
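The normalization scheme described in this paragraph can be sketched as follows (all raw counts below are hypothetical, purely to illustrate the background subtraction and normalization to control; they are not data from this study):

```python
# Dual-luciferase normalization as described above: firefly/Renilla ratios
# from a promoterless vector are subtracted as background, and the result
# is expressed relative to the control condition.

def fr_ratio(firefly, renilla):
    """Firefly signal normalized to the Renilla transfection control."""
    return firefly / renilla

def relative_activity(sample, background, control):
    """Background-subtracted F/R ratio, relative to the control condition."""
    s = fr_ratio(*sample) - fr_ratio(*background)
    c = fr_ratio(*control) - fr_ratio(*background)
    return s / c

# Hypothetical raw counts as (firefly, renilla) tuples:
background = (500, 10000)     # promoterless reporter
control = (20500, 10000)      # reporter + empty vector
plus_e47 = (80500, 10000)     # reporter + E47 expression plasmid

print(round(relative_activity(control, background, control), 2))   # 1.0
print(round(relative_activity(plus_e47, background, control), 2))  # 4.0
```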
Production and Purification of Recombinant Proteins-All of the GST fusions were expressed in Escherichia coli BL21 (DE3) by adding 0.4 mM isopropyl-β-D-thiogalactopyranoside to cultures at a cell density of 0.3 A600 and subsequent incubation for 4 h at 30°C. The proteins were purified using glutathione-Sepharose beads as directed by the supplier (Amersham Biosciences) in 500 μl of lysis buffer containing 25 mM HEPES, pH 7.9, 0.3 M KCl, 1 mM EDTA, 0.1% Nonidet P-40, 10% glycerol, 1 mM DTT, and protease inhibitors. The elution buffer contained 40 mM reduced glutathione, 20 mM NaCl, 0.5 mM DTT, and 50 mM Tris-HCl, pH 8.
In Vitro Kinase Assay-Kinase reactions were carried out in 20 μl (20 mM HEPES, pH 7.5, 15 mM MgCl2, 2 mM DTT, 0.1 mM Na3VO4, 1 mM phenylmethylsulfonyl fluoride, protease, and phosphatase inhibitors) containing 25 μM nonradioactive ATP, 5 μCi of [γ-32P]ATP, 0.2 μg of GST-MLK2 (or GST-MLK2 KD), and 2 μg of the corresponding substrate proteins and were incubated for 20 min at 30°C. Phosphorylated products were separated by SDS-PAGE, stained with Coomassie Brilliant Blue to ensure equal loading of substrate proteins and MLK2 kinase, dried, and analyzed by autoradiography.
RESULTS
E47 Interacts with MLK2-To identify neuronal proteins that interact with transcriptional complexes containing E47, we carried out TAP experiments using E47 as bait in SH-SY5Y human neuroblastoma cells either proliferating or undergoing differentiation with RA (37). SH-SY5Y cells were infected with retroviral vectors expressing TAP-tagged human N-terminal (1-220 aa) or C-terminal (100-651 aa) E47 moieties and subsequently selected by puromycin treatment. Purified E47-TAP complexes were analyzed by nano-liquid chromatography coupled to nano-electrospray ionization and tandem mass spectrometry analysis. One of the identified peptides specifically present in the N-terminal E47-TAP pulldown was matched to MLK2, one of the mixed lineage kinases that had been initially identified as signaling effectors in the nervous system (17,38,39). To confirm and analyze further the interaction between E47 and MLK2, HEK293T cells were transfected with full-length E47 and different FLAG-tagged MLK2 constructs (Fig. 1A), and the corresponding cell extracts were used for immunoprecipitation experiments with anti-FLAG beads. Although the interaction of E47 with full-length FLAG-MLK2 was readily detected (Fig. 1B), the co-immunoprecipitation efficiency decreased significantly when only the C-terminal moiety of MLK2 was used, suggesting that the interacting region would lie in the N-terminal sequences of MLK2 that include the SH3 motif and the kinase domain. In agreement with this, the N-terminal region of MLK2 was sufficient to co-immunoprecipitate E47 at a similar relative efficiency compared with full-length MLK2. Finally, supporting a key role for the kinase activity of MLK2, a kinase-dead mutant of MLK2 was unable to co-immunoprecipitate E47 (Fig. 1B). Because MLKs undergo autophosphorylation to become fully active (17,19), this result suggests that autophosphorylation might also be important to acquire a competent conformation to interact with E47. NeuroD overexpression did not affect co-immunoprecipitation of E47 and FLAG-MLK2, suggesting that NeuroD would not mediate their interaction (25).
Indeed, MLK2 requires its LZ domain to interact with NeuroD (25), although it is not involved in its interaction with E47 (Fig. 1B).
MLK2 Enhances Phosphorylation of E47-A shift in the mobility of E47 was visible by Western blot of HEK293T cell extracts when wild-type MLK2, but not the kinase-dead mutant (KD), was co-expressed (Fig. 1B). Because MLK2 has been shown to be constitutively active when overexpressed (36), our observations would suggest that MLK2 phosphorylates E47, either directly or indirectly. The retarded E47 bands caused by MLK2 co-expression collapsed into a faster migrating band after treatment of cell extracts with shrimp alkaline phosphatase (Fig. 1C), demonstrating that MLK2 enhances phosphorylation of E47. Moreover, treatment of cell extracts with λ protein phosphatase also revealed a basal phosphorylation status for E47 in HEK293T cells, which likely reflects the demonstrated participation of other kinases in E47 phosphorylation (9-16). A kinase-dependent shift in E47 mobility was also observed in extracts from SH-SY5Y cells transfected with MLK2, either cycling or undergoing differentiation with RA (Fig. 1D). Finally, phosphorylation of E47 was inhibited when MLK2-expressing cells were treated with CEP11004, a specific MLK inhibitor (Fig. 1E). In summary, our results indicate that MLK2 induces E47 phosphorylation in vivo.
MLK2 Inhibits E47 Transactivation Activity on the TrkB Promoter-Because MLK2 had been implicated in NeuroD modulation (25) and nuclear receptor co-repression (26), we wanted to study the effects of MLK2 overexpression on E47 transcriptional activity. We had shown that E47 is able to activate the TrkB promoter through E-box sequences important for achieving full transcriptional activity during SH-SY5Y differentiation (34). Thus, we used our previously established luciferase-reporter assay to analyze the role of MLK2 on E47 transcriptional activity. MLK2 was able to inhibit most effects caused by E47 co-expression on TrkB-promoter reporter expression in both proliferating and RA-treated cells (Fig. 2), whereas it had no significant effect on corresponding reporter basal levels. This inhibition was due neither to lower E47 protein levels in the presence of MLK2 (Fig. 1D) nor to a change in E47 intracellular localization, which was mostly nuclear under these conditions (data not shown). Moreover, the MLK2 kinase-dead mutant was unable to exert this inhibition (Fig. 2), which reinforced the notion that MLK2 regulates E47 through phosphorylation. Interestingly, MLK2 was not able to revert E47-mediated activation of the p21 CIP1 promoter (data not shown), suggesting that MLK2 would exert specific roles to regulate differentiation and survival, but not cell cycle events.

Journal of Biological Chemistry, November 20, 2009, Volume 284, Number 47, page 32983.
MLK2 Phosphorylates Ser/Thr Residues in the LH Motif of E47-A close inspection of the shift caused by MLK2 co-expression in TAP-tagged full-length and partial degradation products (data not shown) suggested that the major phosphorylation events should take place within the N-terminal 460 aa of E47 (Fig. 3A). By means of an ordered deletion analysis (Fig. 3B), we delimited a region spanning amino acids 331-460 in E47 that corresponded to the AD2 activation domain and included the LH motif previously identified (7). Because MLKs have been shown to recognize sequence-specific 24-aa docking sites in downstream kinases (40), we scrutinized the identified region that was highly phosphorylated by MLK2 and found a significant similarity to the MLK docking site between residues 387 and 408 of E47 (Fig. 3A). On the other hand, although MLKs show similarities to serine-threonine and tyrosine kinases, so far they have been shown to phosphorylate only serine and threonine residues in downstream kinases (17,19). Thus, we searched for serine and threonine residues in E47 near the putative MLK docking site that are conserved from fish to human. Although there are no strictly conserved serines or threonines C-terminal of the docking site, there are many conserved Ser/Thr residues between positions 331 and 386, with Ser-379 being the closest to the MLK docking sequence (Fig. 4A). Indeed, the S379A mutation abolished most of the shift caused by MLK2 on E47 in both HEK293T (Fig. 4B) and SH-SY5Y (Fig. 4C) cells. Although MLKs do not require a proline next to the target Ser/Thr residue (36,41,42), Ser-379 is followed by a proline, so we also screened for additional (S/T)P sites in a nearby serine-rich patch between amino acids 350 and 359 (Fig. 4A). We also took into account the fact that MLKs likely target (S/T)XXXS as a consensus sequence for autophosphorylation.
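The motif searches described here — (S/T)P sites and the putative (S/T)XXXS autophosphorylation consensus — are straightforward to express as overlapping regular-expression scans. A minimal sketch on a made-up toy peptide (not the real E47 AD2 sequence):

```python
import re

def scan(seq, pattern):
    """Return 1-based positions of (possibly overlapping) motif matches."""
    # Zero-width lookahead patterns let finditer report overlapping hits.
    return [m.start() + 1 for m in re.finditer(pattern, seq)]

toy = "SPAASAATPS"  # hypothetical peptide, NOT the E47 AD2 sequence

stp_sites = scan(toy, r"(?=[ST]P)")        # Ser/Thr followed by Pro
stxxxs_sites = scan(toy, r"(?=[ST]...S)")  # (S/T)XXXS consensus

print(stp_sites)     # [1, 8]
print(stxxxs_sites)  # [1]
```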
Whereas the most prominent shift was still observed in a triple S352A/T355A/S359A mutant, the smear seen with the wild-type peptide was lost in this triple mutant (Fig. 4B), suggesting that MLK2 phosphorylates additional sites beyond Ser-379. Finally, mutating Ser-341 only slightly affected the complex mobility shift caused by MLK2.
Two lines of evidence indicated that MLK2 enhances E47 phosphorylation independently of the downstream kinase JNK. First, co-transfection of a dominant-negative JNK reduced MLK2-driven phosphorylation of c-Jun to 18%, whereas that of E47 only decreased to 86% (Fig. 4D). Second, MLK2 enhanced phosphorylation of E47 with a similar efficiency in either wild-type or JNK1/JNK2 double-null MEFs (Fig. 4E). In summary, our results indicate that MLK2 induces E47 phosphorylation in vivo in a JNK-independent manner.
To test whether MLK2 is able to phosphorylate E47 directly, we carried out in vitro kinase assays with an N-terminal recombinant fragment of MLK2 (Fig. 4F), which exhibits sufficient solubility in E. coli for efficient purification under native conditions (data not shown). We found that this fragment of MLK2, but not a kinase-dead mutant, strongly phosphorylated the above-mentioned E47 peptide spanning residues 331-460 (Fig. 4G). Although the S379A mutant was still efficiently phosphorylated by MLK2, phosphorylation of the S/T5A quintuple mutant was severely reduced (Fig. 4G), indicating that the identified Ser/Thr residues are all important phosphorylation targets of MLK2. Although MLK2 phosphorylated very efficiently a shorter peptide that only contained Ser-379, phosphorylation of a S379A mutant in this shorter context was undetectable. In summary, MLK2 phosphorylates in vitro at least five Ser/Thr residues within the LH motif that plays a key role as a transcriptional activation domain of E47 (7).
Transcriptional Repression by MLK2 Requires the Ser/Thr Target Residues within the LH Motif of E47-To test whether MLK2 inhibits the transcriptional activation domain of E47 by phosphorylation of the identified target residues within the LH motif, we co-expressed different E47 mutants with wild-type and kinase-dead MLK2 proteins and analyzed TrkB promoter-driven expression by the aforementioned luciferase assay (Fig. 5A). Although the single S379A mutant was not significantly resistant to MLK2-mediated inhibition, expression produced by a quadruple mutant (S352A/T355A/S359A/S379A, referred to as S/T4A) was significantly higher in the presence of MLK2. Moreover, expression levels attained by the S/T5A quintuple mutant were almost unaffected by co-expression of MLK2, indicating that MLK2-mediated repression requires the same Ser/Thr residues in E47 that are phosphorylated both in vitro and in vivo by MLK2. The different E47 mutants stimulated TrkB promoter-driven expression very similarly in the absence of MLK2, which suggests that the serine/threonine-to-alanine substitutions introduced do not cause important functional or structural alterations in E47. In summary, these results support the notion that MLK2 inhibits E47 transcriptional activity by direct phosphorylation of Ser/Thr residues within the LH motif.
MLK Inhibits TrkB Expression in Neuronal Cells-Inhibition of MLKs by CEP11004 has been shown to increase TrkB protein levels in cerebellar granule neurons (32), and our results suggest that, because MLK2 inhibits E47 activity, MLKs could regulate TrkB expression at a transcriptional level. To test the functional relevance of MLKs on endogenous TrkB transcription in a neuronal differentiation paradigm, we used SH-SY5Y cells in the presence or absence of RA to stimulate or not E47-driven transcription, respectively (34), and samples were collected from three independent experiments to determine TrkB and TFRC (as control) mRNA levels by reverse transcription real-time PCR. Relative TrkB expression levels in the absence of RA were very low and insensitive to MLK inhibition (data not shown), but, as expected from our previous work (34), they were easily detected under RA-elicited differentiation conditions. More importantly, TrkB mRNA expression levels showed a clear increase in cells treated with CEP11004 (Fig. 5B). Similar results were obtained in cerebellar granule neurons. Thus, TrkB promoter-driven expression was clearly stimulated by the addition of CEP11004, even under MLK2 overexpression conditions (Fig. 6A). Because MLKs require dimerization for autophosphorylation and full activation (17), we used the MLK2 kinase-dead protein as a dominant mutant and found that it also stimulated TrkB promoter-driven expression (Fig. 6A). Finally, MLK inhibition by CEP11004 caused a clear increase of TrkB mRNA levels in cerebellar granule neurons as determined with two different probes targeted to 5′ and 3′ sequences of the TrkB mRNA (Fig. 6B). These results demonstrate that the MLK inhibitor CEP11004 increases expression of TrkB at the mRNA level and support our findings that point to MLK as a key repressor of E47-driven expression in neurons.

FIGURE 4. MLK2 phosphorylates Ser/Thr residues within the LH motif of the AD2 activation domain. A, serine or threonine residues mutated in the AD2 activation domain of E47 are indicated in red. E47tp1 and E47tp2 indicate the extent of target peptides analyzed. B and C, effects of serine or threonine mutations to alanine on E47 phosphorylation by MLK2. TAP-tagged E47 constructs were transfected in the presence or absence of MLK2 in HEK293T (B) or SH-SY5Y (C) cells either cycling (cyc) or RA-treated (RA), and E47 mobility was analyzed by Western blot. The quintuple S/T5A mutant contains the following changes: S341A, S352A, T355A, S359A, and S379A. D, E47 phosphorylation by MLK2 and JNK in HEK293T cells. Cell extracts from HEK293T transfected with E47, MLK2, and a dominant-negative form of JNK (dnJNK) were used to analyze phosphorylation of E47 as a shift in mobility by Western blot. Ser-63-phosphorylated c-Jun is shown as control for phosphorylation efficiencies, which are shown as relative percentage values below the corresponding lanes. E, E47 phosphorylation by MLK2 in JNK-deficient MEFs. Western blot analysis of E47 in the presence of transfected MLK2 in JNK+/+ and JNK−/− (JNK1/JNK2 double-null) MEFs. JNK proteins were also detected as control. F, analysis of E47 full-length phosphorylation by different MLK2 constructs. E47 was transfected in HEK293T together with constructs carrying different fragments of MLK2: full-length, N-terminal 1-496 aa or 1-757 aa fragments, a C-terminal 327-954 aa fragment (C), and a kinase-dead (KD) mutant. E47 phosphorylation was assessed by Western blot. G, MLK2 in vitro kinase assay. Bacterially expressed E47tp1 and E47tp2 target peptides, as well as their S379A or S/T5A mutants, were purified as GST fusions and subjected to in vitro kinase assays with recombinant N-terminal MLK2 (1-496 aa). Corresponding autoradiographies are shown. wt, wild type; KD, kinase-dead.
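Relative mRNA levels of the kind reported here (a target transcript normalized to a reference such as TFRC or GAPDH) are commonly computed with the 2^-ΔΔCt method — one common option, not necessarily the one used in this study. A minimal sketch with hypothetical Ct values:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative quantification by the 2^-ddCt method against a reference gene."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Hypothetical Ct values: target vs. reference, inhibitor-treated vs. untreated.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```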
DISCUSSION
In a previous study we reported that the bHLH protein E47 binds to E-box sequences within the promoter of the TrkB gene and activates its transcription (34). Here we show that MLK2 kinase, whose activity has been involved in the regulation of the JNK pathway (17,39), represses TrkB promoter activity by phosphorylating E47 in a JNK-independent manner. In particular, we show that phosphorylation of Ser/Thr residues within the LH motif of the AD2 activation domain inhibits E47 as a transcriptional activator of TrkB. Moreover, inhibition of MLK activity by CEP11004 (32) increases TrkB expression levels in cerebellar granule neurons and differentiating SH-SY5Y neuroblastoma cells. These findings point to a novel molecular mechanism by which MLK regulates TrkB expression by phosphorylating the E47 transcription factor.
The molecular mechanisms that control the transcriptional activity of E47 are not fully understood but appear to involve both positive and negative regulatory factors. Previous work has demonstrated that p38 MAPK activates muscle-specific gene transcription. Phosphorylation of E47 at Ser 140 by p38 MAPK enhances MyoD/E47 heterodimerization and subsequent binding to E-box sequences (12). Other results may suggest that p38 MAPK could exert a negative effect by phosphorylating other residues in the N terminus of E47 (13). Although these post-translational modifications would involve the N-terminal sequences of E47 that contain the AD1 activation domain, studies conducted by Quong et al. (7) and Aronheim et al. (5) identified an LH region within the AD2 activation domain capable of forming a loop structure between amino acids 345 and 394, and an amphipathic α helix between amino acids 395 and 408. Interestingly, this region showed different transactivation properties depending on the cell line used (5), suggesting that it would likely be regulated by lineage-specific co-activators and/or co-repressors. As a relevant example, c-Jun has been reported to inhibit the transactivation potential of the AD2 activation domain of E47 (8). Although the mechanism by which c-Jun represses E47 transactivation activity remained to be elucidated, these authors demonstrated that serines at positions 63 and 73 in c-Jun were not involved in the repression mechanism. Phosphorylation of these serine residues by JNK is essential for c-Jun activation (44), but not for binding to activators such as ATF-2 (45), which would rule out the possibility that the MLK2 effect on E47 activity is mediated by downstream activation of JNK.

[Figure 5 legend, continued] The mean values and standard errors of three independent determinations from three independent experiments are shown. Significant differences (α = 0.05) with control are indicated with an asterisk.

FIGURE 6. MLK2 inhibits TrkB expression in cerebellar granule neurons. A, MLK2 inhibition up-regulates TrkB promoter-driven expression. Cerebellar granule neurons cultured for 6 days were transfected with a TrkB-promoter luciferase reporter and either wild-type or a kinase-dead (KD) mutant of MLK2 and treated or not with 400 nM CEP11004 for 18 h after transfection to determine luciferase activities. Mean values and standard errors of six independent transfection experiments are plotted. Significant differences (α = 0.05) with control are indicated with an asterisk. B, effect of MLK inhibition on TrkB expression in cerebellar granule neurons treated or not with 400 nM CEP11004 for 24 h. Reverse transcription real-time PCR analysis of TrkB was done with two different probes targeting 5′ and 3′ sequences of the TrkB mRNA, and the results obtained were made relative to glyceraldehyde-3-phosphate dehydrogenase as control. Mean values and standard errors of six independent determinations are shown. Significant differences (α = 0.05) with control are indicated with an asterisk. C, scheme showing the molecular link established by E47 between the JNK and phosphatidylinositol 3-kinase pathways to determine the cell toward death or survival. When trophic signals are high, E47 would provide a high level of Trk receptors to activate cell survival pathways. On the contrary, in the presence of apoptotic stimuli, increasingly active MLK proteins would lead to JNK-mediated cell death and, in addition, limit E47-dependent Trk expression to abate cell survival signals. Thus, E47 inhibition by MLK appears as an important link that would negatively coordinate cell death and survival pathways in neurons.
MLK2 is predominantly expressed in brain, and it is localized in both the cytoplasm and nucleus (26,46). Previous studies have reported interactions between MLK2 and components of transcriptional complexes. Thus, MLK2 has been shown to phosphorylate and enhance the silencing activity of Alien, a co-repressor for nuclear receptors. The authors suggested that this mechanism would represent a link between MLK2 and transcriptional repression of target genes during neuronal differentiation (26). On the other hand, it has been reported that MLK2 interacts with and phosphorylates NeuroD via the Huntingtin protein in mouse N2A neuroblastoma cells (25). Unfortunately, because NeuroD activity was assayed very indirectly as the fraction of Xenopus embryos displaying ectopic neurons after mRNA injection, a direct consequence on the transactivation properties of NeuroD was not demonstrated. In any event, the domains of MLK2 required for binding NeuroD and E47 are different. Although NeuroD requires the LZ domain of MLK2 (25), it is totally dispensable for E47 interaction.
Studies with neuronal models have implicated MLKs as mediators of apoptosis induced by trophic factor deprivation (31,38). On the one hand, perhaps with the participation of Cdc42, MLK would sense trophic factor limiting conditions to activate JNK and induce c-Jun-dependent and -independent processes leading to apoptosis (31). On the other hand, Wang et al. (32) have shown that the MLK inhibitor CEP11004 increases TrkA and TrkB protein levels in both central and peripheral nervous systems, this activation process being important for phosphatidylinositol 3-kinase-mediated long term survival. Thus, the authors of this work proposed that MLK inhibitors would not only inhibit the JNK apoptotic pathway but, as they raised Trk receptor protein levels, they would also increase cell responsiveness to trophic factors. This dual effect could explain the protective action of MLK inhibitors in models of neuronal injury and neurodegeneration such as Alzheimer's disease, where basal forebrain cholinergic neurons down-regulate TrkA expression in aged rats (47). We show here that MLK2 phosphorylates E47 and represses its activity as a transcription factor, thus reducing TrkB expression levels as a direct consequence. Our results provide a direct molecular link between the JNK and phosphatidylinositol 3-kinase pathways and would explain the effects caused by MLK inhibitors on Trk receptor levels and long term survival (see scheme in Fig. 6C).
It has been shown that the interaction between the SH3 domain of MLK2 and the proline-rich N terminus of Huntingtin inhibits MLK2 activity in HEK293T and HN33 cells (48), and it has been proposed that, in normal cells, MLK2 would be sequestered in an inactive form by binding of its SH3 domain to the N terminus of Huntingtin (17,48). In Huntington disease, the polyglutamine-expanded mutant versions of Huntingtin would fail to bind MLK2 and, as a consequence, cause JNK activation and apoptosis (48). Interestingly, Ginés et al. (49) have shown that levels of TrkB receptor protein are diminished in a knock-in mouse model of Huntington disease. This observation could also be explained taking into account that, as we show here, unwanted MLK activation should inhibit E47-dependent expression of TrkB, raising the question as to whether enforced expression of TrkB could delay Huntington disease progression.
Neuron survival during development of the nervous system is at least in part determined by the limited availability of target-derived growth factors, which act to inhibit programmed cell death (50). Then, to ensure that apoptosis is only triggered once trophic factor levels fall below a precise threshold, mechanisms should exist that add robustness and convert linear response systems into switch-like devices. In our scheme (Fig. 6C), when trophic signals are high, E47 would ensure plentiful expression levels of Trk receptors and positively contribute to activate cell survival pathways. On the contrary, when trophic factor availability becomes limiting, increasing levels of active MLK by apoptotic signals would raise JNK-mediated effects to boost cell death and, in parallel, reduce E47-dependent Trk expression levels to decrease cell survival signals even further. Thus, E47 would serve as a link between phosphatidylinositol 3-kinase and JNK pathways to promptly incline neurons toward survival or death.
|
2018-04-03T02:42:22.483Z
|
2009-09-28T00:00:00.000
|
{
"year": 2009,
"sha1": "fce4af552959ee49ce548adfddcd992faef32292",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/284/47/32980.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "5524b7fd5dd061a333cc591b95d718495dbdf09f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
267626237
|
pes2o/s2orc
|
v3-fos-license
|
Care quality and satisfaction at the cancer hospital – a questionnaire study of older patients with cancer and their family members
Background The unique life situations of older patients with cancer and their family members require that health care professionals take a holistic approach to achieve quality care. The aim of this study was to assess the perceptions of older patients with cancer and family members about the quality of care received and evaluate differences between their perceptions. A further aim was to examine which factors explain patients’ and family members’ levels of satisfaction with the care received. Methods The study was descriptive and cross-sectional in design. Data were collected from patients (n = 81) and their family members (n = 65) on four wards in a cancer hospital, using the Revised Humane Caring Scale (RHCS). Data were analysed using descriptive statistics, crosstabulation, Wilcoxon signed rank test, and multivariable Analysis of Covariance (ANCOVA). Results Family members had more negative perceptions of the quality of care than patients did. Dissatisfaction was related to professional practice (p < 0.001), interaction between patient and health care professionals (p < 0.001), cognition of physical needs (p = 0.024), and human resources (p < 0.001). Satisfaction with overall care was significantly lower among those patients and family members who perceived that they had not been involved in setting clear goals for the patient's care with staff (p = 0.002). Conclusions It is important that older patients with cancer and family members receive friendly, respectful, individual care based on their needs and hopes, and that they can rely on professionals. Health care professionals need more resources and education about caring for older cancer patients to provide quality care.
Background
It is estimated that a quarter of the European population will be over 65 years of age by 2050 [1]. Globally, the incidence of new cancer cases is rapidly growing, at 18.1 million in 2020 and expected to reach 28-30 million by 2040 [2,3]. On average, half of these will be diagnosed in people over 65 years of age [3]. This age group is very heterogeneous in terms of morbidity and cognitive disorders. Some older patients with cancer have complex needs, while others stay in good health [4,5]. Addressing their various unique needs is costly, especially as resources are often inadequate in cancer care [5][6][7].
The main need of older patients with cancer is the maintenance of their independence and freedom. Physical mobility is important for performing the activities that give them joy and satisfaction in life [8]. In addition, needs based on the individual's personal circumstances, preconceptions, and knowledge about cancer affect their experience regarding the quality of care [6]. Furthermore, satisfaction is an important indicator of care quality, and measuring it provides insights into how well patients' care needs have been fulfilled [7].
Quality of care is a human right regardless of age. Various countries in Europe have charters, specific laws, or administrative regulations about patients' rights, most of which mention quality of care [9]. The World Health Organization (WHO) and the Institute of Medicine (IOM) define quality of care in terms of effectiveness, safety, patient-centredness, timeliness, equity, integration, and efficiency [7,10]. In recent years, this definition has expanded to focus more on patient perspectives, psychological aspects, and care planning, as well as on meeting the needs of patients and their families [11], since older patients with cancer and their family members together face the challenges of different stages of cancer treatment, including other challenges related to health and everyday life [6]. Nevertheless, while family members of older patients have expressed their will to be involved in care during the patient's hospital stay, they have faced several challenges [12]. It has previously been reported that older patients with cancer have fewer unmet needs than younger patients [13]; however, they may have difficulties expressing their needs to health care professionals [14], or they may hesitate to ask for help when needed [6].
While there is some literature exploring the experiences of older patients and their families in acute care [13], there is little research that focuses specifically on older patients with cancer, and this research has been limited to the contexts of ambulatory [15] and palliative care [16]. Providing quality care is a challenge, and it is important to understand the perceptions that older patients with cancer and their families have of both the quality of care and their satisfaction with it [5,17]. Therefore, the purpose of this study is to focus on patients' and family members' individual perspectives on the process of care provision, as this has been found to be a better measure of care quality and satisfaction than the outcomes of care [7].
Aim
This study aims to assess how older cancer patients and their family members perceive the quality of the care they are receiving and to evaluate the differences between their perceptions. A further aim is to examine which factors explain patients' and family members' levels of satisfaction with the care received. It thus addresses the following research questions:

1. How do older cancer patients and their family members perceive the quality of care?
2. Are there differences between the perceptions of older cancer patients and those of their family members with regard to the quality of care?
3. Which factors explain patients' and family members' satisfaction with the care received?
Design, setting, and sample
This study is quantitative, descriptive, and cross-sectional in design, and the participants are part of a larger study. Data were collected from a 78-bed cancer hospital providing acute care to older patients with cancer. The inclusion criteria were as follows: patients were aged 65 or more, had cancer, and were fluent in Finnish. The inclusion criteria for family members were that they were participating in the patient's care, including at home, and were fluent in Finnish. Age 65 was selected as the threshold age for patients in order to align with most wealthy countries' definitions of 'old' [18]. The minimum sample size to detect the expected effect size of 0.3 would be 111. We did not achieve this, but the response rates (40.5% for patients, 32.5% for family members) were deemed to be acceptable. We delivered 200 questionnaires for patients and 200 questionnaires for family members to the wards, and 81 patients and 65 family members completed the questionnaires.
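The target of 111 participants for an expected effect size of 0.3 comes from a power calculation, but the paper does not state which test, significance level, or power it assumed. As a hedged illustration of how such a figure is derived, the sketch below uses the standard normal-approximation formula for a two-group comparison of means with a conventional two-sided α = 0.05 and 80% power; those parameters are my assumptions, so the result will not match 111 exactly.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means (Cohen's d). The alpha and power defaults are
    assumptions; the study does not report which values it used."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-sided alpha
    z_power = norm.ppf(power)          # quantile corresponding to desired power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

# A small effect size (d = 0.3) demands a fairly large sample:
print(round(n_per_group(0.3)))  # ≈ 174 per group under these assumptions
```

Different design assumptions (a paired test, a different power level, or a correlation-based effect size) give quite different totals, which is presumably why the study's target of 111 differs from this two-group figure.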
Data collection
Data collection was carried out between October 2016 and May 2018 using convenience sampling. Recruitment was organized by the researcher, nurses, and two contact persons in the cancer hospital. Information about the study protocol was presented to the nurses during meetings in October 2016 and January 2018. Written information about the study protocol was also provided, including the researcher's contact information for any enquiries.
Paper questionnaires were handed to patients on the ward. They could either complete the questionnaire on the ward or at home after discharge, returning it to the researcher using the pre-paid envelope provided. The patients were also given a questionnaire to pass on to a family member. Written information about the study was provided alongside the questionnaire, and written informed consent was sought from both patients and family members.
Instruments
The Revised Humane Caring Scale (RHCS) was used to measure quality of care and satisfaction. This includes 42 items, organized under six headings, which are measured using a 5-point Likert scale from 1 (full disagreement) to 5 (full agreement). The headings consist of professional practice (17 items), information and participation in own care (11 items), cognition of physical needs (4 items), human resources (3 items), pain and apprehension management (4 items), and interdisciplinary collaboration (3 items). The instrument also includes two outcome variables: 'A clear goal for care is set by me, family members and staff, together' and 'I am satisfied with my care' for patients, and 'A clear goal for care is set by the patient, me as a family member, and staff, together' and 'I am satisfied with my family member's care' for family members [19]. The RHCS has been used since the 1990s in various nursing contexts. Cronbach's alpha values were between 0.640 and 0.937, and they have previously been reported as between 0.775 and 0.970 [20,21].
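The internal-consistency figures quoted above are Cronbach's alpha coefficients. As a rough illustration of how such a coefficient is computed from raw Likert responses, here is a minimal sketch on made-up data; the study's actual RHCS responses are not public, so the response matrix below is purely hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variance_sum / total_score_variance)

# Hypothetical 5-point responses from six respondents to a 4-item subscale
demo = [[5, 4, 5, 5], [4, 4, 4, 3], [2, 3, 2, 2],
        [5, 5, 4, 5], [3, 3, 3, 4], [4, 5, 5, 4]]
print(round(cronbach_alpha(demo), 2))  # high alpha: these toy items co-vary strongly
```

Values near the 0.9 end of the study's reported range indicate that items within a subscale largely measure the same construct.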
Data analysis
Descriptive statistics (frequencies and percentages) were used to describe older cancer patients and their family members, while means and standard deviations (SD) were used to represent continuous variables (Table 1). In the analysis, the respondents' ages were organized into categories, and the domains of the RHCS and the outcome variables were transformed from 5-point Likert scale responses into dichotomous responses (disagree = fully disagree, disagree, and cannot say/unsure vs. agree = fully agree and agree). Cross-tabulation and a chi-square test were used to detect differences between older patients with cancer and family members regarding the quality of care (Table 2).
The normal distribution of the subscales of the RHCS was tested using the Kolmogorov-Smirnov test. Because this test showed that the subscales were not normally distributed, the non-parametric Wilcoxon signed-rank test was used to detect differences between older patients with cancer and family member pairs regarding the quality of care (Table 3). ANCOVA was used to examine the differences in the mean values for satisfaction with care, adjusting for the effect of other variables. To do this, the data for older patients with cancer and family members were combined into one dataset. Satisfaction with care was set as the dependent variable, the RHCS subscales were applied as covariates, and background characteristics and the outcome variable 'A clear care goal is set together' were applied as independent variables. From the data, seven models of the satisfaction with care of older patients with cancer and family members were constructed (Table 4).

Table 2 Comparison between older patients with cancer (n = 81) and family members' (n = 65) perceptions of care quality. Disagree = fully disagree + disagree + cannot say/unsure vs. agree = fully agree + agree. a Answers depending on who is completing the questionnaire: I/my (patient), the patient (family member)
For the analysis of covariance, the normal distribution of the residuals was checked by visual inspection of the histograms. For all of these analyses, the level of significance was set at 0.05 [22], and the data were analysed using SPSS ® version 27.00 for Windows [23].
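The study ran its analyses in SPSS, but the dichotomisation, chi-square comparison, and paired Wilcoxon test described above can be sketched in Python. The scores below are simulated (the real RHCS data are not available); only the 56-pair sample size and the direction of the patient-family difference are taken from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency, wilcoxon

rng = np.random.default_rng(42)
n_pairs = 56  # patient-family member pairs, as in the study

# Simulated 5-point Likert subscale scores; family members are drawn
# slightly lower, matching the direction reported in the results.
patients = rng.integers(3, 6, size=n_pairs).astype(float)
family = np.clip(patients - rng.integers(0, 2, size=n_pairs), 1, 5)

# Dichotomisation used for cross-tabulation:
# agree = fully agree (5) or agree (4); everything else = disagree
table = np.array([[np.sum(patients >= 4), np.sum(patients < 4)],
                  [np.sum(family >= 4), np.sum(family < 4)]])
chi2, p_chi, dof, _ = chi2_contingency(table)

# Paired, non-normally distributed scores -> Wilcoxon signed-rank test
stat, p_wilcoxon = wilcoxon(patients, family)
print(f"chi-square p = {p_chi:.3f}, Wilcoxon p = {p_wilcoxon:.4f}")
```

The Wilcoxon test is the appropriate choice here because the pairs are dependent and the Kolmogorov-Smirnov check ruled out normality.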
Results
The mean age (± SD) of the 81 older patients with cancer was 72 ± 5.42 years, and that of the 65 family members was 63 ± 12.9 years. More than half of the patients (53%) and two thirds of the family members (71%) were female. A majority of the older patients with cancer (75%) and family members (86%) had some professional education. Perceived health was reported as good by 70% of family members but only 28% of patients (Table 1).
Perceptions of care quality of older patients with cancer and family members
The perceptions of older patients with cancer and family members about the quality of care were good: the mean value of their responses to the subscales of the RHCS varied between 2.97 and 4.95 (range: 1 = lowest, 5 = highest). Patients attached most value to their treatment being carried out with respect (4.95) and friendliness (4.95) and to receiving help when needed (4.91). However, patients were less satisfied with their participation in care planning (3.75) and receiving pain relief through non-pharmacological methods (2.92). Family members perceived that their loved one's pain was taken seriously (4.71), they were treated with friendliness (4.62), and their care was safe (4.60). They were dissatisfied with the use of non-pharmacological pain relief (2.97). They also felt that patients' fears were not alleviated (3.54) and that they, as family members, did not have enough opportunities to take part in care planning (3.49).
The results revealed some differences between the assessments of older patients with cancer and of family members. For over half of the RHCS items, patients and family members gave statistically significantly different responses about the domains of care quality. In all of these instances, patients assessed their care more positively than family members did. The statistically significant results are presented in Table 2.
Differences in perceptions about the quality of care between patient and family member pairs
In total, 56 older patients with cancer and family member pairs (n = 112) were investigated. Statistically significant differences between paired patients and family members were observed for all six subscales and the total mean score of the RHCS. In order to form a pair, the patient had to have a person classified as a family member to whom they could give the questionnaire. The results show that family members' assessments of the quality of care were lower (4.07) than those of patients (4.44) (Table 3).
Factors explaining satisfaction with care
We examined the statistical difference between the outcome variable 'Satisfaction with care' and the RHCS, when controlling for age, gender, health status, education, the outcome variable 'A clear care goal is set together', and group (patient or family member). After controlling for the total mean score of the RHCS, perception about the variable 'A clear care goal is set together' (F(2,127) = 13.608, p < 0.001, η² = 0.176) and participant group (F(1,127) = 10.189, p = 0.002, η² = 0.074) had a relationship with satisfaction with care. Those who disagreed with having a clear goal of care perceived scores nine points lower than those who agreed (B = -0.924), and patients perceived scores three points higher than family members (B = 0.398). Exploring the subscales separately revealed similar results. Both independent variables together accounted for approximately 35% of the variance in satisfaction with care (R² = 0.357) (Table 4).
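The η² effect sizes above follow directly from the reported F statistics and degrees of freedom. A small helper (written here for illustration; not part of the study's SPSS output) recovers both reported values:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta-squared recovered from an ANCOVA F statistic:
    eta^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Reproduces the effect sizes reported for the two significant predictors
print(round(partial_eta_squared(13.608, 2, 127), 3))  # → 0.176 ('clear care goal')
print(round(partial_eta_squared(10.189, 1, 127), 3))  # → 0.074 (patient vs. family)
```

This confirms internal consistency of the reported statistics: each η² agrees with its F value and degrees of freedom.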
Discussion
This was a cross-sectional descriptive study of older cancer patients, who have rarely been studied in the context of acute care in a cancer hospital. The study also provided insight into their family members' experiences during the challenging time that patients spent in acute care.
Patients' perceptions of the quality of care
We found that older cancer patients described some difficulties with their participation in care planning. One explanation for this may be that older cancer patients have cognitive impairments [1] that health care professionals may be unaware of, leading to problems and misunderstandings [12]. Second, nurses have described that elderly patients do not tell them what they want [6,14], also causing misunderstandings. Nevertheless, earlier studies have shown that it is important that patients participate in care planning and receive adequate explanations of their treatment goals and the future [5,11].
In this study, patients perceived that the care they received was friendly and respectful. Kind and friendly behaviour by nursing professionals towards patients has been shown to improve the experience of care quality among older cancer patients in hospital settings [24], and patients have described that respectful attitudes have a significant influence on their experience of care [25]. Respectful care has also been previously associated with patients' needs [26]. Our results showed that patients described their needs for help being fulfilled during their hospital stay, and previous research supports this result by showing that older people have fewer unmet needs [14]. It must be remembered that, as older patients may have diseases that limit their comprehension and affect their daily lives, getting help when needed is especially essential to their perception of whether or not care is satisfactory [27]. Sensitivity in meeting and interpreting the needs of older patients with cancer is essential [12]. However, it has previously been shown that nurses may find it difficult to manage older patients' basic needs due to a lack of staff and time [14,28], meaning that adverse events occur, and care quality is undermined [29].
Family members' perceptions of the quality of care
Family members expressed that the patient's pain was taken seriously. This suggests that patients in this study received sufficient pain medication. However, nurses should nonetheless remain alert, as it has been shown that older patients may hide their pain or report it less than younger patients do [30]. This may be because they feel that revealing pain shows weakness, or they might be afraid of painkillers causing addiction [31].
While family members were satisfied that patients were taken seriously when they were in pain, they also said that non-pharmacological pain management methods were deficient. It is possible that some such methods - for instance, kinesiotherapy - may not be visible to family members because they are limited to visiting the hospital during restricted visiting hours [12].
In this study, family members valued the friendly way in which patients were treated. Family members appeared to draw comfort from this, particularly in acute situations where they were in daily contact with patients who were experiencing distress [30]. Family members also felt that patient care on the ward was safe. Previous studies have shown that patients feel safe when nurses visit them very frequently, even if they are busy, but less so if the nurses do not listen but just carry out their tasks and then leave [12].
However, family members were concerned that patients' fears were not alleviated. There is a chance that this actually reflects poor communication between patients and their family members [32] - not telling relatives what they need [14]. It is also likely that family members themselves have their own fears about the patient's illness and deterioration [33] and were reporting these fears rather than those of the patient [34].
Family members also reported not having enough opportunities to participate in care planning, aligning with a similar result that had been discovered earlier [14]. This may be due to limited contact with nurses, which makes it difficult to participate in care planning [35]. Perceptions of care quality have previously been shown to depend strongly on having clear information about the next step in care and follow-up [36].
Differences between patients' and family members' perceptions of the quality of care
We found significant differences between patients' and family members' perceptions of the quality of care, with family members giving a more negative assessment than patients. Family members were primarily concerned about a lack of resources for patient care, as well as the negative, rushed atmosphere on the ward. The number of nurses and the working environment have a direct impact on the satisfaction and quality of care [20,37].
It has previously been shown that nurses prioritize their activities during staff shortages to guarantee patient safety. They prioritize their time to perform vital symptom assessments or administer medication at the expense of bathing patients, carrying out skin care, or ambulation [28]. Moreover, the basic physical needs of the patients are easier to fulfil than their psychological needs [14]. However, sometimes even essential nursing tasks are found to be left undone due to lack of staff [29].
Family members described that there is a lack of communication between patients and staff. They felt that the staff did not enquire enough into the patients' state of health - the right level of interest was not shown, their worries were not listened to, and it was not possible to arrange discussions in confidence. Caring for older patients is challenging, causing emotional stress to staff, and sometimes keeping a certain distance from patients can relieve this stress. In general, nurses have positive attitudes towards older patients and do not deliberately compromise their professional or ethical principles when dealing with them [38]. However, there is evidence to suggest that such compromises and negative attitudes sometimes do arise [14,39].
Finally, our results show a slight trend towards decreased confidence in the professionalism of nursing staff. This somewhat contradicts the findings that family members value nurses' friendliness and ability to meet patients' basic care and pain management needs. However, this may be partly explained by the prevalence, during the period of the study, of substitute nurses who had less knowledge about caring for older cancer patients than permanent staff might have had [24,38].
Factors explaining satisfaction with care
The results show that family members evaluated patient care more negatively than patients did and that overall satisfaction with participation in care was low among both patients themselves and their family members. Family members of older cancer patients have described having the feeling that only the patient gets noticed [12,40].
The involvement of family members is crucial to enabling patients to cope during their treatment and to clarifying the information that is received [11,12]. However, both patients and family members reported a lack of involvement in decision making and that they did not receive enough information, to the extent of feeling uncertain about the treatment being given [41,42]. For the latter, it is essential to use everyday language rather than medical language [7,11,12].
Limitations
The results of this study should be interpreted with caution. Firstly, data were collected from a single cancer hospital and cannot be generalized. Secondly, the cross-sectional design of the study means that the findings indicate the situation at a specific moment in time. Thirdly, the data were collected through a self-reported questionnaire, and it is possible that questions may have been misunderstood. It is also worth noting the relatively high level of education among respondents, which may have had an influence on the results.
Conclusions
In this study, we demonstrated that patients and family members were satisfied with their care. The care was described as friendly, respectful, and based on patients' needs. Nevertheless, patients and family members perceived that the goal of care was not fully clarified, and opportunities to participate in care were limited. Family members perceived that the atmosphere in which care was given was busy; on the ward, there were not enough nursing professionals to listen to patients' worries, feelings about their health, or fears.
It is important that older cancer patients and their family members receive friendly, respectful, individual care based on their needs and hopes and that they can rely on professionals. Further studies are needed to identify the necessary competencies for the holistic care of older cancer patients and their family members. Management needs to acknowledge that providing quality care requires adequate resources: additional education is needed, and hospitals should give nurses opportunities to use their clinical expertise in the complex care of older cancer patients and their family members.
Table 1
Descriptive characteristics of older patients with cancer (n = 81) and family members (n = 65) (n, %). Data are expressed as means ± SD for continuous variables and as number and percentage for categorical variables. SD, standard deviation; n, number. a: pancreatic and peritoneal cancer, sarcoma. * Significant associations determined by Chi-square test (Fisher's exact test); p-values < 0.05 are considered statistically significant.
Table 3
Differences in perceptions about the quality of care between 56 patient-family member pairs (n = 112).
Table 4
The relationship between cancer patients' and family members' characteristics, RHCS and satisfaction with care (n. Abbreviations: RHCS, Revised Humane Caring Scale; B, unstandardized coefficient. * Significant results in ANCOVA; p-values < 0.05 are considered statistically significant. ** Adjusted for age, gender, health status, education, perceptions about 'A clear goal for the care is set together', and group (patient or family member).
Depressive Symptoms Have Distinct Relationships With Neuroimaging Biomarkers Across the Alzheimer’s Clinical Continuum
Background Depressive and anxiety symptoms are frequent in Alzheimer’s disease and associated with increased risk of developing Alzheimer’s disease in older adults. We sought to examine their relationships to Alzheimer’s disease biomarkers across the preclinical and clinical stages of the disease. Method Fifty-six healthy controls, 35 patients with subjective cognitive decline and 56 amyloid-positive cognitively impaired patients on the Alzheimer’s continuum completed depression and anxiety questionnaires, neuropsychological tests and neuroimaging assessments. We performed multiple regressions in each group separately to assess within group associations of depressive and anxiety symptoms with either cognition (global cognition and episodic memory) or neuroimaging data (gray matter volume, glucose metabolism and amyloid load). Results Depressive symptoms, but not anxiety, were higher in patients with subjective cognitive decline and cognitively impaired patients on the Alzheimer’s continuum compared to healthy controls. Greater depressive symptoms were associated with higher amyloid load in subjective cognitive decline patients, while they were related to higher cognition and glucose metabolism, and to better awareness of cognitive difficulties, in cognitively impaired patients on the Alzheimer’s continuum. In contrast, anxiety symptoms were not associated with brain integrity in any group. Conclusion These data show that more depressive symptoms are associated with greater Alzheimer’s disease biomarkers in subjective cognitive decline patients, while they reflect better cognitive deficit awareness in cognitively impaired patients on the Alzheimer’s continuum. Our findings highlight the relevance of assessing and treating depressive symptoms in the preclinical stages of Alzheimer’s disease.
Given the impact of depressive and anxiety symptoms on quality of life and even prognosis, improving our knowledge on their cognitive and brain substrates across the clinical continuum from normal cognition to Alzheimer's dementia is particularly relevant for clinical management and AD risk reduction. To date, the existing literature on this research field showed inconsistencies between studies especially regarding the direction of the neuroimaging findings. Furthermore, to our knowledge, no study included cognitive measures and complementary multimodal neuroimaging data throughout the Alzheimer's clinical continuum at the same time. Therefore, this study aims at providing a comprehensive assessment of the links between depressive and anxiety symptoms, and cognition as well as multiple measures of brain integrity, throughout the clinical continuum from normal cognition to Alzheimer's dementia, to further our understanding of the relevance and mechanisms of psychoaffective factors in preclinical and clinical AD. We hypothesized that depressive and anxiety symptoms would be associated with AD-related cognitive and brain alterations in both preclinical and later stages.
Participants
All participants were recruited as part of the Imagerie Multimodale de la Maladie d'Alzheimer à un Stade Précoce (IMAP +) Study in Caen, France. Ninety-one patients and 56 controls above 50 years old were included, all living at home, and with no history or clinical evidence of neurologic or psychiatric disorder, alcohol use disorder or drug abuse. Notably, none of the participants met diagnostic criteria for major depression or anxiety disorder, as defined in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition. The inclusion and group classification of the participants were based on a clinical interview and a standardized neuropsychological assessment (including tests of episodic memory, working memory, language skills, executive functions, and visuospatial abilities), according to internationally agreed criteria [see details in (La Joie et al., 2012; Perrotin et al., 2017)]. Patients were either patients with SCD (n = 35) or patients on the Alzheimer's continuum (ADC patients; n = 56) and were all recruited from local memory clinics. SCD patients reported memory complaints and showed normal performance in all tests of the standardized neuropsychological assessment. During the interview, the clinician ensured that the complaint was not related to current medication or medical condition, and did not fulfill NINCDS-ADRDA criteria for probable AD (McKhann et al., 1984).
ADC patients were all selected to be positive for amyloid (see Section "Neuroimaging data and processing" for details). They included patients with MCI selected based on Petersen's criteria (n = 28) (Petersen and Morris, 2005) and patients with dementia fulfilling NINCDS-ADRDA clinical criteria for probable AD (n = 28) (McKhann et al., 1984). Clinical diagnosis was assigned by consensus under the supervision of a senior neurologist and neuropsychologists. The SCD group consisted of both amyloid positive and amyloid negative participants (amyloid negative n = 23; amyloid positive n = 4; florbetapir-Positron Emission Tomography (PET) images not available n = 8). Finally, the 56 healthy elderly control subjects were recruited from the community, performed in the normal range on all tests from the standardized neuropsychological assessment, and were all selected to be amyloid negative (see below). IMAP + was approved by the regional ethics committee (Comité de Protection des Personnes Nord-Ouest III) and is registered with ClinicalTrials.gov (number NCT01638949). All participants gave written informed consent before the examinations.
Psychoaffective Assessment
The Montgomery-Åsberg Depression Rating Scale (MADRS) (Montgomery and Asberg, 1979), a clinician-administered 10-item questionnaire with scores ranging from 0 to 60, was used to assess depressive symptoms at the time of the evaluation. The Spielberger State-Trait Anxiety Inventory form Y-A (STAI-A) (Spielberger et al., 1970), a 20-item self-rated questionnaire with scores ranging from 20 to 80, assessed state anxiety symptoms at the time of the evaluation. For both scales, higher scores indicated higher levels of symptoms of depression and anxiety. All participants were screened so as not to meet diagnostic criteria for major depression or anxiety disorders (see exclusion criteria above), but our goal was to assess depressive and anxiety symptoms on a continuum below this threshold for disorder (Altman and Royston, 2006;Laborde-Lahoz et al., 2015). Therefore, these scores were used as continuous variables in all analyses (see Section "Statistical analysis") instead of classifying patients as presenting or not depressive/anxiety symptoms or disorders.
Cognitive Assessment
Global cognition was measured using the Mini Mental State Examination (Folstein et al., 1975) (MMSE, scores from 0 to 30). Verbal episodic memory was assessed using the Encoding, Storage and Recuperation (ESR) word list free recall sub-score (Eustache et al., 2015), consisting of the recall of two distinct 16-word lists after either a superficial or a deep encoding phase. The final score resulted from the sum of the superficial and deep encoding sub-scores (scores from 0 to 32).
Neuroimaging Data and Processing
All participants underwent neuroimaging scans on the same Magnetic Resonance Imaging (MRI) and PET scanners at the Cyceron Centre (Caen, France), as previously described in detail (La Joie et al., 2012;Besson et al., 2015), within a 3-month interval of the psychoaffective and cognitive assessments. We measured GM volume with MRI, brain glucose metabolism with 18F-fluorodeoxyglucose (FDG)-PET and amyloid deposition with florbetapir-PET. The detailed acquisition procedure is available in the Supplementary Material 1. Global GM volumes from the MRI and neocortical standardized uptake value ratios (SUVr) from the PET scans were extracted as described below and used in the following analyses as continuous variables.
All neuroimaging pre-processing steps were performed with the Statistical Parametric Mapping version 12 (SPM12) software (Wellcome Trust Centre for Neuroimaging, London, United Kingdom). Briefly, T1-weighted MRI images were segmented and spatially normalized to the Montreal Neurological Institute (MNI) space and non-linear warping effects on volumes were corrected by modulating the resulting normalized GM segments (Chételat et al., 2008;Villain et al., 2008;La Joie et al., 2012).
PET images were corrected for partial volume effects using the Müller-Gärtner method (Müller-Gärtner et al., 2016). Resulting images were coregistered onto their corresponding MRI, normalized with the deformation parameters used for the MRI procedure, and then quantitatively normalized using the cerebellar GM as the reference region (Chételat et al., 2008;Villain et al., 2008;La Joie et al., 2012;Bejanin et al., 2019).
For each participant, a total GM volume and global FDG-PET value was calculated by applying a binary mask of GM (including voxels with a GM probability > 30% excluding the cerebellum) on the corresponding preprocessed images. The global neocortical SUVr was obtained from the florbetapir-PET images using a neocortex mask (including all regions but the cerebellum, hippocampus, amygdala and subcortical gray nuclei), as described in detail elsewhere (La Joie et al., 2012). This global neocortical SUVr was used as a continuous variable for further analyses (see below) and also to classify subjects as positive or negative for amyloid. This classification was used to select amyloid-negative HC and amyloid-positive MCI/AD patients. The threshold was calculated from a group of 45 young participants from the IMAP project (between 20 and 40 years old) using the mean + 2SD, corresponding to a SUVr of 1.02 under which participants were considered amyloid negative and above which patients were considered amyloid-positive (Besson et al., 2015;Perrotin et al., 2017).
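The positivity cutoff described above (mean + 2 SD of a young reference cohort, giving a SUVr of about 1.02) can be sketched as follows. The young-cohort SUVr values below are hypothetical, chosen only to yield a cutoff in that range; they are not the study's data.

```python
import numpy as np

def amyloid_threshold(young_suvr, n_sd=2.0):
    """Positivity cutoff as mean + n_sd * SD of a young reference cohort."""
    young_suvr = np.asarray(young_suvr, dtype=float)
    return young_suvr.mean() + n_sd * young_suvr.std(ddof=1)

def classify_amyloid(suvr, threshold):
    """Label each participant amyloid-positive (True) or negative (False)."""
    return np.asarray(suvr, dtype=float) > threshold

# Hypothetical young-cohort values yielding a cutoff near 1.02 SUVr.
young = np.array([0.94, 0.98, 1.00, 0.96, 0.92, 0.98, 1.00, 0.94])
cutoff = amyloid_threshold(young)
labels = classify_amyloid([0.95, 1.10, 1.30], cutoff)
```

Participants whose neocortical SUVr falls above the cutoff are classified amyloid-positive, below it amyloid-negative.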
Main Analysis
To assess the differences in psychoaffective factors across clinical groups, we conducted analyses of covariance (ANCOVA) with education, age and sex as covariates and then performed post hoc tests to assess group-differences when a main effect of the clinical group was found. We also performed multiple regressions to assess within group associations between psychoaffective measures (i.e., depressive and anxiety symptoms) on the one hand, and cognition (global cognition and episodic memory) and neuroimaging data (GM volume, glucose metabolism and amyloid load) on the other hand. Results are presented with correction for level of education, age and sex. When a significant association was found with either depressive or anxiety symptoms, then the same analysis was repeated adding the other psychoaffective score (anxiety or depressive symptoms, respectively) as an additional covariate to further test whether the association was specific to the one and independent from the other. Indeed, these two symptoms are known to be related and found to be correlated in the present study in the ADC patient group (Spearman's correlation p = 0.01, r = 0.356). Pairwise deletion was used in case of missing data and all statistical analyses were performed using the STATISTICA software (v13.0, StatSoft Inc., Tulsa, OK, United States).
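The within-group multiple regressions described above (a psychoaffective score predicting a cognitive or imaging measure, corrected for education, age and sex) can be sketched with ordinary least squares. This is a generic NumPy stand-in, not the authors' STATISTICA workflow, and the demo data are simulated.

```python
import numpy as np

def adjusted_association(y, x, covariates):
    """OLS coefficient of predictor x on outcome y, adjusting for covariates.

    Mirrors a regression such as amyloid load ~ depressive symptoms +
    education + age + sex; this is a sketch, not the study's exact model.
    """
    X = np.column_stack([np.ones(len(y)), x] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta[1]  # covariate-adjusted coefficient for x

# Hypothetical demo: recover a known slope of 2.0 after adjustment.
rng = np.random.default_rng(0)
n = 200
age = rng.normal(70, 5, n)
sex = rng.integers(0, 2, n).astype(float)
symptoms = rng.normal(0, 1, n)
outcome = 2.0 * symptoms + 0.1 * age + 0.5 * sex + rng.normal(0, 0.1, n)
b = adjusted_association(outcome, symptoms, [age, sex])
```

Because the covariates enter the design matrix alongside the predictor, the returned coefficient isolates the association of interest from age, sex and education effects.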
Supplementary Analyses
Scores of depressive and anxiety symptoms were not normally distributed; thus, correlation analyses were repeated with nonparametric Spearman's correlation tests. The score for depressive symptoms also showed an important floor effect, with a large number of participants having a score of zero. To check that the results were consistent and not driven by the floor effect, we repeated all analyses regarding depressive symptoms excluding these participants, i.e., within subgroups of participants with at least one depressive symptom (n = 21 HC, n = 23 SCD, n = 42 ADC patients).
Table 1 notes: Values indicate mean (standard deviation) unless otherwise stated. Between-group differences for demographic variables were assessed using ANCOVA for continuous variables (corrected for sex, as well as age or level of education, respectively) and χ2 tests for categorical variables. All other ANCOVA were corrected for the level of education, age and sex, comparing HC, SCD and ADC patients. Values in bold correspond to significant p values (p < 0.05) and values in italic correspond to trends (0.05 < p < 0.1). ADC, Alzheimer's continuum; ANCOVA, analysis of covariance; GM, gray matter; HC, healthy controls; NS, not significant; SCD, subjective cognitive decline.
Group Characteristics
Group characteristics are listed in Table 1.
Psychoaffective Factors and Their Links With Cognition and Neuroimaging Biomarkers
With regard to psychoaffective factors, depressive symptoms significantly differed across groups (Table 1 and Figure 1A) even after accounting for anxiety symptoms (p < 0.001). Post hoc analyses showed that patients, either SCD or ADC patients, had more depressive symptoms than HC, but did not differ from each other (HC-SCD p < 0.001; HC-ADC patients p < 0.001; SCD-ADC patients p = 0.9). Regarding anxiety symptoms, we found a trend toward a group difference (Table 1 and Figure 1B) that did not remain when also correcting for depressive symptoms (p = 0.1). However, post hoc analyses revealed no significant between-group differences (HC-SCD p = 0.922; HC-ADC patients p = 0.144; SCD-ADC patients p = 0.129).
The results of the analyses assessing the links between depressive or anxiety symptoms on the one hand, and cognitive or neuroimaging measures on the other hand, are reported in Table 2. Higher depressive symptoms were associated with higher amyloid load in the SCD group (Figure 2A1). In contrast, in the group of ADC patients, higher depressive symptoms were associated with better episodic memory performance and higher glucose metabolism (Figure 2A2) and tended to be related to better global cognition. We found the same results when also correcting for anxiety symptoms, but the association with global cognition was no longer a trend (p = 0.1; see Supplementary Table 1). Non-parametric analyses showed a similar pattern of results, though less strong (see Supplementary Table 2). We also found the same results as in our main analyses in the subgroup with at least one depressive symptom, except for the association between depressive symptoms and episodic memory performance in the ADC patient group, which was no longer significant (see Supplementary Table 3). Finally, to test whether the link between depressive symptoms and amyloid load was specific to the SCD stage, or reflected the greater variability of the amyloid measure in the SCD group as it included both amyloid-positive and amyloid-negative patients, we repeated this analysis in a group of MCI + AD that included both amyloid-positive and amyloid-negative patients (see Supplementary Table 4 for further details). We found no correlation (p = 0.9, r = 0.01) between depressive symptoms and amyloid load in this group.
As for anxiety symptoms, a trend for a positive association was found with global cognition within the SCD group (Table 2), as well as when also correcting for depressive symptoms (p = 0.06; see Supplementary Table 1). This association became significant with non-parametric analyses (p = 0.03; see Supplementary Table 2).
No association was found in the group of healthy controls with either depressive or anxiety symptoms ( Table 2 and Supplementary Tables 1-3).
Depressive Symptoms, Cognitive Performance and Awareness of Cognitive Deficits in ADC Patients
The positive relationship between episodic memory performance and depressive symptoms in ADC patients was not expected. Interestingly, a study showing similar results suggested that awareness of one's cognitive deficits could be associated with worsened mood (Cerbone et al., 2020).
Along this line, we hypothesized that our finding with episodic memory might reflect anosognosia, such that ADC patients at a more advanced cognitive stage (i.e., with lower episodic memory performance) would be more anosognosic, either about their depressive symptoms (so that they have a lower depression score because they are less aware of their depressive symptoms); or about their cognitive deficits (so that they are less depressed about those). We thus ran additional analyses within the ADC patient group to test this hypothesis. We computed two composite delta scores of anosognosia for each ADC patient, one for global cognition and one for episodic memory performance, corresponding to the difference between the z-score of objective performance and the reversed z-score of subjective assessment [using the Cognitive Difficulties Scale (CDS) (McNair and Kahn, 1983;Kuhn et al., 2019)] for either global cognition (MMSE) or episodic memory (ESR) (see Supplementary Material 2 for further details). To test our hypothesis, we assessed the link between these two delta scores and depressive symptoms with multiple regressions corrected for level of education, age and sex, within the ADC patients. We found a positive association between the delta scores (awareness of cognitive and memory difficulties) and depressive symptoms (p = 0.01 and p < 0.001, respectively), as well as objective performance in global cognition (p < 0.001) and episodic memory (p < 0.001). This suggests that ADC patients with better cognitive and/or memory performance are more aware of their cognitive difficulties and show more depressive symptoms; or, conversely, that those with greater cognitive/memory deficits are more anosognosic and report fewer depressive symptoms.
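The composite delta score described above can be sketched as follows. The exact reversal convention for the subjective z-score is an assumption inferred from the description (the paper only states "z-score of objective performance minus reversed z-score of subjective assessment"), and the demo scores are hypothetical.

```python
import numpy as np

def zscore(a):
    """Standardize a score within the group (sample SD, ddof=1)."""
    a = np.asarray(a, dtype=float)
    return (a - a.mean()) / a.std(ddof=1)

def anosognosia_delta(objective, complaints):
    """Awareness delta: z(objective performance) minus the reversed z-score
    of a complaint scale (CDS-like, higher = more reported difficulties).
    Higher delta = better awareness of one's difficulties; lower = more
    anosognosia. Reversal convention is an assumption."""
    return zscore(objective) - (-zscore(complaints))

# Hypothetical scores: complaints that track performance give delta ~ 0,
# i.e., well-calibrated self-assessment.
objective = np.array([30.0, 26.0, 22.0, 18.0])   # MMSE-like performance
complaints = np.array([20.0, 40.0, 60.0, 80.0])  # CDS-like complaint score
delta = anosognosia_delta(objective, complaints)
```

With this convention, a patient who performs poorly yet reports few complaints gets a negative delta (more anosognosic), matching the interpretation in the text.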
Voxelwise Brain Substrates of Depressive Symptoms in SCD and ADC Patients
We aimed to further investigate the brain substrates specifically involved in the positive associations found in the main analyses (Section "Psychoaffective factors and their links with cognition and neuroimaging biomarkers") between depressive symptoms and both amyloid load in SCD patients and glucose metabolism in ADC patients. For this purpose, we performed voxelwise multiple regression analyses in SPM12 between depressive symptoms and amyloid deposition in SCD patients on the one hand, and glucose metabolism in ADC patients on the other hand. All analyses were corrected for education, age and sex. Results were evaluated for significance at p uncorrected < 0.005 combined with a minimum cluster size determined by Monte-Carlo simulations using AFNI's 3dClustSim program to achieve a corrected statistical significance of p < 0.05. In the SCD group, we found that higher depressive symptoms were associated with higher amyloid deposition mainly in the bilateral medial prefrontal, temporo-parietal, temporal, insular cortices and the right hippocampus (Figure 2B1).
In the ADC group, we found that higher depressive symptoms were associated with higher glucose metabolism in the bilateral precuneus, posterior cingulate-retrosplenial area, left temporal, superior parietal and temporal regions and the left hippocampus (Figure 2B2).
DISCUSSION
This study is, to our knowledge, the first to assess the links of anxiety and depressive symptoms with multiple AD-relevant indices, including cognitive and neuroimaging measures (i.e., global cognition, episodic memory, GM atrophy, glucose metabolism and amyloid deposition), in SCD patients and patients on the Alzheimer's continuum. Altogether, we showed that SCD and ADC patients had higher depressive symptoms compared to healthy elders, which were associated with higher amyloid pathology in SCD, and with higher episodic memory performance and glucose metabolism in ADC patients.
In this study, we found no difference between groups regarding anxiety symptoms, but higher depressive symptoms in SCD and ADC patients compared to controls. This is in line with a previous study showing a higher frequency of depressive symptoms in clinical MCI and AD patients compared to controls, while anxiety symptoms only differed between mild and severe AD (Fernández-Martínez et al., 2010). Additionally, we found no association of anxiety symptoms with cognition and neuroimaging data, which is at odds with previous studies showing significant relationships with GM atrophy (Tagai et al., 2014;Hayata et al., 2015) and glucose hypometabolism (Hashimoto et al., 2006) in AD. This might be due to the fact that, in the present study, we focused on state measures of either anxiety or depressive symptoms, while these previous studies assessed trait anxiety. In addition, in these previous studies the level of anxiety symptoms of the patients was measured through an interview with their caregiver, while in our study it was self-rated by the patient. Thus, the differences in results could reflect discrepancies between self-rated and informant-rated anxiety.
SCD Group
In the SCD group, we found that depressive symptoms were not related to objective cognitive/memory performance but were associated with higher amyloid load mainly in medial prefrontal and temporo-parietal regions, which are among the earliest regions affected by amyloid deposition in AD (Grothe et al., 2016, 2017). While some previous studies found higher depressive symptoms (Balash et al., 2013;Buckley et al., 2013), and higher amyloid load (Amariglio et al., 2012; Perrotin et al., 2012, 2017; Snitz et al., 2015), associated with subjective cognitive decline, no study to date formally assessed the links between depressive symptoms and amyloid deposition in this specific population. Our finding of an association between depressive symptoms and amyloid load in SCD could indicate that depressive symptoms represent a manifestation of the ongoing pathology and/or are a risk factor promoting amyloid accumulation, or that both depressive symptoms and amyloid plaques are caused by a common, yet unknown, factor. Furthermore, we found that this association between depressive symptoms and amyloid was specific to this SCD stage, where there is subjective but no objective cognitive deficit, as it was neither found in the HC group nor in the MCI/AD patients, even when not selected for being amyloid-positive. Previous findings were conflicting as, in cognitively unimpaired elders (corresponding to our HC), some (Krell-Roesch et al., 2018) but not all (Donovan et al., 2015) studies found a relationship between depressive symptoms and amyloid load. Similarly, while some studies reported a link between depressive symptoms and amyloid load in MCI (Brendel et al., 2015;Krell-Roesch et al., 2019), a recent systematic review showed that most studies in fact did not find this association (Banning et al., 2019).
Recent criteria for SCD and updates about this concept have highlighted that SCD patients with biomarker evidence for AD have an increased risk for future cognitive decline (Jessen et al., 2014, 2020). As we found depressive symptoms to be associated with amyloid deposition in SCD patients in our study, this suggests that SCD patients with depressive symptoms may be at even greater risk for cognitive decline. Altogether, this suggests that depressive symptoms in SCD are not a mere psychological consequence of their memory concern, but might be considered as an additional risk factor, and thus treated, in this population.
Alzheimer's Continuum
In the ADC patient group, we found depressive symptoms to be associated with better episodic memory performance and higher glucose metabolism, i.e., with a less advanced clinical and neurodegenerative stage of the disease. Previous studies showed conflicting findings: one study found worse cognitive performance to be associated with depressive symptoms, but they assessed a mixed group of cognitively healthy elders and MCI patients (Zhang et al., 2020). Another study found no association between depressive symptoms and global cognition in either MCI or AD patients (Fernández-Martínez et al., 2010). However, our findings are in line with recent studies showing higher glucose metabolism in frontal regions and the fusiform gyrus in MCI patients with depressive symptoms (Auning et al., 2015;Brendel et al., 2015). In contrast to the preclinical stage (SCD), the emergence of anosognosia (i.e., progressive decrease in awareness of cognitive deficits) is known to occur as the disease progresses from MCI to AD stages (Vannini et al., 2020;Bastin et al., 2021;Cacciamani et al., 2021a,b). In these patients, the presence of anosognosia might impact their neuropsychiatric symptoms or ability to report those. Indeed, a recent study found that MCI patients with lower depression scores showed steeper decline in dementia severity measures compared to those with higher depression scores (Cerbone et al., 2020). The authors suggested that being aware of one's cognitive deficits could be associated with worsened mood (more depressive symptoms), while, conversely, MCI patients who are less or not aware of their cognitive deficits may be less affected and show less to no depressive symptoms. Alternatively, it is also possible that patients at a more advanced stage of the disease (i.e., with more cognitive deficits) tend to report fewer depressive symptoms as they are less aware of their symptoms (more anosognosic).
In line with these interpretations, depressive symptoms were found to be associated with less anosognosia in a group of patients with AD dementia (Kashiwa et al., 2005). In addition, the functional neural substrates of depressive symptoms in ADC patients were localized in regions which are known to be related to anosognosia in MCI and AD patients. Indeed, previous works highlighted that greater anosognosia in these patients was associated with reduced glucose metabolism in the posterior cingulate cortex, hippocampus, superior temporal and parietal gyri (Salmon et al., 2006;Nobili et al., 2010;Perrotin et al., 2015;Vannini et al., 2017;Hallam et al., 2020). The fact that we found higher depressive symptoms to be associated with better awareness of cognitive difficulties in our ADC patient group is also in line with those hypotheses.
In contrast to cognition and glucose metabolism, we found no link between depressive symptoms and GM volume or amyloid deposition in ADC patients. Previous findings are conflicting, with studies reporting no links (Bruen et al., 2008;Starkstein et al., 2009;Berlow et al., 2010;Mori et al., 2014;Huey et al., 2017), while others found a positive (Auning et al., 2015;Enache et al., 2015) or a negative association (Lebedeva et al., 2014;Wu et al., 2020) with depressive symptoms in MCI or AD populations. Our findings suggest that glucose metabolism appears to be more strongly associated with depressive symptoms in amyloid-positive MCI-AD patients than GM volume or amyloid deposition.
Methodological Considerations and Perspectives
Even though the MADRS has been shown to be a good measure of depression relatively independent from dementia severity (Müller-Thomsen et al., 2005), as our participants were selected for the lack of clinically significant anxiety or depression, one of the limitations of our study refers to the skewed distribution of MADRS, with 42% of the total participants reporting depressive symptoms. To limit the bias associated with the low variability and thus low power of analyses, we repeated all analyses with non-parametric tests, as well as in subgroups including only individuals with at least one depressive symptom.
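The two robustness checks described above, a nonparametric rank correlation and re-running analyses only in participants above the floor, can be sketched as follows. This is a generic sketch with hypothetical data, not the authors' STATISTICA pipeline.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (average ranks over ties), used as a
    sensitivity check for skewed score distributions."""
    def rank(a):
        a = np.asarray(a, dtype=float)
        ranks = np.empty(len(a))
        ranks[a.argsort()] = np.arange(1, len(a) + 1)
        for v in np.unique(a):            # average ranks over tied values
            ranks[a == v] = ranks[a == v].mean()
        return ranks
    return np.corrcoef(rank(x), rank(y))[0, 1]

def drop_floor(scores, values, floor=0):
    """Keep only participants above the floor (e.g., MADRS > 0)."""
    scores = np.asarray(scores)
    keep = scores > floor
    return scores[keep], np.asarray(values)[keep]

# Hypothetical data: a monotone relationship and a floor-effect exclusion.
rho = spearman_rho([1, 2, 3, 4], [10, 20, 25, 40])
s, v = drop_floor([0, 0, 1, 2], [5, 6, 7, 8])
```

Repeating the parametric analyses with these two variants checks that the reported associations are not artifacts of the skewed MADRS distribution or of the many zero scores.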
Our study, assessing multiple hallmarks of AD (i.e., global cognition, episodic memory, gray matter volume, glucose metabolism and amyloid load), was cross-sectional in design and thus could not assess the causality and direction of these links. Future longitudinal analyses would make it possible to investigate this question by examining the links between baseline levels and changes over time in depressive and anxiety symptoms and neuroimaging biomarkers. Similarly, as discussed above, we investigated state measures of depressive and anxiety symptoms as they represent states at a given time that we expect to easily change over time and be modifiable through treatment/interventions. Moreover, anxiety symptoms were assessed using a self-report questionnaire; as it is subjective, this measure could be biased by the subject's honesty, awareness and introspective ability.
CONCLUSION
This study showed that depressive symptoms were associated with higher amyloid load in SCD, and with better episodic memory and higher glucose metabolism in ADC patients. Overall, our findings suggest that depressive symptoms reflect distinct processes along the course of AD: higher symptoms reflect a greater AD biomarker burden at the SCD stage, while, conversely, they reflect greater awareness of cognitive deficits, associated with a less severe cognitive stage of the disease, in ADC patients. Thus, this study shows the relevance of assessing and following depressive symptoms in SCD, and of managing them in cognitively impaired patients, to improve prevention as well as the prognosis and quality of life of both patients and caregivers.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because IMAP data are made available upon request to the sponsor (Caen University Hospital) and the principal investigator. Requests to access the datasets should be directed to GC, chetelat@cyceron.fr.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Comité de Protection des Personnes Nord-Ouest III. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
IM and GC contributed to the conception and design of the study. IM, ET, and GC contributed to the acquisition, analysis, or interpretation of the data and wrote the draft manuscript. IM, ET, FM, SD, VD, DV, NM, GP, and GC contributed to the critical revision of the manuscript for important intellectual content. IM performed the statistical analysis. GC obtained the funding. FM, DV, and GP contributed to the administrative, technical, or material support. GC and VD were the principal investigators of the IMAP + research protocol. All authors took public responsibility for the whole or part of the content, contributed to the acquisition, analysis and interpretation of data, manuscript revision, and read and approved the submitted version. analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and the decision to submit the manuscript for publication. ET received a thesis grant from the French Ministère de l'Enseignement Supérieur et de la Recherche (paid to ET). SD received French ministerial scholarship and University grant for traveling and attending a scientific meeting in Berlin (paid to SD). DV received fundings from Inserm and from European Union Horizon 2020 (all payments made to DV's institution). NM received grants from the Alzheimer's Society, European Union Horizon 2020 and Medical Research Council (all payments made to NM's institution) and European Union's Horizon 2020 research and innovation programme for travel (payment made to NM). GC received research support from the EU's Horizon 2020 research and innovation programme (grant agreement number 667696), Fondation d'entreprise MMA des Entrepreneurs du Futur, Fondation Alzheimer, Programme Hospitalier de Recherche Clinique, Région Normandie, Association France Alzheimer et maladies apparentées and Fondation Vaincre Alzheimer (all to Inserm), and personal fees from Inserm, Fondation Alzheimer and Fondation d'entreprise MMA des Entrepreneurs du Futur.
|
2022-06-22T15:21:37.543Z
|
2022-06-20T00:00:00.000
|
{
"year": 2022,
"sha1": "062b15a7e0836bbc0fe7229a4d2160e94b6545f0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2022.899158/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "576d36c37ecc23b787b3861d8b9892b409ec49b4",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": []
}
|
20309556
|
pes2o/s2orc
|
v3-fos-license
|
Tobacco Chewing and Adult Mortality: a Case-control Analysis of 22,000 Cases and 429,000 Controls, Never Smoking Tobacco and Never Drinking Alcohol, in South India
Tobacco consumption in any form, smoking or smokeless, is a major source of premature mortality. About 20% of global tobacco related mortality occurs in India (World Health Organization, 1999). Globally, among adults, 27% of males and 9% of females use smokeless tobacco (Global progress report 2010). Among those aged 15 or older, 28% of males and 12% of females were earlier found to use smokeless tobacco. A higher prevalence of tobacco chewing is noted in less educated groups, in rural populations than in urban ones, and in women compared to men (Gajalakshmi et al., 2012; Thakpur et al., 2013). For this study, we have used data from two large household surveys conducted in South India. One is a retrospective enquiry of the habits of 80,000 adults (48,000 urban, 32,000 rural) who had died a few years earlier (cases) (Gajalakshmi and Peto, 2004). The other is the baseline survey for an ongoing prospective enquiry of 600,000 adults (500,000 urban, 100,000 rural) in the same locations (controls).
Introduction
Tobacco consumption in any form, smoking or smokeless, is a major source of premature mortality. About 20% of global tobacco related mortality occurs in India (World Health Organization, 1999). Globally, among adults, 27% of males and 9% of females use smokeless tobacco (Global progress report 2010). Among those aged 15 or older, 28% of males and 12% of females were earlier found to use smokeless tobacco. A higher prevalence of tobacco chewing is noted in less educated groups, in rural populations than in urban ones, and in women compared to men (Gajalakshmi et al., 2012; Thakpur et al., 2013).
For this study, we have used data from two large household surveys conducted in South India. One is a retrospective enquiry of the habits of 80,000 adults (48,000 urban, 32,000 rural) who had died a few years earlier (cases) (Gajalakshmi and Peto, 2004). The other is the baseline survey for an ongoing prospective enquiry of 600,000 adults (500,000 urban, 100,000 rural) in the same locations (controls). Follow-up information from the prospective study is not yet available, but those in the baseline survey can be used as controls for the cases in the survey of adult deaths.
Our aim is to assess the relationship, causal or otherwise, of chewing tobacco with mortality at ages 35-69, after excluding from cases and controls who had ever smoked tobacco or drunk alcohol.
Materials and Methods
The study took place in the late 1990s in the state of Tamil Nadu in South India. There were two study areas, one urban (the city of Chennai, which is the state capital, with a population of 4 million) and one rural (the district of Villupuram, with a population of 2.5 million in about 2000 villages). In both, efforts were made to identify deaths that occurred at the age of 25 and over in the urban area during 1995-97 and at the age of >1 day in the rural area during 1997-98. Arrangements were made to visit the home of the dead person (the cases for the study) to obtain information from the family about the educational level, the circumstances that led up to the death, and the smoking, tobacco-chewing and alcohol-drinking habits of the dead person before they became ill. The controls were from our surveys of the general population at ages 35 years or above, conducted during 1998-2001 in the same two study areas from where the cases were recruited.
Cases for the study
Urban cases: interviewed 48,000 families: Registration of the fact of death is almost complete in the city of Chennai. The necessary data were abstracted from the death registers in the Vital Statistics Department (VSD) in Chennai city to locate the home of the deceased. Since the cause of death stated on the death certificate was non-specific in about 50% of certificates, verbal autopsy (VA) was done to arrive at the underlying cause of death, at the time of visiting the home of the deceased to collect the required data for the study. The details of the novel verbal autopsy methodology developed and used in the Tamil Nadu study are described elsewhere (Gajalakshmi et al., 2002). This VA methodology consists of training non-medical graduates, with at least 15 years of formal education, on the VA tool to interview the spouse, and/or close associates, and/or neighbours of the deceased and to write the verbal autopsy report. The verbal autopsy report is a narrative description of the symptoms and events that led to death, written in the local language to enhance the accuracy of the underlying cause of death. All VA reports were reviewed centrally and independently by two medical doctors, unaware of the risk factor data, to arrive at the probable underlying cause of death, which was coded according to the 9th International Classification of Diseases (ICD-9) (World Health Organization, 1977).
There were 72,000 deaths among adults aged 25 or older at the time of death during the study period 1995-97 in Chennai city. Of these, 5,000 deaths were attributed to external causes (unintentional injuries, suicide or homicide) and 67,000 deaths to medical causes in the VSD records. Deaths due to external causes were excluded from the study. We were successful in tracing and interviewing 48,000 out of 67,000 households during 1998-1999. 19,000 houses could not be visited because the address was missing or inadequate, the house no longer existed, or the family had moved. The causes of death of 1,000 deaths that had been attributed to medical causes in the VSD records were reclassified to external causes based on the VA report diagnosis, and these were therefore excluded from the study. We were finally left with 47,000 cases (27,000 men and 20,000 women) aged 25 or over at the time of death for the urban study. Of these, 5,206 men and 8,260 women were lifelong non-smoking non-drinkers aged 35-69 at the time of death.
Rural cases: interviewed 32,000 families: The registration of the fact of death is less than 60% complete in rural Tamil Nadu. Hence, efforts were made to identify deaths at all ages (except deaths at age ≤1 day), irrespective of cause of death, during 1997-98 in the study area from various sources, such as the records in the Village Administrative Offices in the study district and enquiries with village health nurses/health care workers and village leaders in the study area. Field interviewers were natives of the study area and sought the help of the village leaders and/or village health care workers in obtaining introductions to the relatives, neighbours, or associates of the deceased. The verbal autopsy methodology and the method of assigning and coding the probable underlying cause of death were as in the urban case-control study (Gajalakshmi et al., 2002). The total number of deaths identified during 1997-98 was 40,763. Of these, 1,927 deaths could not be traced because the addresses were missing/incomplete in the Village Administrative Offices or the occupants had moved out after the death. Of the 38,836 households traced and interviewed, 27,000 deaths (cases: 16,000 men and 11,000 women) were due to medical causes and 5,000 were due to external causes at ages 25 or older. Of these, 3,278 men and 5,716 women were lifelong non-smoking non-drinkers aged 35-69 at the time of death.
Controls for the study are from population surveys conducted in urban (Chennai city) and rural (Villupuram) study areas in Tamil Nadu
Urban population survey: 500,800 individuals: A population survey (Gajalakshmi et al., 2007) was undertaken in Chennai city during 1998-2001. The men and women aged 35 years or over residing in the randomly chosen study area were interviewed at home. Precautions similar to those in the case-control studies were taken to ensure strict quality control of fieldwork, coding, and data entry. Details on the following variables were collected in this population survey: age, sex, educational status, tobacco smoking, tobacco chewing and alcohol drinking. Of 500,816 interviewed, 138,928 men and 219,241 women were lifelong non-smokers and non-drinkers aged 35-69 at the time of baseline survey.
Rural population survey: 100,000: A similar population survey was performed at the same time (1998-2001) in the rural study area. All people aged 35 years or over resident in seven of the 22 rural administrative blocks that make up the study area were interviewed. Of 105,837 interviewed, 23,254 men and 47,883 women were lifelong non-smokers and non-drinkers aged 35-69 at the time of the baseline survey.
Quality control
The survey reports and verbal autopsy reports submitted by the field Interviewers were validated by selecting randomly 5% of the households for re-interview by the senior investigator. This was done one week after receiving the output from the field Interviewers and blind to its results. The random checking was done partly because knowledge that a revisit might well take place would ensure reliably motivated fieldwork at the initial survey and to identify any systematic defect in the interview techniques. The questionnaires were checked centrally for consistency and missing values by coding clerks, and were double-entered into the computer.
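The 5% random re-interview selection described above can be sketched as follows. This is a minimal illustration only: the household IDs, the rounding rule, and the fixed seed are assumptions for the example, not details taken from the study.

```python
import random

def select_for_reinterview(household_ids, fraction=0.05, seed=42):
    """Randomly pick a fixed fraction of households for blind re-interview.

    The seed makes the illustrative selection reproducible; the study's
    actual sampling procedure is not described at this level of detail.
    """
    k = max(1, round(len(household_ids) * fraction))
    return random.Random(seed).sample(household_ids, k)

households = list(range(1, 1001))          # 1,000 interviewed households (illustrative)
recheck = select_for_reinterview(households)
print(len(recheck))                         # 50 households, i.e. 5%
```

The senior investigator would then revisit only the selected households, blind to the original interview results.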
Statistical methods
In this study chewing was defined as daily chewing of tobacco or related tobacco products, either alone or in combination, for at least 6 months. The term chewer was consistently used to mean ever chewer (former +current).
Asian Pacific Journal of Cancer Prevention, Vol 16, 2015. DOI: http://dx.doi.org/10.7314/APJCP.2015.16.3.1201
This study uses the controls from the population surveys in the urban and rural areas, since their exposure distribution should have been reasonably representative of the population at risk of becoming cases. In the general population, chewing habits may be associated with smoking and drinking habits. As the risks of chewing are likely to be much smaller than those of smoking and drinking, it is easiest to study them unbiasedly in people who never smoke or drink alcohol. Because the numbers of cases and of controls in this study are so large, it is possible to restrict attention only to those who never smoked tobacco or drank alcohol.
The urban and rural case-control studies were analysed separately as well as combined. For each category of disease, the cases were those who died of it and were compared with the population controls; for each different category of disease, the control group was always the same. Logistic regression models in the STATA (version 8) statistical software (StataCorp LP, 2005) were used to calculate mortality odds ratios. The excess deaths caused by chewing tobacco were calculated by multiplying the overall number of deaths among chewers by 1-1/RR, in which RR is the adjusted mortality odds ratio.
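The excess-death formula quoted above can be expressed directly. The input numbers below are illustrative, not the study's data:

```python
def excess_deaths(deaths_among_chewers, adjusted_or):
    """Excess deaths attributable to chewing: N_chewers * (1 - 1/RR),
    where RR is the adjusted mortality odds ratio (as in the text)."""
    return deaths_among_chewers * (1.0 - 1.0 / adjusted_or)

# Illustrative inputs: 1,000 deaths among chewers and an adjusted OR of 1.3
print(round(excess_deaths(1000, 1.3), 1))  # -> 230.8
```

An odds ratio of exactly 1 yields zero excess deaths, as expected for a null association.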
Results
Detailed analyses of mortality among lifelong non-smoking non-drinkers are reported here for the age range of 35-69 years, because the underlying cause of death assigned by verbal autopsy is more reliable in middle age (35-69) than at older ages (70+). A total of 22,460 cases (urban: 5,206 men and 8,260 women; rural: 3,278 men and 5,716 women) and 429,306 controls (urban: 138,928 men and 219,241 women; rural: 23,254 men and 47,883 women) were analysed. Table 1 shows key characteristics of the 22,460 cases and 429,306 controls. The cases were on average 9-11 years older than the controls. A higher proportion of women compared to men had no formal education in both the urban and rural study areas, and a higher proportion of participants, both men and women, had no formal education in the rural area compared to the urban area. A higher proportion of cases compared to controls, in both sexes and in both study areas, were ever chewers of tobacco.
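As a simplified stand-in for the adjusted logistic-regression estimates reported in the paper, a crude mortality odds ratio with a Woolf-type 95% confidence interval can be computed from a 2×2 exposure table. All counts below are invented for illustration, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Crude mortality odds ratio from a 2x2 table:
    a = exposed (ever-chewer) cases, b = unexposed cases,
    c = exposed controls,            d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR), Woolf method
    ci = (math.exp(math.log(or_) - 1.96 * se),
          math.exp(math.log(or_) + 1.96 * se))
    return or_, ci

# Illustrative counts only:
or_, ci = odds_ratio(300, 700, 2000, 8000)
print(round(or_, 2), tuple(round(x, 2) for x in ci))
```

Unlike this crude calculation, the study's estimates were additionally adjusted for age, sex, education and study area.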
Discussion
Tobacco chewing is not considered a stigma in India. Hence, we do not expect any misclassification of the tobacco chewing habit in this study, even though the data were collected on dead people from the surviving family members. Since the analyses excluded smokers and drinkers, the associations seen in this study could not be confounded by tobacco smoking or alcohol drinking, but they could, despite adjustment for educational level, be residually confounded by social factors.
The present study shows that the ever-chewer mortality odds ratio for tuberculosis, adjusted for age and education, was 1.5-fold in rural men and 2-fold in rural and urban women compared to never chewers. When both study areas were combined, the ever-chewer mortality odds ratio adjusted for age, sex, education and study area was 1.7 (1.5-1.9) for tuberculosis and 1.4 (1.2-1.6) for respiratory diseases other than tuberculosis. The present study results are consistent with the Cancer Prevention Study I in the USA (Henley et al., 2005), which observed elevated hazard ratios among smokeless tobacco users for respiratory diseases combined (HR: 1.28, 95% CI: 1.03-1.59), and with the Mumbai cohort study (Gupta et al., 2005) in India, which noted an about 40-46% higher risk of death from tuberculosis among tobacco chewers in both genders. The reason for this increased risk among tobacco chewers for respiratory diseases, including tuberculosis, is not clear.
In the urban study, causes of death due to cancer based on the VA reports were confirmed with medical records (Gajalakshmi et al., 2004); this was not feasible in the rural study area. Among tobacco chewers, the risk of death from cancer was about 40-50% higher in men and 60-70% higher in women compared to never chewers. The age and education adjusted mortality odds ratio associated with tobacco chewing was significant for deaths from upper aerodigestive, stomach and cervical cancers. Of the cancers mentioned above, except cervical cancer (McCann et al., 1992; Gajalakshmi et al., 2012), cancers in the other sites are well known to be caused by chewing tobacco (Stockwell and Lyman, 1986; Sankaranarayanan et al., 1989; Gupta et al., 1980; Rao et al., 1994; Dikshit and Kanhere, 2000; Balaram et al., 2002; Znaor et al., 2003; IARC Monographs, 2004; Gupta et al., 2005; Henley et al., 2005; Phukan et al., 2005; Razmara et al., 2013).
Increased heart rate and high blood pressure have been noted in tobacco chewers (Stockwell and Lyman, 1986; Bolinder and de Faire, 1998), and high levels of total cholesterol, low-density lipoprotein cholesterol and triglycerides have been noted in tobacco smokers and in tobacco chewers compared to never users of tobacco (Nanda and Sharma, 1988). A study conducted in Sweden (Bolinder et al., 1994) among men noted a relative risk of 1.4 (1.2-1.6) for cardiovascular diseases in smokeless tobacco users. The Mumbai cohort study (Gupta et al., 2005) in India found an elevated risk of death from vascular diseases among women tobacco chewers only. In both CPS I and CPS II, an elevated risk of death from cardiovascular disease was found among smokeless tobacco users (CPS-I: hazard ratio (HR): 1.18, 95% CI: 1.11-1.26; CPS-II: HR: 1.23, 95% CI: 1.09-1.39) (Henley et al., 2005). The present study shows a higher age and education adjusted mortality risk of death from stroke among tobacco chewers compared to never chewers in rural men, 2.2 (1.6-3.0), in rural women, 1.3 (1.0-1.6), and in urban women, 1.3 (1.1-1.7). However, the mortality odds ratio (men, women and study areas combined) adjusted for age, sex, education and study area was 1.4 (1.2-1.6) for stroke and 1.1 (1.0-1.2) for vascular diseases.
The strengths of the study are its large sample size, the inclusion of all deaths that occurred in the study areas, the use of a novel verbal autopsy method (with strict supervision and quality control) to assign the cause of death for all cases (deaths), the use of population controls from general population surveys conducted in the areas from where the cases were recruited, and the exclusion of smokers and alcohol drinkers from the analyses to avoid confounding by tobacco smoking and alcohol drinking.
In conclusion, the present study is the first large study in India on lifelong non-smoking non-drinkers at ages 35-69. The mortality odds ratio adjusted for age, sex, education and study area was about 30% higher in ever-chewers in south India, and the risk is higher among those in the rural area compared to those in the urban area. Chewing tobacco is the cause of 7.1% (n=1,595) of deaths from all medical causes among non-smoking non-drinkers at ages 35-69 in south India. Of the cancers, the ever-chewer mortality odds ratio adjusted for age and education was significant for upper aerodigestive cancers in urban men, urban women and rural women, for stomach cancer in rural men and urban women, and for cervical cancer in both rural and urban women. The reasons for the increased risks of stroke and respiratory diseases, including tuberculosis, among ever tobacco chewers found in this study are not clear.
|
2018-04-03T04:44:21.547Z
|
2015-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "638271e254dbddbceeb4266d254ede91abf0b51a",
"oa_license": "CCBY",
"oa_url": "http://koreascience.or.kr/article/JAKO201510534323798.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "74a916e1149fa72b2978d9bc2edd5e97c2d37588",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
255393744
|
pes2o/s2orc
|
v3-fos-license
|
Mechanical scanning probe lithography of perovskites for fabrication of high-Q planar polaritonic cavities
Exciton-polaritons are unique quasiparticles with hybrid properties of an exciton and a photon, opening ways to realize ultrafast strongly nonlinear systems and inversion-free lasers based on Bose-Einstein polariton condensation. However, the real-world applications of the polariton systems are still limited due to the temperature operation and costly fabrication techniques for both exciton materials and photon cavities. 2D perovskites represent one of the most prospective platforms for the realization of strong light-matter coupling since they possess room-temperature exciton states with large oscillator strength and can simultaneously provide planar photon cavities with high field localization due to the huge refractive index of the material. In this work, we demonstrate for the first time the mechanical scanning probe lithography method for the realization of low-cost room-temperature exciton-polariton systems based on the 2D perovskite (PEA)$_2$PbI$_4$ with exciton binding energy exceeding 200 meV. Precisely controlling the lithography parameters, we broadly adjust the exciton-polariton dispersion and radiative losses of polaritonic modes in the range of 0.1 to 0.2 of total optical losses. Our findings represent a versatile approach to the fabrication of planar high-quality perovskite-based photonic cavities supporting the strong light-matter coupling regime for the development of on-chip all-optical active and nonlinear polaritonic devices.
(*Electronic mail: anton.samusev@gmail.com) (Dated: 6 April 2023) Exciton-polaritons are unique quasiparticles with hybrid properties of an exciton and a photon, opening ways to realize ultrafast strongly nonlinear systems and inversion-free lasers based on Bose-Einstein polariton condensation. However, the real-world applications of polariton systems are still limited by the operating temperature and by costly fabrication techniques for both the exciton materials and the photon cavities. 2D perovskites represent one of the most prospective platforms for the realization of strong light-matter coupling, since they support room-temperature exciton states with large oscillator strength and can simultaneously be used for the fabrication of planar photon cavities with high field localization due to the high refractive index of the material. In this work, we demonstrate a mechanical scanning probe lithography method, affordable for research purposes, for the realization of room-temperature exciton-polariton systems based on the 2D perovskite (PEA)2PbI4 with a Rabi splitting exceeding 200 meV. By precise control of the lithography parameters, we broadly adjust the exciton-polariton dispersion and, in particular, vary the radiative coupling of the polaritonic modes to free space. Our findings represent a versatile approach to the fabrication of planar high-quality perovskite-based photonic cavities supporting the strong light-matter coupling regime for the development of on-chip all-optical active and nonlinear polaritonic devices.
Photonics deals with both fundamental and applied aspects of operating with optical signals, as well as with the prospective design of energy-efficient optical computing devices. Implementing such devices, where light is controlled by light, requires systems with strong optical nonlinearity. Optical systems with strong coupling of a photon cavity mode to an exciton resonance, resulting in exciton-polaritons, demonstrate a nonlinear response up to 3-4 orders of magnitude higher than in weakly coupled systems. 1 Such systems are realized by embedding an excitonic material with high exciton oscillator strength into a photon cavity supporting a mode with strong field enhancement and a long radiative lifetime. 2 The search for excitonic materials, as well as the design of photon cavities suitable for incorporation with efficient fabrication methods, is therefore of great importance for polaritonics today.
One of the most studied and widely used material platforms for exciton-polariton systems is the GaAs quantum well (QW) embedded into a vertical Bragg cavity. 3 Due to the low exciton binding energy, the operation of these polariton systems is limited to cryogenic temperatures. 4,5 The temperature limitations can be overcome with wide-gap semiconductor QWs such as ZnO 6 or GaN 7, but these still require time-consuming and costly fabrication methods such as epitaxial growth techniques. Monolayer transition metal dichalcogenides have become promising materials for room-temperature polariton systems, 8,9 though their potential applications are still limited by technological scalability. Currently, halide perovskites represent a promising platform for exciton-polariton systems due to their easy and cost-efficient fabrication as well as their outstanding excitonic properties, making it possible to implement room-temperature exciton-polariton systems. 10 Moreover, two-dimensional perovskites, with an enormous exciton binding energy in the range of 190−400 meV 11 and an exceptionally strong excitonic response, 12 have experimentally demonstrated the record-high value of Rabi splitting among perovskites, exceeding 200 meV at room temperature, 13 and therefore represent one of the promising materials for polaritonic systems.
a) These authors contributed equally
The most commonly used photon resonator in polaritonic systems is the vertical Bragg cavity, since it provides all the necessary requirements such as low optical losses, controllable lifetimes, and high field enhancement. 14 Exciton-polaritons in perovskite materials, and also in 2D perovskites, have already been demonstrated in Bragg resonators. 13,15,16 Nevertheless, such structures are bulky, i.e. have large vertical sizes, and also require sophisticated and costly fabrication methods, which severely hinders real-world applications. 17 Meanwhile, planar photon cavities compatible with on-chip designs, such as metasurfaces or photonic crystal slabs (PCSs), can demonstrate comparable characteristics and have recently been employed in exciton-polariton systems with various materials. 9,18,19 Moreover, high-Q symmetry-protected bound states in the continuum (BICs), appearing in metasurfaces, when strongly coupled to the exciton resonance, 20 even allow the realization of polariton Bose-Einstein condensation. 21 Although planar photon cavities based on perovskites are more suitable for future applications, there is still a lack of efficient and low-cost cavity fabrication techniques.
Previously, several methods of perovskite nanostructuring have been demonstrated; ion (or electron) beam lithography provides high resolution but causes degradation of the material 22,23. Direct laser writing avoids this problem but has a limited lateral resolution above 200 nm. 24 The nanoimprint method maintains the resolution of ion (or electron) beam lithography and does not cause degradation, but the stamp geometry cannot be changed after its fabrication. 19,25 From this point of view, mechanical scanning probe lithography (m-SPL) 26 (Fig. 1a) appears to be one of the most versatile and convenient nanostructuring techniques for the perovskite planar exciton-polariton system, since the mechanical cutting of perovskites does not cause material degradation, the atomic force microscopy (AFM) tip can be less than 10 nm in its lateral sizes, and the high-precision piezo-stages of m-SPL allow for the dynamic tuning of various parameters of the resulting structure.
In this work, we demonstrate a universal 2D-perovskite film nanostructuring technology, affordable for research purposes, for the realization of room-temperature exciton-polariton planar cavities based on a PCS with precise control of the polariton dispersion. By varying the period and modulation of the PCS, we change the exciton-polariton dispersion and its radiative lifetime. The developed m-SPL method for perovskites opens the way for the realization of planar polaritonic cavities with on-demand optical properties for nonlinear and active polaritonics.
To fabricate the sample, first, a thin film of the 2D perovskite (PEA)2PbI4 is synthesized by the solvent engineering method 27. The solution for the synthesis is prepared by dissolving 149.4 mg of PEAI and 138.3 mg of PbI2 in 1 ml of dimethylformamide. Before the synthesis of the thin film, we clean 12×12 mm SiO2 substrates with soapy water, acetone, and isopropanol in sequence. To achieve a hydrophilic surface of the substrates, they are placed in an oxygen plasma cleaner for 10 minutes. The synthesis of the (PEA)2PbI4 films is performed in a glove box with a dry nitrogen atmosphere using the spin-coating method. The prepared perovskite solution is deposited on top of the substrate, which is then accelerated for 2 seconds and rotated at a speed of 4000 rpm for 60 seconds. The resulting film is annealed at 70 °C for 10 minutes. The morphology of the synthesized film is studied with AFM, yielding a 130 nm film thickness and a surface roughness of 15 nm (See Fig. S1 in the Supplementary Information (SI)).
For m-SPL we use an atomic force microscope AIST-NT SMART SPM and cantilevers with a single-crystal diamond tip (TipsNano DRP-IN) with a resonant frequency of 500-1000 kHz, a normal spring constant of 350 N/m (See SI for the details), and a tip curvature radius of 25-35 nm. Before the lithography, the film morphology was characterized with AFM in a semi-contact regime. The use of the piezo-stages of the atomic force microscope makes it possible to control the position of the diamond-tipped cantilever with nanometer accuracy. Thus, when fabricating the 1D PCS, we precisely control the period, the height modulation, and the trench width (see Fig. 1a).
During the fabrication, the cantilever is pushed towards the surface with a force controlled by the AFM feedback system. The cantilever moves once along the specified direction, forming trenches in the perovskite film. During trench formation, the material is partially compressed and partially ejected towards the sides of the trench; however, most of it is moved by the tip to the end of the trench. At the end of each trench, the probe lifts off the film surface. The cut grains remain at the edge of the fabricated structure and do not affect the optical properties of the whole PCS. The height modulation depends on the applied cantilever force, which is defined by the shift of the cantilever from its initial position and its stiffness. The force required to achieve a modulation of 15−50 nm on (PEA)2PbI4 perovskite films is experimentally estimated to be in the range of 5−30 µN. Since the tip is conical in shape, the minimum trench width depends on the modulation h_m. For a modulation of 15−50 nm, the width at the half-height of the trench is 80−130 nm (See SI for more details). The speed of the cantilever during the lithography process is limited to 3 µm/s, because at higher values the probe begins to pull out perovskite grains. The optimal speed for the 2D-perovskite film lithography is found to be approximately 1 µm/s.
By choosing the trajectory of the AFM tip with the piezo-stages, this method allows the realization of almost arbitrary structures. In particular, it is possible to change the period of the PCS by programming the cantilever movement coordinates with nanometer precision. One of the most important advantages of m-SPL is the potential applicability of this method for the creation of PCSs, coupled waveguides, or other planar photonic designs on one 2D-perovskite film, combining them into one photonic on-chip system. The fabricated PCSs have a lateral size of 15×30 µm². The typical morphology of the structure studied with AFM and scanning electron microscopy (SEM) is shown in Fig. 2a. By varying the cantilever displacement coordinate, we fabricated PCSs with periods of d = 320, 340, 360, 380 nm and a modulation of about h_m = 20 nm (Fig. 2b). By changing the pushing force in the range of 9-24 µN, we also realize structures with different modulations of h_m = 16, 24, 40, 49 nm and a period of d = 340 nm (Fig. 2c). The resulting structures are expected to have different spectral positions of the resonances and also different coupling with the free space, which we study further.
In order to study the leaky optical modes of the fabricated PCSs, we perform angle-resolved spectroscopy measurements based on a back focal plane (BFP) setup. The BFP of the objective lens (Mitutoyo NIR ×50 with an N.A. of 0.55) is imaged on a slit spectrometer coupled to a liquid nitrogen-cooled imaging CCD camera (Princeton Instruments SP2500+PyLoN) using the 4f scheme (see SI for the details). For the illumination of the sample, as well as for the measurements of the reflectance spectra, a halogen lamp is used. The plane of incidence contains both the normal to the sample and the direction of periodicity of the PCS (see Fig. 1a). Before impinging on the slit of the imaging spectrometer, light reflected from the sample passes through a linear polarizer aligned such that TE modes are studied. The scheme is also used to obtain the angle-resolved photoluminescence spectra using a femtosecond laser (Pharos, Light Conversion) coupled with a broad-bandwidth optical parametric amplifier (Orpheus-F, Light Conversion) at a wavelength of 480 nm and a 100 kHz repetition rate as a non-resonant excitation source. All measurements were performed at room temperature (300 K).
We measure the angle-resolved reflectance and photoluminescence spectra for each of the fabricated PCSs, shown in Fig. 3. The measured data show pronounced leaky modes in the spectral region below the exciton resonance around 2.37 eV. All the studied samples demonstrate a curving of the mode dispersion asymptotically approaching the exciton level in the high-frequency region, revealing signs of the strong light-matter coupling regime. 19 In order to verify the strong light-matter coupling regime, we extract the modes from the experimental data by the following procedure: first, we subtract the unbound exciton photoluminescence signal from the experimental dispersion at each k_x/k_0, then we fit the resulting modes by a Lorentz peak function. Combining the spectral peak positions for each in-plane wavevector k_x/k_0, we obtain the extracted mode dispersion. Since the upper polariton branch (UPB) above the exciton resonance does not exist due to the strong non-radiative absorption, the only way to confirm the strong light-matter coupling regime is to fit the extracted mode with the lower polariton branch (LPB), estimated by the two-coupled-oscillator model as 28

Ẽ_LP(k) = (Ẽ_x + Ẽ_c(k))/2 − √(g² + (Ẽ_x − Ẽ_c(k))²/4),   (1)

where Ẽ_x = E_x − iγ_x is the complex energy accounting for the spectral position and the linewidth of the uncoupled exciton resonance, Ẽ_c(k) = E_c(k) − iγ_c is the complex dispersion of the uncoupled cavity photon mode, and g is the light-matter coupling coefficient. The Rabi splitting Ω_R corresponds to the minimal energy distance between the UPB and the LPB; however, as the UPB does not exist, we can only estimate this value based on the described model:

Ω_R = √(4g² − (γ_x − γ_c)²).   (2)

The uncoupled photon cavity mode has a linear dependence of energy on the wavenumber k_x/k_0, since the refractive index is considered to have negligible changes in the considered spectral range without accounting for the exciton resonance.
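The lower-branch energy of the two-coupled-oscillator model can be obtained numerically by diagonalizing the 2×2 non-Hermitian coupling Hamiltonian. The energies, linewidths, and coupling below are illustrative round numbers chosen near the values quoted in the text, not the fitted parameters:

```python
import numpy as np

E_x, gamma_x = 2.37, 0.018   # exciton energy and half-width, eV (illustrative)
g = 0.12                     # light-matter coupling coefficient, eV (illustrative)

def lpb(E_c, gamma_c):
    """Real part of the lower eigenvalue of the 2x2 coupling Hamiltonian
    [[E_x - i*gamma_x, g], [g, E_c - i*gamma_c]]."""
    Ex_c = E_x - 1j * gamma_x          # complex exciton energy
    Ec_c = E_c - 1j * gamma_c          # complex cavity-photon energy
    H = np.array([[Ex_c, g], [g, Ec_c]])
    return np.linalg.eigvals(H).real.min()

# With the photon tuned to the exciton, the LPB lies well below 2.37 eV:
print(lpb(2.37, 0.023) < 2.37)  # True
```

Sweeping `E_c` over the linear cavity dispersion reproduces the characteristic anticrossing shape of the lower polariton branch.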
Therefore, we estimate the uncoupled cavity photon dispersions as E_c(k_x) = k × k_x + b based on Fourier modal method calculations 29 (see SI for details). The coupling coefficient g, as well as the half-widths of the uncoupled photon γ_c and exciton γ_x, are chosen as the optimization parameters in the fitting of the LPB. The resulting real part of the PL dispersion E_LP, optimized for each of the samples, is shown as red curves in Fig. 3.
The values estimated from the fitting for the uncoupled cavity photon half-width γ_c and the exciton half-width γ_x do not exceed 23 meV and 18 meV, respectively. The resulting values of the Rabi splitting Ω_R for each of the PCSs are shown in Figs. 4e and 4f and exceed 230 meV for all PCS modulations. The obtained values satisfy the strong light-matter coupling criteria (g > |γ_c − γ_x|/2; Ω_R > (γ_c + γ_x)/2) 30 with a margin in all studied samples.
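The lower polariton branch of the two-coupled oscillator model and the resulting Rabi splitting described above can be evaluated numerically. The following sketch uses illustrative parameter values (the exciton energy is near 2.37 eV as in the text, but the half-widths, coupling coefficient, and linear cavity slope are placeholders, not the fitted values of this work):

```python
import numpy as np

def lower_polariton(E_c, E_x, gamma_c, gamma_x, g):
    """Lower polariton branch of the two-coupled oscillator model.

    E_c, E_x: real mode energies (eV); gamma_c, gamma_x: half-widths (eV);
    g: light-matter coupling coefficient (eV). Complex energies are E - i*gamma.
    """
    Ec = E_c - 1j * gamma_c
    Ex = E_x - 1j * gamma_x
    return 0.5 * (Ec + Ex) - np.sqrt(g**2 + 0.25 * (Ec - Ex) ** 2)

def rabi_splitting(g, gamma_c, gamma_x):
    """Omega_R = 2 * sqrt(g^2 - (gamma_c - gamma_x)^2 / 4)."""
    return 2.0 * np.sqrt(g**2 - 0.25 * (gamma_c - gamma_x) ** 2)

# Illustrative values (assumptions, not the fitted paper parameters)
E_x, gamma_x = 2.37, 0.018          # exciton energy and half-width, eV
gamma_c, g = 0.023, 0.12            # photon half-width and coupling, eV
kx = np.linspace(0.0, 0.5, 100)     # in-plane wavevector k_x/k_0
E_c = 0.9 * kx + 2.15               # assumed linear cavity dispersion, eV
E_lp = lower_polariton(E_c, E_x, gamma_c, gamma_x, g)

# Strong-coupling criteria quoted in the text
assert g > abs(gamma_c - gamma_x) / 2
omega_R = rabi_splitting(g, gamma_c, gamma_x)
assert omega_R > (gamma_c + gamma_x) / 2
print(f"Rabi splitting: {1e3 * omega_R:.0f} meV")  # → 240 meV for these values
```

The real part of `E_lp` stays below the exciton level and bends toward it at large k_x/k_0, reproducing the qualitative shape of the measured dispersions.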
The leaky mode dispersion of the 1D PCS is determined by the waveguide modes folded into the first Brillouin zone with edges at k_x^{BZ} = ±π/d, where d is the PCS period. For a planar waveguide of the chosen thickness, the spectral positions of the folded uncoupled leaky modes, and hence of the polariton branches, shift proportionally with the change of the PCS period. 31 Indeed, the difference in the spectral position of the polariton modes can be noticed in Figs. 3(a-d).
In order to reveal the dependence of the spectral position of the polariton mode on the period, we extract the reflection spectra at normal incidence, k_x/k_0 = 0 (Fig. 4a). The frequencies of the modes are estimated by fitting with the Fano resonance function 32-34 (see SI for details) and, as expected, show a monotonic decrease with increasing PCS period (Fig. 4c). The value of the Rabi splitting Ω_R depends on the coupling coefficient and the linewidths of the uncoupled exciton and cavity photon modes (Eq. 2). Since the coupling coefficient g depends on the cavity mode localization and oscillator strength 28,35, it should not change strongly with the PCS period or other geometrical parameters. This was confirmed by the results of fitting all the experimental data. In turn, the uncoupled exciton linewidth γ_x is a property of the material and thus should not depend on the PCS design, which we also confirmed by analyzing the data. Hence, the only way to tune the Rabi splitting is to vary the radiative part of the leaky-mode losses γ_c, which is dictated by the PCS modulation. Thus, with the variation of the PCS period, we neither expect nor observe a pronounced dependence of the estimated Rabi splitting Ω_R values (Fig. 4e).
The variation of the modulation h_m at constant period changes the differential reflection contrast of the experimentally measured polariton modes, as shown in Figs. 3e-h and Fig. 4b. Higher modulation causes stronger coupling of the leaky mode with free space, or in other words, increases the radiative losses of the mode. In order to reveal this dependence, we estimate the ratio γ_rad/γ_total by fitting the amplitude and asymmetry parameter of the Fano resonance (see SI for details) for different modulations h_m at k_x/k_0 = 0.14 and show it in Fig. 4d. Note that the non-radiative losses should be nearly constant for each of the PCSs, because they are mostly dictated by the material defect states and excitonic absorption. Hence, the total optical losses γ_c rise with increasing PCS modulation, which leads to a moderate reduction of the Rabi splitting value (Fig. 4f). Thus, by applying different forces on the cantilever during the m-SPL process, it is possible to control the mode contrast and the value of the Rabi splitting of the leaky modes in the planar exciton-polariton PCS.
To conclude, in this work we have demonstrated the method of mechanical scanning probe lithography for the realization of planar room-temperature exciton-polariton systems based on 2D perovskites. The fabricated PCSs support long-living polariton modes with a Q-factor up to 100. Thanks to the flexibility of the m-SPL method, it is possible to vary the modulation of the structures with a precision of a few tens of nanometers. The period of the PCS is controlled with nanometer precision, and the width of the trenches is tens of nanometers. In this way, we have achieved full control over the dispersion, optical radiative losses, and Rabi splitting of the exciton-polariton states in the planar photon cavity based on (PEA)₂PbI₄. Note that the demonstrated method can be applied to other halide perovskites and to the fabrication of other planar photon cavities, including metasurfaces and PCSs. Our work presents an affordable and time-efficient method for the fabrication of planar high-optical-quality exciton-polariton systems based on 2D perovskite films, which is highly demanded for the realization of non-equilibrium exciton-polariton condensation in perovskite metasurfaces 36 and for optical nonlinear and active on-chip polaritonic devices.

S1. SURFACE ROUGHNESS OF THE PRISTINE PEROVSKITE FILM

The topographical surface map of the pristine quasi-2D perovskite (PEA)₂PbI₄ thin film is shown in Fig. S5a. The topography is obtained by atomic force microscopy (AFM). Structuring by mechanical scanning probe lithography requires a sample roughness lower than the intended modulation of the photonic crystal slab, since roughness can cause non-uniform broadening of the optical modes, leading to a decrease in the radiative lifetime. Based on the obtained data, we estimate the height distribution shown in Fig. S5b. The width at half-height of this distribution determines the roughness of the sample. The roughness of the fabricated samples does not exceed 15 nm.
S2. TIP FOR MECHANICAL SCANNING PROBE LITHOGRAPHY
For the procedure of mechanical scanning probe lithography, we use an atomic force microscope (SMART) with cantilevers carrying a single-crystal diamond tip (TipsNano DRP-IN). The cantilevers have a resonant frequency of 500-1000 kHz, a spring constant of 100-600 N/m, and a normal spring constant of 350 N/m. The normal spring constants of the AFM tips used in the experiments were calculated using the Sader method ? . This method determines the spring constant of an atomic force microscope cantilever from the following parameters: the resonant frequency, the quality factor of the cantilever in air, and its geometrical dimensions. The spring constant is given by

$$k = 0.1906\,\rho_{\mathrm{air}}\, b^2 L\, Q\, \Gamma_i(\omega_{\mathrm{vac}})\, \omega_{\mathrm{vac}}^2,$$

where ω_vac is the resonant frequency of the cantilever in air, b and L are the width and length of the cantilever, Q is the quality factor, ρ_air is the density of air, and Γ_i is the imaginary part of the so-called "hydrodynamic function" ? . This hydrodynamic function Γ(ω) depends only on the Reynolds number Re = ρ_air ω b²/(4η), where η is the viscosity of the surrounding environment, and is independent of the cantilever thickness and density. We experimentally measured the parameters Q, L, b, and ω for each cantilever. The physical constants associated with the surrounding environment were taken from the literature: ρ_air = 1.18 kg m⁻³ and η = 1.86 × 10⁻⁵ kg m⁻¹ s⁻¹. Fig. S6 schematically shows the geometry of the tip used. The tip has a conical shape; the radius of curvature of its apex is less than 35 nm, the opening angle is 45 degrees, and the height of the tip is (500 ± 100) nm. Thus, knowing the geometry of the probe, it is possible to unambiguously determine the minimum dimensions of the cavities depending on the modulation depth.
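The Sader estimate above can be turned into a short calculation. The cantilever numbers below (width, length, quality factor, resonant frequency, and the hydrodynamic-function value Γ_i) are illustrative placeholders, not the measured DRP-IN values:

```python
import math

def sader_spring_constant(rho_air, b, L, Q, Gamma_i, omega_vac):
    """Normal spring constant via the Sader method:
    k = 0.1906 * rho_air * b^2 * L * Q * Gamma_i(omega_vac) * omega_vac^2."""
    return 0.1906 * rho_air * b**2 * L * Q * Gamma_i * omega_vac**2

def reynolds_number(rho_air, omega, b, eta):
    """Re = rho_air * omega * b^2 / (4 * eta), the argument of Gamma(omega)."""
    return rho_air * omega * b**2 / (4.0 * eta)

rho_air, eta = 1.18, 1.86e-5     # literature values used in the text (SI units)
b, L, Q = 30e-6, 125e-6, 450     # assumed cantilever geometry and quality factor
omega_vac = 2 * math.pi * 750e3  # assumed resonant frequency (rad/s)
Gamma_i = 0.01                   # assumed tabulated hydrodynamic-function value

Re = reynolds_number(rho_air, omega_vac, b, eta)
k = sader_spring_constant(rho_air, b, L, Q, Gamma_i, omega_vac)
print(f"Re = {Re:.0f}, k = {k:.1f} N/m")
```

In practice Γ_i is evaluated from tabulated values of the hydrodynamic function at the computed Reynolds number; with the real measured Q, L, b, and ω this yields the quoted 350 N/m class of stiffness.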
S3. PUSHING FORCE
For the procedure of mechanical scanning probe lithography, we set the pushing force on the perovskite thin film. The pushing force can be estimated as ?

$$F = k\,\Delta Z,$$

where k is the cantilever stiffness and ΔZ is the value characterizing the bending of the cantilever, induced by its shift with respect to the sample along the surface normal. In order to make the PCS with the necessary modulation, the pushing force was selected as follows: starting with the minimum pressure, we increased the force in steps for each following structure, until an excessive pushing force completely carved out the perovskite grains. We estimated the force required to obtain PCSs with modulation in the 15-50 nm range on the quasi-two-dimensional perovskite (PEA)₂PbI₄ using the probes indicated above for the m-SPL procedure.
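The force estimate F = kΔZ is simple enough to tabulate. A minimal sketch, using the 350 N/m normal spring constant quoted above and assumed bending values ΔZ:

```python
def pushing_force(k, delta_z):
    """F = k * delta_z: cantilever stiffness (N/m) times normal bending (m)."""
    return k * delta_z

k = 350.0  # N/m, normal spring constant of the tips used
for dz_nm in (20, 50, 100):  # assumed cantilever bending values, nm
    f_uN = pushing_force(k, dz_nm * 1e-9) * 1e6
    print(f"dZ = {dz_nm:3d} nm -> F = {f_uN:.1f} uN")
```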
S4. ANGLE-RESOLVED SPECTROSCOPY AT ROOM TEMPERATURE
For the optical characterization of the photonic crystal slabs (PCS), angle-resolved photoluminescence and reflection spectra were measured using the optical setup shown in Fig. S7. The sample was placed in a chamber with a vacuum pump and an optical window. The vacuum pump evacuates the air down to a pressure of around 10⁻⁵ bar to avoid degradation of the perovskite sample by oxygen. All measurements were carried out at room temperature. (See the "optical experiment" section in the main text.)
S5. SPECTRAL PROCESSING OF EXPERIMENTAL DATA
To increase the contrast of the angle-resolved reflection spectra, we perform the data treatment shown in Fig. S8. The first step is to normalize the reflection spectrum at each wavenumber k_x/k_0 by the average value of that spectrum over energy (Fig. S8b). The second step is to find a linear function describing the underlying background of the spectrum at each wavenumber k_x/k_0. The spectrum at each wavenumber k_x/k_0 is then normalized to the corresponding linear function, yielding the final data (Fig. S8c).
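The two-step contrast enhancement described above can be sketched as follows; the synthetic reflectance map stands in for the measured data:

```python
import numpy as np

def enhance_contrast(R, energies):
    """Two-step treatment of an angle-resolved reflectance map R
    with shape (n_energy, n_kx).

    Step 1: normalize each k_x column by its mean over energy.
    Step 2: fit a linear background in energy for each column
    and divide it out.
    """
    R1 = R / R.mean(axis=0, keepdims=True)
    out = np.empty_like(R1)
    for j in range(R1.shape[1]):
        slope, intercept = np.polyfit(energies, R1[:, j], 1)
        out[:, j] = R1[:, j] / (slope * energies + intercept)
    return out

# Synthetic example: a linear background with a weak resonance dip
E = np.linspace(2.0, 2.4, 200)
R = np.outer(0.5 + 0.2 * E, np.ones(50))  # smooth linear background
R[90:110, :] *= 0.9                        # weak resonance dip
Rn = enhance_contrast(R, E)
```

After the treatment, the off-resonance background sits close to 1 in every column while the resonance dip stands out, which is exactly the contrast enhancement used before mode extraction.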
S6. FITTING OF POLARITON DISPERSIONS
To extract the polariton mode from the angle-resolved photoluminescence spectrum, we first subtract the uncoupled exciton resonance signal from the measured spectrum at each wavenumber k_x/k_0 and fit the data with a Lorentz peak function. Combining the positions of the peaks at all wavenumbers k_x/k_0, we obtain the polariton dispersion. The coupling strength between the cavity photon and the exciton resonance can be estimated using the two-coupled oscillator model 28, which takes into account the spectral position and linewidth of the unbound exciton Ẽ_x = E_x − iγ_x and of the unbound cavity photon mode Ẽ_c(k) = E_c(k) − iγ_c. The polariton states are defined as

$$E_{\mathrm{LP,UP}}(k) = \frac{\tilde{E}_c(k) + \tilde{E}_x}{2} \mp \sqrt{g^2 + \frac{\left[\tilde{E}_c(k) - \tilde{E}_x\right]^2}{4}},$$

where g is the light-matter coupling strength. To use this model for parameter fitting, it is necessary to estimate the parameters of the uncoupled exciton resonance and photon. The exciton level E_x was found from the reflection spectrum of the unstructured sample. The uncoupled photon parameters E_c(k_x) = k × k_x + b for structures with different periods or modulations were determined in different ways. For the period-varying PCSs, the slope coefficient k of the unbound photon was determined from the polariton dispersion calculated by the Fourier modal method 29 (see Fig. S9a), by extrapolating the polariton dispersion to large k_x. The free parameter b was estimated by the least-squares method from fitting the experimental polariton dispersions. For the PCSs with varying modulation, the unbound photon parameters k and b were first estimated from the Fourier modal method, then used as fitting parameters in the two-coupled oscillator model, and finally averaged and fixed for all structures.
A. Fano resonance in optical cavities
Using the theoretical model of a single-mode optical resonator coupled to two ports, one can obtain the intensity reflection coefficient 34

$$R(\omega) = \frac{\left[r(\omega - \omega_0) \pm t/\tau\right]^2}{(\omega - \omega_0)^2 + (1/\tau)^2}, \qquad (S6)$$

where ω_0 and τ are the center frequency and the lifetime of the resonance, respectively, and r and t are real parameters of the S-matrix with r² + t² = 1. The sign ± corresponds to the parity of the resonant mode with respect to the mirror plane. In all systems except those with r or t equal to zero, the spectral profile of the reflectance has a Fano line shape.
Let us reduce formula (S6) to the form in which it is used to fit the experimental data. The total losses of the system are γ_total = 1/τ. Dividing the numerator and denominator of (S6) by γ_total² and changing the parameters as r² = A, (ω − ω_0)/γ_total = ε, ±√(1 − r²)/r = q, and B = Aq², we obtain

$$R = \frac{A(\varepsilon + q)^2}{\varepsilon^2 + 1}, \qquad (S7)$$

the formula used for fitting the optical modes in reflection.
B. Fit Fano function with losses
The Fano function with an amplitude coefficient was used to fit the parameters of the exciton-polariton modes in reflection, shown in Fig. S10 33:

$$f = \frac{B(q + \varepsilon)^2}{q^2(1 + \varepsilon^2)},$$

where B is the amplitude coefficient, q is the phenomenological shape parameter, ε = (E − E_c)/γ_tot, E_c is the resonant energy, and γ_tot is the linewidth of the autoionized state. In the limit |q| → ∞, the line shape is determined by the transition through a discrete state with a Lorentz profile, as the transition to the continuum is very weak 32. The total loss can be represented as a sum of radiative and non-radiative losses, γ_tot = γ_rad + γ_nrad; the Lorentz profile can then be expressed as 44

$$f_L = \frac{B(\gamma_{\mathrm{rad}} + \gamma_{\mathrm{nrad}})^2}{(E - E_c)^2 + (\gamma_{\mathrm{rad}} + \gamma_{\mathrm{nrad}})^2}.$$

The parameters B, q, E_c, and γ_tot are estimated by fitting the Fano function to the experimental reflectance spectrum at k_x/k_0 = 0.14. We use the limiting transition of the Fano resonance to the Lorentz profile to express the ratio of the radiative and total losses.
$$\frac{B}{\varepsilon^2 + 1} = \frac{B(\gamma_{\mathrm{rad}} + \gamma_{\mathrm{nrad}})^2}{(E - E_c)^2 + (\gamma_{\mathrm{rad}} + \gamma_{\mathrm{nrad}})^2} = f_L \qquad (S11)$$

Thus it is possible to relate the fit parameters to the ratio of radiative and total losses. Mechanical scanning probe lithography is also an excellent method for structuring other perovskites, including 3D perovskites. As an example, a thin film of MAPbBr₃ perovskite was structured (see Fig. S11b).
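The Fano fitting of the reflectance spectra described in this section can be sketched with a standard nonlinear least-squares fit. The spectrum below is synthetic and the parameter values are illustrative, not the measured ones:

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(E, B, q, E_c, gamma_tot):
    """Fano line shape with amplitude coefficient:
    f = B * (q + eps)^2 / (q^2 * (1 + eps^2)), eps = (E - E_c) / gamma_tot.
    In the limit |q| -> inf it reduces to a Lorentzian B / (1 + eps^2)."""
    eps = (E - E_c) / gamma_tot
    return B * (q + eps) ** 2 / (q**2 * (1.0 + eps**2))

# Synthetic reflectance spectrum with weak noise
E = np.linspace(2.10, 2.30, 400)
B0, q0, Ec0, g0 = 0.6, 3.0, 2.20, 0.012  # "true" parameters (illustrative)
rng = np.random.default_rng(0)
data = fano(E, B0, q0, Ec0, g0) + 0.005 * rng.standard_normal(E.size)

# Fit B, q, E_c, gamma_tot starting from a rough initial guess
popt, _ = curve_fit(fano, E, data, p0=(0.5, 2.0, 2.19, 0.02))
B, q, E_c, gamma_tot = popt
print(f"E_c = {E_c:.3f} eV, gamma_tot = {1e3 * abs(gamma_tot):.1f} meV, q = {q:.2f}")
```

The same fit, applied at each k_x/k_0, yields the resonance energies and total linewidths used in the main text; the amplitude and asymmetry parameters then give access to the γ_rad/γ_tot ratio via the Lorentz limit above.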
From powders to bulk metallic glass composites
One way to adjust the properties of a material is to change its microstructure. This concept is not easily applicable to bulk metallic glasses (BMGs), because they do not consist of grains or different phases and their microstructure is therefore very homogeneous. One obvious way to integrate inhomogeneities is to produce bulk metallic glass composites (BMGCs). Here we show how to generate BMGCs via high-pressure torsion (HPT) starting from powders (amorphous Zr-MG and crystalline Cu). Using this approach, the composition can be varied, and by changing the applied shear strain, the refinement of the microstructure is adjustable. This process permits the production of amorphous/crystalline composites in which the scale of the phases can be varied from the micrometer to the nanometer regime. Even full mixing of the two phases and the generation of new metallic glasses can be achieved. The refinement of the microstructure increases the hardness, and a hardness higher than that of the initial BMG can be obtained.
Microstructural evolution as a function of strain
In Fig. 1a, the microstructure of Zr-MG + 20 wt% Cu is presented as a function of the applied strain (the shear direction is indicated with an arrow). All micrographs are backscatter-detector images, in which Cu and the Zr-based MG are easily distinguished. The images in the lower row have a significantly higher magnification. The two phases in the coarse state differ in their mechanical properties, which strongly influences the deformation. In the beginning, Cu sustains most of the deformation, as it has a lower hardness and yield strength. Zr-MG is more difficult to deform, and the initial particles are easily distinguished as they change their initial shape and size only marginally (one of these particles is marked with a white arrow in Fig. 1a). Cu acts as a glue and holds the amorphous particles together. Due to the heavy plastic deformation, its grain size is refined down to the range of several hundred nanometers. Higher strains (see the micrographs for γ = 50) force the Zr-MG to deform and bond together. The amorphous phase therefore forms long elongated regions with a length of about 100 µm in the shear direction and widths up to 10 µm, in which some features of the initial particles are still detectable (area marked by the black arrow). The Cu bands become thinner and more evenly distributed, but large Cu-containing areas can still be found. The Cu grains refine to less than 100 nm in size. The micrographs at γ = 80 show a further refinement of the microstructure. The Cu and Zr-MG lamellae decrease in length and in width: all of the Cu lamellae are thinner than 1 µm and most of them exhibit a thickness of 20 nm. Cu grains are no longer detectable in the SEM micrographs, but the thickness of the thinnest lamellae can be taken as an upper limit for the grain size in these bands. The elongated regions of the amorphous phase shrink below 100 µm in length and 10 µm in width.
From γ = 80 to γ = 165, a pronounced increase in the number of thin Zr-MG lamellae occurs, so that the apparent Cu network in the low-magnification images is in reality a Cu-rich region consisting of Cu lamellae with lengths below 1 µm and widths below 100 nm, separated by approximately equally sized amorphous lamellae. Additionally, the boundaries between the two phases change from sharp and easily detectable to more blurry interfaces, as the Cu starts to mix with the amorphous phase at higher strains. This process already starts at γ = 80 but is clearly recognizable at γ = 165. The mixing of the two phases progresses with increasing applied deformation. At γ = 250, elongated Cu-rich and Zr-MG-rich regions can still be detected with smooth transitions in between, but at γ = 390 the SEM micrographs seem to show a single-phase metallic glass.
The ratio of the amorphous and crystalline phases can be easily modified by varying the ratio of the two initial powders. To investigate the influence of the phase ratio, compositions with higher contents of Cu were produced; SEM micrographs of the cross sections of samples with 40 wt% Cu at different applied strains are shown in Fig. 1b. At first glance, the major difference from the composition with less Cu in Fig. 1a is the higher amount of the crystalline phase, but the deformation mechanism does not differ. The softer Cu phase starts to deform first, but with a higher amount of applied strain, the harder Zr-MG starts to form long elongated bands, which shrink with increasing deformation. At higher strains, the two phases start to mix until a homogeneous material is formed. However, by comparing the micrographs for the two compositions, it can be clearly seen that a higher content of Cu requires higher strains to reach the same refinement of the microstructure. To obtain the saturation microstructure (i.e., the monolithic metallic glass), the applied strain for Zr-MG + 40 wt% Cu has to be doubled compared to Zr-MG + 20 wt% Cu.
The content of Cu was further increased to investigate the limits of mixing and any resulting change of the deformation behavior. In Fig. 1c, SEM micrographs of Zr-MG 60 wt% Cu are shown. Compared to the samples with 20 and 40 wt% Cu, the deformation behavior in the beginning does not change and a lamellar structure is formed (see γ = 550-1600). At higher applied strains (γ = 3300-5000), the micrographs exhibit a deviation from the deformation behavior of the samples with low Cu content. The lamellae shrink in thickness with higher applied strain, but mainly a break-down of the lamellar configuration occurs. A transition microstructure develops, with residual lamellar blocks (see circled areas in Fig. 1c) embedded in a nanocrystalline (see black arrow) and amorphous structure (see white arrow). At γ = 11000, both phases seem to be fully mixed and a single-phase microstructure is obtained. The amount of strain needed to reach this saturation microstructure is about 10 times higher than for Zr-MG 40 wt% Cu and about 20 times higher than for Zr-MG 20 wt% Cu.
The composition with the highest Cu content investigated was Zr-MG 80 wt% Cu; SEM micrographs of the cross sections are displayed in Fig. 2. Again, the Zr-MG particles elongate and a lamellar structure is obtained in the beginning, but at applied strains of γ = 5000 and higher, the lamellae start to break and the microstructure does not change significantly anymore, even after applying extremely high strains (up to γ = 18900). Since the microstructure does not change significantly even after this enormous increase of the applied strain, it is assumed that saturation is reached. The saturation microstructure seems to consist of a nanocrystalline Cu-rich matrix (in Fig. 2, at higher magnification, crystals indicated with black arrows can be seen as small freckles in the light grey area) in which elongated Zr-MG bands with a length of several micrometers are embedded (some are indicated with a white arrow in Fig. 2).
The evolution of the microstructure is also reflected in XRD measurements. XRD profiles of Zr-MG (undeformed and deformed), crystalline Cu, and the four investigated compositions in a near-saturated condition are displayed in Fig. 3. The measured range is concentrated on the first (and strongest) peak of Zr-MG, which is very broad and has low intensity compared to the more distinctive peaks of crystalline Cu. The XRD results of an HPT-deformed pure Cu powder, which has a grain size of about 100 nm 24, are also depicted in Fig. 3. The XRD patterns of the undeformed and deformed Zr-MG coincide exactly, which indicates that the HPT process does not influence the single-phase BMG. The influence of the applied strain can be clearly seen for the three samples of Zr-MG 60 wt% Cu, for which the number of rotations in the HPT process was varied from 50 to 200 and 500. The two Cu peaks are very dominant for the specimen with 50 rotations but shrink as the applied strain increases. After 200 rotations, the first Cu peak has a similar height to the amorphous peak and both Cu peaks have broadened significantly, which can be caused by two effects: smaller crystallite sizes and higher defect densities in the crystals. After 500 rotations, the crystalline peaks disappear and only the amorphous peak remains, without detectable peaks of crystalline Cu (indicated with arrows in Fig. 3). On the other hand, even after 500 rotations, Zr-MG 80 wt% Cu shows a dominant Cu peak and a very weak amorphous peak (indicated with an arrow in Fig. 3). Comparing the crystalline peaks of Zr-MG 60 wt% Cu and Zr-MG 80 wt% Cu to the pure Cu sample, the Cu peak is shifted to lower angles due to partial dissolution (or formation of a supersaturated solid solution) and broadens mainly due to the small crystallite size. Furthermore, mixing Cu into the amorphous phase shifts its peak position to higher 2-theta values; the difference between Zr-MG and Zr-MG 60 wt% Cu is approximately 5°.
In order to demonstrate the possibility of exploring the alloying region for the generation of new metallic glasses, a sample of Zr-MG 20 wt% Cu 10 wt% Ni was also investigated; after 150 rotations, a fully amorphous structure is obtained.
Effect of strain on the mechanical properties
The microstructure strongly influences the mechanical properties, which were investigated in this study by hardness measurements. In Fig. 4, the Vickers hardness is plotted versus the applied strain (see Methods). Since the strain varies over about three orders of magnitude, several specimens have been used. The fitted lines are guides for the eye. The hardness of the single-phase Zr-MG scatters strongly at low applied strain, because the particles are not completely welded together in this early stage of deformation. To rule out any effect of the HPT deformation on the hardness besides consolidation, undeformed powder and a Zr-MG sample with 30 rotations were investigated via nanoindentation. Both conditions show the same hardness: 6.12 ± 0.33 GPa for the powder and 6.11 ± 0.07 GPa for the deformed specimen. The influence of the applied strain is more pronounced for the BMGCs, because not only are the particles welded together but refinement and mixing of the two phases also occur. Three compositions (with 20, 40 and 60 wt% Cu) show a similar behavior. At low strains, the hardness is lower than that of the single-phase Zr-MG but increases strongly with deformation. The slope decreases with higher deformation and the curves level off at a hardness higher than that of Zr-MG. Exceeding the hardness of both initial powders can only be explained by mixing of the two phases, which shifts the chemical composition and thus forms a new metallic glass with higher hardness. Higher contents of Cu shift the curve to the right, which means that welding, refinement, and mixing require more strain if the fraction of the softer phase is higher. This behavior of the hardness corresponds to the evolution of the microstructure, where the influence of Cu can also be clearly seen by comparing micrographs of the different compositions. Increasing the content of Cu in the amorphous phase also leads to higher hardness (compare Zr-MG 20 wt% Cu and Zr-MG 40 wt% Cu), but it requires more deformation.
Zr-MG 80 wt% Cu also shows an increase in hardness with larger applied strains, but the slope is less steep compared to the other compositions and the hardness of Zr-MG is not reached even after γ = 10⁴. The material gets harder, but the rate is so slow that full mixing (as for the other three compositions) is not practical, as days of processing would be needed to achieve sufficient deformation. Not only the shape and position of the hardness curves depend on the composition, but also the hardness at saturation. In Fig. 5, the approximated hardness of the saturated microstructure is shown as a function of composition; the highest value is reached for the composition with 40 wt% Cu (which corresponds to 32 at% Zr and 55.1 at% Cu). The hardness of Zr-MG is approximately 15% lower than that of Zr-MG 40 wt% Cu.
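The weight-to-atomic conversion behind compositions such as "Zr-MG 40 wt% Cu = 32 at% Zr, 55.1 at% Cu" can be reproduced from standard atomic weights. A short sketch (the atomic weights come from standard tables, not from this paper):

```python
# Standard atomic weights, g/mol
M = {"Zr": 91.224, "Cu": 63.546, "Al": 26.982, "Ni": 58.693, "Ti": 47.867}
# Atomic fractions of the glass Zr57Cu20Al10Ni8Ti5
MG = {"Zr": 0.57, "Cu": 0.20, "Al": 0.10, "Ni": 0.08, "Ti": 0.05}

def composite_at_frac(wt_cu):
    """Overall atomic fractions of a Zr-MG + Cu powder mixture.

    wt_cu is the weight fraction of elemental Cu powder added.
    """
    mean_mass_mg = sum(f * M[el] for el, f in MG.items())  # g per mole of glass atoms
    mol_mg = (1.0 - wt_cu) / mean_mass_mg  # moles of glass atoms per gram of mixture
    mol_cu = wt_cu / M["Cu"]               # moles of added Cu per gram of mixture
    total = mol_mg + mol_cu
    at = {el: f * mol_mg / total for el, f in MG.items()}
    at["Cu"] += mol_cu / total
    return at

at40 = composite_at_frac(0.40)
print(f"40 wt% Cu -> {100 * at40['Zr']:.1f} at% Zr, {100 * at40['Cu']:.1f} at% Cu")
# → 32.0 at% Zr, 55.1 at% Cu, matching the values quoted in the text
```

The same function gives 71.0 at% Cu for the 60 wt% Cu mixture, consistent with the Zr20Cu71Al3.6Ni2.9Ti1.8 composition cited in the Discussion.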
Discussion
The microstructure of the HPT-deformed Cu + Zr-MG mixtures and their mechanical properties depend strongly on the composition and on the applied strain. At low strains, a bulk composite is formed, in which the elongated phases start to refine and the hardness increases strongly. With increasing applied strain, continuous thinning of the lamellae is found for 20 and 40 wt% Cu, whereas the lamellae tend to break down at higher contents of Cu. For 60 and 80 wt% Cu, the length of the lamellae decreases more than their thickness. Additionally, mixing of the two phases takes place and, at very high strains, a single-phase BMG is obtained for all compositions except Zr-MG 80 wt% Cu. However, compared to the beginning, the microstructure changes more slowly as the applied strain increases, which is also reflected in the hardness curves, as all of them level off at higher degrees of deformation. The strain necessary to reach saturation increases significantly with increasing content of the softer crystalline phase (e.g., more than 20 times if the content of Cu is increased from 20 wt% to 60 wt%). This can be seen in the SEM, XRD, and hardness measurements (more Cu also shifts the hardness curves to higher strains). The position of the amorphous peak shifts to higher angles as more Cu is mixed into the MG (the peak positions of Zr-MG and Zr-MG 60 wt% Cu differ by approximately 5°), and it can also be shown how the crystalline Cu in the composites is affected. As the applied strain increases, the crystalline peak broadens due to grain refinement. The grain size becomes smaller during HPT deformation in the presence of a second phase than in a single-phase crystalline material. Mixing of the two phases takes place also for Zr-MG 80 wt% Cu. In this case, a part of the Zr-MG is dissolved in the crystalline Cu, which remains crystalline (nanocrystalline), and a supersaturated solid solution is formed.
The exact quantity is hard to obtain, because the Zr-MG contains four other elements besides Cu (if only Zr were considered, the peak shift would indicate that 0.16 at% dissolves into the crystalline Cu 46). Two effects could simultaneously prevent a full mixing of the two phases for this composition. First, the solubility of Cu in the amorphous phase is restricted, and the limit lies between 71 at% (Zr-MG 60 wt% Cu) and 85.9 at% (Zr-MG 80 wt% Cu). Second, the amorphous particles are not forced to deform as strongly as in the other compositions, because the high content of soft Cu carries almost all of the deformation. A consequence of the second effect is the possibility of further mixing of the two phases if the applied strain is pushed to higher values. However, this can only happen if the emerging amorphous phase has a higher strength and forces the initial MG to deform further. Comparing the approximated hardness at saturation, all compositions show a higher hardening than Zr-MG 80 wt% Cu; the highest value is obtained for Zr-MG 40 wt% Cu. Zr-MG 80 wt% Cu still consists considerably of a crystalline phase but already has a hardness similar to Zr-MG. Nevertheless, the composition can be continuously changed in a range from Zr57Cu20Al10Ni8Ti5 (Zr-MG) to at least Zr20Cu71Al3.6Ni2.9Ti1.8 (Zr-MG 60 wt% Cu). In addition to the compositions in this study, a fully amorphous sample of Zr-MG 20 wt% Cu 10 wt% Ni was produced. Compared to compositions from the literature, the adjustable range is large (see Fig. 6) and new BMGs with significantly different chemical compositions can be produced. Hence, HPT can be used to fathom the limits of possible BMG chemical compositions.
In summary, it is shown that BMGCs can be produced via HPT and that their microstructure and mechanical properties can be adjusted by changing the composition and the applied strain. The BMGCs can be forced to mix, and the solubility range is greatly extended, producing new BMGs. The chemical composition of these new BMGs can be adjusted over a wider range compared to conventional casting, and by adding further elements (e.g., Cu and Ni), the range of feasible BMG compositions increases drastically. Changing the composition also influences the mechanical properties, and the hardness can be increased by approximately 15%. While the generation of new BMGs is important, even more interesting is the potential for tuning the properties of metal-BMG composites by varying the second phase: it can be freely changed by choosing different materials and compositions, and its dimensions can be altered from the micrometer to the nanometer regime.
Methods
The metallic glass powder (Zr57Cu20Al10Ni8Ti5, spherical particles with diameters between 1 µm and 40 µm) was fabricated via high-pressure gas atomization, mixed with crystalline Cu powder (spherical particles with diameters between 10 µm and 40 µm) in the respective compositions, and blended by hand. The powder mixture was then filled into the gap between two grooved anvils and compacted by applying 4 GPa and a 10° rotation in the HPT. For further deformation, the pressure was increased to 8 GPa (or 9 GPa) and samples with 2 to 500 rotations were produced at room temperature. The specimens were 6 mm in diameter with a height of approximately 450-600 µm. The applied shear strain γ can be estimated as γ = 2πrN/t, where r is the radius, N is the number of rotations, and t is the thickness after deformation. As the dimensions were approximately the same for all samples, the degree of deformation is adjusted by the number of applied rotations (more turns lead to higher deformation) and by the radius at which the sample is investigated (the applied strain increases with the radial distance from the center of the disk). For SEM and hardness measurements, the coin-like HPT samples were cut in half, ground, and polished to investigate the cross sections. In the SEM, the backscatter detector was used to provide mass contrast so that the two phases are more easily distinguishable. Vickers hardness was measured along the diameter on the cross section with a load of 0.5 kg. An error of 2% is assumed for the measured hardness values, and the standard deviation errors of the fitted curve were calculated and used for the error bars in Fig. 5. For nanoindentation testing of the powder and the deformed sample, a platform nanoindenter G200 (Keysight Tec) was used and the experiments were conducted at a constant indentation strain rate (0.05 s⁻¹) to a maximum indentation depth of 500 nm.
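The shear strain estimate γ = 2πrN/t can be tabulated for the disk dimensions given above (6 mm diameter, thickness ≈ 0.5 mm after deformation); the particular radial position and rotation counts below are illustrative:

```python
import math

def shear_strain(r, N, t):
    """HPT shear strain estimate: gamma = 2 * pi * r * N / t."""
    return 2.0 * math.pi * r * N / t

t = 0.5e-3   # disk thickness after deformation, m
r = 2.0e-3   # radial position of the measurement, m (assumed)
for N in (50, 200, 500):
    print(f"N = {N:3d} rotations -> gamma = {shear_strain(r, N, t):.0f}")
# → 1257, 5027, 12566
```

These magnitudes are consistent with the strain levels discussed in the Results (γ of a few hundred up to ~19000), and they show why the strain is tuned both by the rotation count and by the radial position on the disk.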
The hardness was measured continuously over the indentation depth and was averaged over indentation depths between 400 and 450 nm. The non-deformed powder was embedded and then mechanically ground and polished like the surface of the deformed sample. Several particles were indented and, for both materials, a mean value and the standard deviation were calculated from all indentations. A 5-circle X-ray diffractometer equipped with a source for Cu-Kα radiation was used for XRD phase analysis of the specimens. Half samples were used and the surface was ground to remove any impurities from the HPT process. The investigated 2-theta range was concentrated around the first (strongest) amorphous peak.
|
2018-04-03T02:31:57.970Z
|
2017-07-27T00:00:00.000
|
{
"year": 2017,
"sha1": "5f92103c684605d5e95b5b282daab0b4329899d1",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-06424-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ba3c5b83d4a7686a0c593d1452655eacdf85f38",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
233264795
|
pes2o/s2orc
|
v3-fos-license
|
Nutritional requirements for the enhanced mycelial growth and yield performance of Trametes versicolor
As one of the most precious medicinal mushrooms, Trametes versicolor is widely used in the treatment and prevention of cancer. In an attempt to identify its nutritional requirements, this study explores the influence of carbon source (C), nitrogen source (N), and cultivation substrate on the mycelial growth and yield of T. versicolor. The optimal C and N sources for the mycelial growth of T. versicolor were fructose and yeast extract, respectively, at a C/N ratio of 3:1. T. versicolor cultivated on a substrate mixture of 62% sawdust + 30% rice husk + 3% wheat bran + 1% CaCO3 exhibited the highest biological efficiency (12.58%). The findings of this study will provide important information regarding spawn production and T. versicolor cultivation at an industrial scale. *Corresponding Author: Nghien Xuan Ngo, Department of Microbial Biotechnology, Faculty of Biotechnology, Vietnam National University of Agriculture, Hanoi 131000, Vietnam. E-mail: vanvecnshk53@gmail.com Journal of Applied Biology & Biotechnology Vol. 9(1), pp. 1-7, Jan-Feb, 2021 Available online at http://www.jabonline.in
INTRODUCTION
Trametes versicolor (synonym Coriolus versicolor), belonging to the family Polyporaceae, has been reported as one of the most popular medicinal mushrooms [1,2]. Several studies have reported that T. versicolor has the ability to produce a rich source of biologically active components such as tyrosol [3], friedelin, triterpenoids, alnusenone, α-D-glucan, and β-D-glucan [1]. Therefore, this mushroom has been widely consumed as a health supplement in the treatment and prevention of cancers and protective body functions [4]. Along with medicinal values, T. versicolor has been considered a useful white-rot fungus in environmental protection issues. The enzyme laccase that it produces, for instance, has been used for the biological pretreatment of lignocellulosic biomass [5], bioremediation [6], wastewater decontamination, pharmaceuticals, and polycyclic aromatic hydrocarbons degradation [7].
Two main factors affecting microbiological mycelial growth and fruiting body formation are nutrients and growth conditions [8]. However, at present, only a few studies have been carried out to optimize culture conditions for mycelial growth and fructification of T. versicolor. The optimal carbon and nitrogen sources and mineral salt were found to be dextrin, yeast extract and MgSO4·7H2O, respectively [1]. T. versicolor is able to grow in a wide range of pH values, between 4 and 9 [1]. The optimal temperature and humidity for fructification are 25 ± 2°C and 80-85%, respectively [9]. To date, only sawdust is known to be a suitable basal substrate for the cultivation of T. versicolor [9,10]. In general, the substrate used for mushroom cultivation is designed based on the local availability of agro-industrial wastes. Thus, for large-scale commercial cultivation in different regions, additional studies will be needed to identify various substrates for the development of T. versicolor cultivation technology. Taken together, the present study aims to ascertain the favorable culture conditions for promoting the vegetative growth and yield performance of T. versicolor.
Mushroom Strains
T. versicolor strain VT1 was kindly provided by the Mushroom Research and Development Center, Vietnam. The culture was stored in sterilized potato dextrose agar slants at 4°C under complete darkness for maintenance [11].
Carbon Sources
To determine the carbon source most favored by T. versicolor, eight different sources, including fructose, glucose, maltose, lactose, saccharose, xylose, soluble starch, and dextrin, were screened. Each carbon source was taken at 20 g/l and added into a basal medium (200 g infused potato, 15 g agar powder, and 20 g carbon source). The potato infusion was prepared according to Nguyen et al. [11]. Based on the obtained results, fructose was then supplemented into the basal medium at six concentrations (5 g/l, 10 g/l, 15 g/l, 20 g/l, 25 g/l, and 30 g/l) to optimize the carbon concentration in culture media.
DOI: 10.7324/JABB.2021.9101
Nitrogen Sources
According to the study of carbon sources and concentration, T. versicolor was inoculated in a basal medium supplemented with 20 g/l fructose. To determine the nitrogen source and requirement for mycelial growth, three inorganic nitrogen sources, namely ammonium chloride (NH4Cl), ammonium nitrate (NH4NO3), and ammonium sulfate ((NH4)2SO4), and four complex organic nitrogen sources, namely peptone, yeast powder, casein, and urea, were each tested at a concentration of 2 g/l.
Carbon/Nitrogen (C/N) Ratios
Following the study of carbon and nitrogen sources, 20 g/l fructose as a carbon source and yeast extract with individual concentrations (1, 2, 3, 4, and 5 g/l), similar to the nitrogen source, were used to determine the most favorable C/N ratio.
Substrate Preparation and Cultivation
Sawdust, cotton waste, and corncob were soaked in a lime solution (4 kg of lime per 1000 L of water), fermented for 7 days, and then allowed to rest for an additional 1-2 days until the water content of the substrates reached a 65% moisture level. One kilogram of the resulting substrate was filled into a polyethylene bag and autoclaved at 121°C for 90 min. The inoculated bags were incubated at 25°C in the spawn-running room under dark conditions and then transferred to the cropping room after complete colonization of the substrate. For fruiting body formation, the temperature and relative humidity in the cropping room were set to 25 ± 2°C and 85 ± 5%, respectively.
Data Collection and Statistical Analysis
Colony diameter, mycelial density (high, regular, or low), and texture (cottony, floccose) were recorded after 4 days of incubation. The period of spawn running (days) is defined as the time required for the mycelium to colonize the substrate completely. The period of primordia formation (days) is defined as the time from the day of inoculation to the formation of the primordia. Biological efficiency (BE, %) was calculated as the ratio of the fresh weight of the fruiting bodies (g) to the dry weight of the substrate (g), expressed as a percentage.
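The BE definition above reduces to a one-line computation; a minimal sketch with hypothetical numbers (a 1 kg wet bag at 65% moisture corresponds to about 350 g dry substrate):

```python
def biological_efficiency(fresh_fruiting_body_g, dry_substrate_g):
    """BE (%) = fresh fruiting body weight / dry substrate weight * 100."""
    return 100.0 * fresh_fruiting_body_g / dry_substrate_g

# Hypothetical harvest: 44 g of fresh fruiting bodies from a bag
# holding ~350 g of dry substrate (1 kg wet substrate at 65% moisture).
print(f"BE = {biological_efficiency(44.0, 350.0):.2f}%")
```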
The raw data were statistically analyzed using GraphPad Prism (version 8.0, GraphPad Software Inc., San Diego, CA). Significant differences among the group means were determined by a one-way ANOVA, followed by Tukey's multiple range tests (P < 0.05) and indicated with letters.
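For readers without GraphPad Prism, the F statistic underlying a one-way ANOVA can be computed with the Python standard library alone; a sketch with made-up colony diameters for three hypothetical carbon sources (a Tukey follow-up test would still require a statistics package):

```python
from statistics import mean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group vs within-group variance."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical colony diameters (mm), n = 3 per carbon source:
fructose = [32.1, 33.0, 31.8]
glucose = [29.5, 30.2, 29.9]
lactose = [21.0, 20.4, 21.6]
print(f"F = {one_way_anova_f(fructose, glucose, lactose):.1f}")
```

A large F relative to the critical value of the F distribution (at the chosen degrees of freedom) indicates that at least one group mean differs, which is exactly what the post-hoc Tukey comparisons then resolve pairwise.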
Effect of Carbon Sources on Mycelial Growth of T. versicolor
Strain VT1 was able to use all eight carbon sources [Figure 1a]. The mycelium color and texture of T. versicolor grown in all media were white and cottony. Among the eight carbon sources evaluated in this study, however, fructose and xylose showed the highest growth rates (P < 0.05). By contrast, lactose was not an efficient carbon source for enhanced mycelial growth. In addition, the mycelial density was high for fructose, glucose, and saccharose but regular for maltose, lactose, xylose, starch, and dextrin. Therefore, fructose was selected as the most suitable carbon source for further experiments.
Fructose at various concentrations (5 g/l, 10 g/l, 15 g/l, 20 g/l, 25 g/l, and 30 g/l) was tested for its effects on growth performance. As shown in Figure 1b, 15 g/l, 20 g/l, and 25 g/l fructose treatments exhibited high mycelial density, whereas the 5 g/l and 10 g/l fructose treatments showed a thin mycelial density. The highest diameter of mycelia was found in the media containing 15 g/l of fructose (32.29 ± 2.22 mm), while the lowest was detected in the 5 g/l (28.00 ± 1.0 mm). Based on the colonial diameter and mycelial phenotype, the best growth performance of T. versicolor was obtained in media containing 15-20 g/l fructose. Thus, fructose with 15 g/l was selected as the carbon concentration for further experiments.
Because it provides essential nutrients, the growth medium is considered the most important factor in mushroom production [12]. As structural and storage compounds of the cell, carbon sources play a critical role in mycelial growth [13]. Therefore, among the nutritional components of the medium, the carbon source was the first ingredient selected for optimization in the present study. In general, mushrooms can use a variety of compounds, such as monosaccharides, polysaccharides, organic acids, amino acids, alcohols, and natural products, as carbon sources [14]. Furthermore, the influence of the carbon source on growth rate strongly depends on the species, the growth conditions and the medium [15]. Jo et al. [1] showed that the efficient carbon sources for mycelial growth of T. versicolor were dextrin, fructose, and mannose. In our study, fructose, which is also economically feasible, was the most beneficial carbon source for radial growth of the mycelium and should be used as an ingredient for T. versicolor spawn production at the industrial scale.
Effect of Nitrogen Sources
With regard to nitrogen sources, the highest mycelial growth of T. versicolor was achieved with peptone (28.14 ± 2.41 mm), yeast extract (30.33 ± 2.25 mm), and casein (30.167 ± 2.32 mm) [Figure 2a]. By contrast, very weak growth (4.667 ± 1.32 mm) was observed in the urea treatment. Among the seven nitrogen sources, yeast extract showed the highest mycelial density, whereas the rest exhibited moderately thin or thin mycelial density. Based on the mycelial characteristics and growth rates, yeast extract was considered the optimal nitrogen source for luxuriant mycelial growth of T. versicolor. Nitrogen is an essential nutrient that the growth medium must provide for the synthesis of all nitrogen-containing compounds and of the chitin cell-wall components [14]. Nitrogen sources used for mycelial growth are either inorganic (nitrate salts, ammonium salts, etc.) or organic [14]. Therefore, in this study, both organic and inorganic nitrogen sources were tested to determine their effects on mycelium growth. According to the obtained results, organic and inorganic nitrogen sources differed in their ability to support mycelial growth. With the exception of urea, the organic nitrogen sources outperformed the inorganic sources, none of which benefited mycelial growth, which is consistent with the observations of Jo et al. [1]. Most mushrooms exhibit a preference for complex organic nitrogen sources. For instance, peptone and beef extract were considered useful nitrogen sources for the enhanced mycelial growth of Cordyceps sinensis [16]. Similar to C. sinensis, the mycelial growth of Volvariella esculenta [17] was promoted considerably in media containing organic rather than inorganic nitrogen sources.
Inorganic nitrogen does not enhance the growth of mycelium [18], probably because inorganic nitrogen sources cannot be used for the biosynthesis of essential amino acids [19].
Effect of Carbon/Nitrogen Ratio
Based on the results obtained for the carbon and nitrogen sources, fructose and yeast extract were used to optimize the C/N ratio. The C/N ratios 5:1 and 3:1 showed the fastest mycelial extension rates [Figure 2b], whereas the other ratios gave lower rates. In addition, the C/N ratio of 3:1 produced a thicker mycelial density than the other ratios and can be considered the optimal C/N ratio.
An ideal balance between the carbon and nitrogen sources plays a key role in the growth of mycelium. The mycelium could stop growing under nitrogen-deficiency conditions. By contrast, because of its metabolic by-products, a very high concentration of nitrogen can inhibit the growth of mycelium [20]. The optimal C/N ratio for mycelial growth may vary among species. For instance, the suitable ratios for the enhanced mycelial growth of Cystoderma amianthinum [21], Macrolepiota procera [22], Oudemansiella radicata [23], Paecilomyces fumosoroseus [24], and Ganoderma applanatum [25] were found to be 30:1, 10:1, 20:1, 40:1, and 2:10, respectively. Our results indicate that a C/N ratio of 3:1 is optimal for mycelial growth of T. versicolor, in close agreement with the study by Jo et al. [1].
Effect of Basal Substrates
For primordia formation and development, different species require different basal substrates [26]. To ascertain the most suitable treatment for the cultivation of T. versicolor, four treatments with different combinations of basal substrates were used. The duration of the growth cycle, fruiting body morphological characteristics, and BE (%) are shown in Figure 3a and c. The obtained results reveal that basal substrates significantly affected (P < 0.05) the mycelium run rate. T. versicolor showed the ability to grow on all tested treatments and completed the period of mycelial colonization between 19 days (Treatment I) and 29 days (Treatments II and IV). Of the substrate treatments, Treatment I exhibited the fastest mycelial extension rate. Primordia formation was observed on day 25 (Treatment I) and day 35 (Treatments II and IV) after inoculation. Morphologically, no significant differences in fruiting bodies were observed among the treatments used for cultivation [Figure 4]. All treatments showed two flushes during the cultivation period. Among the investigated basal substrates, Treatment III (sawdust and rice husk) showed the best yield performance with a BE value of 11.07%, followed by Treatment I (8.45%) and Treatment IV (8.43%) [Figure 3b]. T. versicolor was capable of completing the growth cycle within 86-96 days.
To successfully cultivate mushrooms, three factors must be considered: spawn, substrate, and a conducive environment [27]. The substrate provides essential nutrients for the mycelium growth and fruiting body development stages and is a critical factor in determining success in mushroom cultivation [27]. Based on the agro-industrial waste available in Vietnam, we used sawdust, rice husk, cotton waste, and corn cob to optimize the substrate for the cultivation of T. versicolor. The mycelial growth of T. versicolor was recorded in all treatments. During the colonization stage, to minimize the risk of fungal and bacterial contamination, the spawn running period should be reduced. Compared to the other treatments, Treatment I showed the highest mycelium growth rate and, therefore, completed the colonization stage earlier. T. versicolor was able to form primordia and adapt to cultivation conditions in all four treatments. To the best of our knowledge, this is the first report showing that rice husk, cotton waste, and corn cob can be used as basal substrates for the cultivation of T. versicolor. As one of the key factors in mushroom cultivation, BE is the main focus of this set of experiments. Although the highest mycelial growth was observed in Treatment I, the BE value of this substrate (8.45%) was lower than that of Treatment III (11.07%). Thus, it would seem that the growth rate of the mycelium does not correlate strongly with BE, which is in agreement with the findings of Liang et al. [28]. A combination of diverse substrates could enhance the yield performance of mushrooms because of variation in the capability of such substrates to provide nutritional and environmental requirements and differences in cellulose, hemicellulose, and lignin contents [29].
Compared to sawdust alone (Treatment I), Treatment III (62% sawdust + 30% rice husk + 7% wheat bran + 1% CaCO3) exhibited a higher BE and thus should be used as the optimal substrate mixture to cultivate T. versicolor.
Effect of Wheat Bran
The highest mycelial extension of T. versicolor was found with 9% wheat bran in the first 15 days of spawn run [Figure 5a]. Thereafter, no significant differences were observed among the treatments. The time required for the mycelium to colonize the substrate completely and for primordia formation across all the treatments was found to be 22 days and 6 days, respectively. As expected, the addition of wheat bran up to 7% to the substrates improved the yield as well as the BE in all treatments, whereas the 9% wheat bran-supplemented basal substrate did not enhance the yield performance of T. versicolor [Figure 5b]. Substrate mixtures without wheat bran exhibited the lowest BE (4.85%). Substrates supplemented with 3-7% wheat bran can be considered the most favorable substrate mixtures for fruiting body formation of T. versicolor, with satisfactory yields (12.58-11.51%).
In general, basal substrates are known to be poor in nutrients. Thus, supplements are used as co-substrates to improve the nitrogen content for cellular protein and enzyme synthesis [30]. For the cultivation of mushroom, wheat straw was used to provide a reservoir of cellulose, hemicellulose, lignin, and nitrogen, which was utilized during the period of mycelial colonization and primordia formation [31]. To reduce the growth period, improve productivity, and enhance economic efficiency, wheat bran, which has high vitamin content, was used to supplement the cultivation substrate of T. versicolor. However, high supplementation of the substrate can inhibit the growth of mushroom [32], increase contamination rate [31], and reduce the yield [33]. In addition, due to faster metabolic activities induced by extra nitrogen, supplementation has been reported to increase substrate temperatures [33].
CONCLUSION
This study presented the influence of nutrients on the mycelial growth and yield performance of T. versicolor. Mycelial growth was found to be enhanced by using fructose as the carbon source and yeast extract as the nitrogen source. Along with sawdust, other agro-industrial wastes such as rice husk, cotton waste, and corn cob can be used as basal substrates for the cultivation of T. versicolor. The highest yield of T. versicolor was obtained when cultivated on a substrate mixture of 62% sawdust + 30% rice husk + 3% wheat bran + 1% CaCO3.
AUTHOR CONTRIBUTIONS
All authors made substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; took part in drafting the article or revising it critically for important intellectual content; agreed to submit to the current journal; gave final approval of the version to be published; and agree to be accountable for all aspects of the work.
|
2021-04-16T17:24:28.925Z
|
2021-01-17T00:00:00.000
|
{
"year": 2021,
"sha1": "ecb4789e52c9bbc6b773c021a457b9f55292c1ed",
"oa_license": null,
"oa_url": "https://doi.org/10.7324/jabb.2021.9101",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ecb4789e52c9bbc6b773c021a457b9f55292c1ed",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
}
|
257032839
|
pes2o/s2orc
|
v3-fos-license
|
PEGylated lipid nanocarrier for enhancing photodynamic therapy of skin carcinoma using curcumin: in-vitro/in-vivo studies and histopathological examination
The use of (PEG)-grafted materials has a positive impact on drug delivery. In this study, we designed PEGylated lipid nanocarriers (PLN) loaded with curcumin (Cur) to target skin cancer by photodynamic therapy. Cur is a polyphenolic compound with vast biological effects that are masked by its low aqueous solubility. PLN were prepared using Tefose 1500 with different surfactants. PLN3, containing Tween 80, had the smallest particle size (167.60 ± 15.12 nm), Z = −26.91 mV, and attained the highest drug release (Q24 = 75.02 ± 4.61% and Q48 = 98.25 ± 6.89%). TEM showed spherical, well-separated nanoparticles. The dark- and photo-cytotoxicity study on a human skin cancer cell line (A431) revealed that, at all tested concentrations, the viability of cells treated with PLN3 was significantly lower than that of cells treated with Cur suspension, and it decreased further upon irradiation by blue light (410 nm). The amount of Cur extracted from the skin of mice treated with PLN3 was twice that of mice treated with the aqueous drug suspension; this was confirmed by the increase in fluorescence intensity measured by confocal laser microscopy. Histopathological studies showed that PLN3 could extend the Cur effect to deeper skin layers, especially after irradiation. This study highlights the possible efficacy of curcumin-loaded PEGylated lipidic nanoparticles to combat skin cancer by photodynamic therapy.
Photodynamic therapy (PDT) is a modality of cancer treatment that relies on a photochemical reaction between a compound called a photosensitizer and a light source of a specific wavelength in the presence of oxygen. Photosensitizers are typically nontoxic; however, they are excited upon exposure to light of a specific wavelength, resulting in the production of either free radicals (type I reaction) or reactive oxygen species (ROS) (type II reaction), causing massive destruction of the tumor tissues 1 . The effect of the generated ROS is mainly confined to the tissues in the vicinity of the irradiation site. Consequently, if the tumor tissues are exclusively exposed to light, the distant healthy tissues can be kept unaffected 2 .
PDT has many advantages compared to chemotherapy and radiotherapy due to low invasiveness and systemic toxicity, decreased drug resistance, and high selectivity 3 . Consequently, PDT has been studied as a promising modality for the treatment of different types of skin cancers 4 . The success of PDT depends on the adequate choice of the photosensitizer and the corresponding light source. Light sources with short wavelengths, such as blue light (400-450 nm), have a short depth of penetration, so they can be used only for superficial skin lesions. Blue light has been used successfully in treating some types of skin cancer, such as basal cell carcinoma 5 and melanoma 6 . Topical application of the photosensitizer on the desired site of action can avoid the severe cutaneous photosensitivity caused by systemic use 7 .
The use of an effective and safe photosensitizer originating from a natural source, and the enhancement of its dermal delivery can contribute in the success of the photodynamic therapy.
Curcumin (Cur) is a polyphenolic compound derived from turmeric root with vast biological effects, comprising photocytotoxicity upon excitation by blue light 8,9 . Unfortunately, these effects are limited by its low aqueous solubility and low stability. The selection of a suitable nanocarrier for Cur is essential to overcome these limitations. Lipid nanocarriers have attracted many researchers for enhancing the dermal application of plant-derived drugs with low aqueous solubility [10][11][12] because they increase drug chemical stability, solubilization, topical film formation, skin hydration and occlusion, skin penetration and nanoparticle physical stability. In addition, some recent studies have proved the capability of these nanocarriers to be deposited within the skin layers 10,11 ; this can provide a beneficial advantage in the current study, allowing for skin targeting and a localized cytotoxic effect.
Scientific Reports | (2020) 10:10435 | https://doi.org/10.1038/s41598-020-67349-z
On the other hand, poly(ethylene glycol) (PEG)-grafted biomaterials are of great importance; this technology has had a positive impact on drug delivery 13 . Based on this strategy, this study was oriented toward the use of the PEGylated Tefose 1500 (mixed PEG-6 stearate and PEG-32 stearate) as the lipid component to prepare lipid-based nanoparticles, as this is expected to increase skin deposition and enhance the localized photo-cytotoxicity of Cur, allowing skin cancer targeting.
Solid lipid nanoparticles with various lipid components have been studied as nanocarriers for Cur in previous studies 14,15 . Another recent study performed surface modification to improve Cur bioavailability 16 .
However, the novelty of this study lies in the nanoencapsulation of Cur using a PEGylated lipid-based nanosystem for skin targeting; different surfactants were investigated. The designed Cur-loaded PEGylated lipid nanocarrier (PLN) was evaluated and examined for dark- and photo-cytotoxicity against a human epidermoid squamous cell carcinoma cell line (A431). In-vivo skin deposition and histopathological studies were also included in the study.
Methods
Preparation of the Cur-loaded PEGylated lipid nanocarriers (PLN)
A previously described method was followed to prepare the Cur-loaded PEGylated lipid nanocarriers (PLN), where a coarse emulsion was produced followed by probe sonication using a Sonifier Model 250 (Branson Ultrasonics, USA) 10,11,17 . Briefly, the drug (10 mg/g) together with the lipophilic component (Tefose 1500) were mixed to form the lipid phase, which was melted at 50 °C. The aqueous phase was heated to the same temperature and then added to the oily phase to form a coarse emulsion, which was subjected to probe sonication at 20 W for 90 s. As listed in Table 1, different surfactants were used, namely: Span 85 (S85), Span 20 (S20), Tween 80 (T80) and Tween 20 (T20). The final preparation was stored in an amber glass vial at room temperature until use.
Characterization and evaluation of the Cur-loaded PEGylated lipid nanocarriers (PLN). Encapsulation efficiency. The un-entrapped drug was separated from the entrapped drug by centrifugation for 30 min at 10,000 rpm at 8 °C (Centrikon T-42K, Kontron Instruments, UK). The precipitated loaded nanoparticles were then dissolved in ethanol and the Cur concentration was measured spectrophotometrically at 420 nm with a UV-VIS double-beam spectrophotometer (Rayleigh UV-2601) 18 . The encapsulation efficiency (EE) was calculated as a percentage of the initially added drug amount.
Table 1. Composition, HLB, particle size analysis (PS, PDI & Z) and in-vitro release of the Cur-loaded PEGylated lipid nanocarriers (PLN). All values present the mean ± SD (n = 3). Hydrophilic lipophilic balance (HLB)/average particle size (PS)/polydispersity index (PDI)/zeta potential (Z)/encapsulation efficiency (EE)/cumulative amount of drug release % (Q) after 24 h or 48 h. Span 85 (S85)/Span 20 (S20)/Tween 80 (T80)/Tween 20 (T20).
Particle size analysis and zeta potential. Mean particle size, size distribution and zeta potential measurements were performed using the Malvern Zetasizer Nano ZS (Malvern Instruments Ltd., Malvern, UK) by photon correlation spectroscopy (PCS). Before measurement, samples were appropriately diluted with distilled water.
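The EE computation described above is a simple ratio once the entrapped amount is known from the 420 nm reading; a hedged sketch, assuming Beer-Lambert linearity with a hypothetical calibration slope and an illustrative recovered amount:

```python
def drug_conc_from_absorbance(absorbance, slope):
    """Linear calibration: concentration = absorbance / slope (slope is
    hypothetical here; in practice it comes from a standard curve)."""
    return absorbance / slope

def encapsulation_efficiency(entrapped_mg, initial_mg):
    """EE (%) = entrapped drug amount / initially added drug amount * 100."""
    return 100.0 * entrapped_mg / initial_mg

# Illustrative: 3.8 mg Cur recovered in the nanoparticle pellet
# out of 10 mg initially added to the formulation.
print(f"EE = {encapsulation_efficiency(3.8, 10.0):.1f}%")
```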
In-vitro drug release study. Samples (100 mg) of each formula were accurately weighed and placed in a dialysis membrane (molecular weight cut-off 12,000-14,000). To allow sink conditions, the dialysis membrane was immersed in 50 ml PBS buffer (pH 7.4) containing 10% ethanol as a receptor medium and kept under stirring (100 rpm) at 37 °C. Aliquots of 1 ml were withdrawn at different time intervals up to 48 h and replaced by fresh medium. The concentration of Cur in the withdrawn samples was measured spectrophotometrically at 420 nm.
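Because each 1 ml aliquot removes drug that must still be counted toward later time points, cumulative release is usually corrected for sampling losses; a minimal sketch of that bookkeeping under the 50 ml receptor volume stated above (the concentration values are made up):

```python
def cumulative_release(concs_ug_ml, v_receptor_ml=50.0, v_sample_ml=1.0):
    """Cumulative drug released (ug) at each time point, correcting for
    the drug carried out of the receptor in earlier aliquots."""
    released = []
    removed = 0.0  # total drug removed in previous samples
    for c in concs_ug_ml:
        amount = c * v_receptor_ml + removed
        released.append(amount)
        removed += c * v_sample_ml
    return released

# Illustrative concentrations measured at successive sampling times:
print(cumulative_release([2.0, 5.0, 9.0]))
```

Dividing each cumulative amount by the total drug loaded would give the Q (%) values of the kind reported in Table 1.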
Characterization of the selected Cur-loaded PEGylated lipid nanocarriers (PLN3). Differential scanning calorimetry. Differential scanning calorimetry (DSC) analysis was carried out using a DSC 60 (Shimadzu, Japan). The free drug, the drug-loaded nanoparticles and the individual components of the nanoparticles were placed in aluminium pans. The temperature was increased from 25 to 200 °C at a rate of 10 °C/min under a nitrogen atmosphere.
Transmission electron microscopy (TEM). A diluted colloidal suspension of the sample was spread on a carbon-coated copper grid without staining prior to examination by transmission electron microscope (JEM 100S, Jeol, Ltd., Tokyo, Japan).
In-vitro cytotoxicity. Cells suspended in fresh medium were seeded at a concentration of 1 × 10 4 cells/well in 96-well microtiter plastic plates and incubated for 24 h until complete attachment. Afterwards, the cells were incubated in the dark for 24 h either in fresh medium alone (negative control) or with different concentrations (20, 10, 5, 2.5 and 1 μg/ml) of the tested samples: free Cur suspension (Cur) and the selected Cur-loaded PEGylated lipid nanocarriers (PLN3). To investigate the PDT effect on cytotoxicity, some of the cells were irradiated after incubation with the tested samples at a fluence of 300 mW/cm 2 for 4 min by blue light delivered from a noncoherent 200 W halogen lamp (Photon scientific, Cairo, Egypt) mounted in a fan-cooled housing. The emitted light passed through a glass filter containing circulating water to remove ultraviolet and infrared radiation, and then through a band-pass filter (Rosco Laboratory, Ltd., Stamford) with a transmission spectral range of 320-540 nm and maximum transmission at 410 nm 18 . Finally, the dark toxicity (after incubation with the tested samples) and the phototoxicity (24 h post irradiation) were assessed by the MTT assay. Briefly, the medium was washed out, 40 μl of MTT solution (2.5 μg/ml) were added to each well and incubated for a further 4 h at 37 °C under 5% CO 2 . To dissolve the formed crystals, 200 μl of 10% sodium dodecyl sulfate (SDS) in deionized water was added to each well and incubated overnight at 37 °C. The absorbance was then measured using a microplate multi-well reader (Bio-Rad Laboratories Inc., model 3350, Hercules, California, USA) at 595 nm and a reference wavelength of 620 nm.
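Viability from an MTT read-out is conventionally the background-corrected absorbance of treated wells relative to untreated controls; a hedged sketch mirroring the 595/620 nm dual-wavelength reading above (the absorbance values are made up, and this normalization is the common convention rather than a calculation stated in the text):

```python
def viability_percent(a595_treated, a620_treated, a595_control, a620_control):
    """% viability = corrected treated absorbance / corrected control * 100.

    The 620 nm reference reading is subtracted to remove background
    (turbidity, plate artifacts) from the 595 nm formazan signal.
    """
    treated = a595_treated - a620_treated
    control = a595_control - a620_control
    return 100.0 * treated / control

# Hypothetical readings: treated wells formed fewer formazan crystals.
print(f"{viability_percent(0.42, 0.05, 0.95, 0.05):.1f}% viable")
```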
In vivo studies. Animals. Male albino mice (Mus musculus, 23 ± 2 g, 7 weeks old) were supplied by the animal house of the National Research Centre, Egypt, and were kept there for 24 h in clean, well-ventilated plastic cages under standard laboratory conditions with free access to food and water. The study protocol was approved by the Cairo University Institutional Animal Care and Use Committee (CUIACUC), approval number (CU-I-F-31-19). All experimental procedures were performed in accordance with the relevant guidelines and regulations set by the institutional committee.
Experiment. The hair on the dorsal skin was removed with a razor. Afterwards, the samples (100 mg of PLN3 or Cur suspension containing an equivalent amount of the drug) were topically applied to a 1 cm 2 area of the shaved dorsal skin. The animals were randomly divided into groups of six animals each, as follows:
Control negative: animals received no treatment and served as control.
Control-light: animals were irradiated but received no treatment.
Group A: animals were topically treated with Cur aqueous suspension.
Group B: animals were topically treated with Cur suspension, followed by irradiation after 1 h.
Group C: animals were topically treated with the selected Cur-loaded PEGylated lipid nanocarrier (PLN3).
Group D: animals were topically treated with PLN3, followed by irradiation after 1 h.
The irradiated groups were exposed to blue light delivered by a light-emitting diode (LED, 420 nm) for 10 min at a fluence rate of 90 mW/cm² (Photon Scientific, Cairo, Egypt). The fluence rate was adjusted using a power meter (Gentec Solo PE, Canada). During irradiation, the animal behavior was monitored, and animals showing any signs of pain or distress were given very low doses of isoflurane inhalation.
Animals were kept separately in clean plastic cages (one animal per cage) for 24 h. Afterwards, the animals were euthanized by cervical dislocation under anesthesia and the treated skin areas were excised with scissors.
Skin deposition. The excised dorsal skin samples were accurately weighed, cut into small pieces and homogenized in ethanol to extract the Cur from the skin. The extract was centrifuged at 5,000 rpm for 10 min, and the Cur content of the supernatant was measured spectrophotometrically at 420 nm as mentioned above.

Statistical analysis. Data were analyzed using one-way analysis of variance, followed by the least significant difference procedure, using SPSS software (SPSS, Inc., Chicago, Illinois, USA). Statistical differences yielding p < 0.05 were considered significant.
Results and discussion
Characterization and evaluation of the prepared formulae. Drug encapsulation efficiency (EE). Table 1 shows the drug encapsulation efficiency results. The surfactant used in the preparation did not affect this parameter; all the PLN had EE values ranging from 35.51 to 40.02%, with no significant difference between them (p > 0.05).

Particle size analysis. As listed in Table 1, the PLN prepared using the Tweens (PLN3 and PLN4) had a smaller particle size than those prepared using the Spans (PLN1 and PLN2) (p < 0.05). This may be due to the higher HLB of the former, as the HLB value has been reported to influence the formation and properties of lipid-based nanoparticles 19,20 . A previous study reported that surfactants of lower lipophilicity formed nanovesicles with smaller sizes 20 . Table 1 also lists the PDI values; Spans-containing PLN had higher PDI values than Tweens-containing PLN. Low PDI values (< 0.5) are recommended as an indication of uniform distribution and low aggregation possibility.
The zeta potential values are also listed in Table 1: − 35.90, − 31.71, − 26.91 and − 17.92 mV for PLN1, PLN2, PLN3 and PLN4, respectively. The reported range is considered high enough to ensure satisfactory physical stability with a low aggregation tendency, owing to electrostatic repulsion between the particles 21,22 . The use of PEGylated lipids can help to enhance stability and prevent agglomeration of the prepared nanoparticles through the formation of polymer-coated nanoparticles advantageous for drug delivery 13,23 .

In-vitro drug release study. The curcumin release patterns from the prepared PLN are shown in Fig. 1 and the release data are listed in Table 1. Tweens-containing PLN attained faster release than Spans-containing PLN (p < 0.05); this agrees with the smaller particle sizes of the former discussed above, in addition to their higher hydrophilicity, which allows a more rapid release. PLN3 had the fastest drug release profile (p < 0.05) and showed complete drug release at 48 h. The cumulative amount of drug released was 75.02 ± 4.61% and 98.25 ± 6.89% at 24 and 48 h, respectively. Based on the above studies, PLN3, which had the smallest particle size with uniform distribution and the fastest drug release pattern, was selected for further investigations.
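Cumulative release percentages such as those reported above are typically computed with a correction for the drug withdrawn in earlier sampling aliquots when the medium is replaced. The exact sampling protocol is not given in this excerpt, so the sketch below is a generic illustration with invented volumes, concentrations and dose:

```python
# Generic cumulative-release calculation with replacement correction:
# drug removed in all previous aliquots is added back before normalizing to the dose.

def cumulative_release_percent(concs_ug_ml, medium_volume_ml, sample_volume_ml, dose_ug):
    """Cumulative % drug released at each sampling time."""
    released = []
    removed_ug = 0.0  # drug withdrawn in all previous aliquots
    for c in concs_ug_ml:
        amount_ug = c * medium_volume_ml + removed_ug
        released.append(100.0 * amount_ug / dose_ug)
        removed_ug += c * sample_volume_ml
    return released

# Invented example: three sampling points, 50 mL medium, 2 mL aliquots, 200 ug dose.
rel = cumulative_release_percent([1.0, 2.0, 3.0], medium_volume_ml=50.0,
                                 sample_volume_ml=2.0, dose_ug=200.0)
```

Without the `removed_ug` term the later time points would underestimate the cumulative release, since part of the drug has already been taken out of the vessel.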
Transmission electron microscopy. The photograph of the selected PLN (PLN3) is shown in Fig. 2. The nanoparticles are spherical and well separated, with a particle size range consistent with that recorded in the particle size analysis. The core-shell nanostructural architecture can be observed in the photo; this agrees with a previous study pointing out the presence of a solid core surrounded by a PEG coating in this type of nanocarrier 13 .
Differential scanning calorimetry. The thermograms of the selected preparation (PLN3) and its individual components are illustrated in Fig. 3. The used surfactant (T80) did not show transition endothermic peaks within the tested temperature range (from 25 to 200 °C) while the drug (Cur) and the lipid (T1500) showed endothermic peaks corresponding to their melting points at 171.23 °C and 50.86 °C, respectively. The mentioned characteristic peaks were not observed in the lipid nanoparticles thermogram, which can indicate the absence of the crystalline state of the drug and its entrapment within the lipid core of the formulated lipid-based nanoparticles.
In-vitro cytotoxicity. The dark- and photo-cytotoxicity of free Cur suspension and PLN3 were assessed on a human epidermoid squamous cell carcinoma cell line (A431). Visual inspection of the cells under an inverted microscope confirmed the efficacy of the tested samples; the cells of all groups treated with the highest drug dose (equivalent to 20 μg/ml) were detached and dead (Fig. 4).
MTT assay results (Fig. 5) revealed that the cytotoxicity of Cur and PLN3 was concentration dependent. The higher efficacy of the designed Cur-loaded nanoparticles (PLN3) compared to Cur suspension was evident at all studied concentrations under both dark and light conditions. The viability of cells treated with PLN3 was significantly lower (p < 0.05) than that of cells treated with free Cur suspension. Moreover, the viability of cells treated with PLN3 at all concentrations was significantly decreased upon light irradiation (p < 0.05).
These results suggest that Cur can induce cytotoxicity, which is enhanced upon loading into a suitable nanocarrier and further increased upon irradiation. Nanocarriers can enhance Cur cytotoxicity through drug solubilization, increased surface area, and enhanced permeability and cellular uptake. Upon exposure to blue light (430 nm), the cytotoxicity was greatly enhanced owing to the generation of reactive oxygen species (ROS) that damage cellular organelles and disrupt mitochondrial membrane integrity, leading to apoptosis 14,18,24 . After irradiation, curcumin and demethoxycurcumin were reported to induce apoptosis through mitochondrial pathways; this effect was studied in detail on a keratinocyte cell line (HaCaT) and squamous cell carcinoma (A431) 4,25 .
In this study, we designed PEGylated lipid nanoparticles to be explored as a Cur carrier for improved skin targeting; the designed nanosystems were found to be superior to Cur suspension in inducing cytotoxicity. Moreover, the above-mentioned studies used UVB as a light source to excite Cur. Instead, in this study we used a cheaper, safer and readily manufactured visible light source emitting blue light; the obtained results proved that it induced the photodynamic effect efficiently. This light source was previously used to induce the phototoxicity of Cur on HepG2 cells 18 .

In vivo studies. Skin deposition. The amount of Cur extracted from the skin of groups treated with PLN3 was approximately twice that of groups treated with aqueous Cur suspension (Table 2). This significant difference (p < 0.05) could be attributed to the unique features of the prepared PEGylated lipid nanoparticles, which promote penetration and accumulation of the drug in the skin layers. These results confirm the expectation that motivated this study, as Yuan et al. reported that PEGylated solid lipid nanoparticles enhanced oral bioavailability 26 . Furthermore, it has been shown that association of PEG with lipid nanoparticles prevented their aggregation and decreased enzymatic degradation in gastrointestinal fluids; it was hypothesized that PEG-stearate formed a stabilizing/enzyme-repellent coating on the lipid nanoparticles 27 . Another study showed that polymer-coated nanoparticles exhibited higher drug retention and lower clearance at the site of administration compared to uncoated nanoparticles 23 . Our results provide an additional beneficial effect of PEGylated lipid nanocarriers in dermal delivery, as they improved the skin penetration and deposition of Cur. However, light did not affect the deposition of Cur in the skin layers.
Confocal laser scanning microscopy. Confocal microscopy was used to visualize the fluorescence of the delivered Cur across the skin. Cur is a fluorescent molecule that emits fluorescence upon excitation by blue light; however, at this excitation wavelength the untreated mouse skin produced auto-fluorescence due to the presence of endogenous fluorophores such as elastin and collagen 28 . Therefore, the confocal images (Fig. 6) obtained for skin sections treated with Cur suspension and PLN3 were compared, in terms of fluorescence intensity, to those of untreated skin from the control group. The fluorescence intensity was increased by about 260% and 480% in the case of Cur suspension and PLN3, respectively. The enhanced fluorescence intensity indicates that encapsulating Cur in the suggested PEGylated lipid nanocarrier improved its penetration into and accumulation in the skin. These results are in accordance with all the above-mentioned findings regarding the better tissue penetration and deposition of the designed carrier.
Histopathological findings. Histopathological examination recorded the changes in the structure of different skin layers in different groups as illustrated in Fig. 7.
No histological changes were observed in all skin layers of the negative control and the irradiated control groups (Fig. 7a,b respectively).
In group A, treated with aqueous Cur suspension, the epidermis and the underlying superficial layers of the dermis exhibited areas of focal ulceration and necrosis, due to the interaction between Cur and the skin surface. The areas of epidermal ulceration and necrosis were wider and more obvious in group C, treated with PLN3, because the PEGylated lipid nanoparticles delivered and deposited a higher amount of the drug into the skin, as revealed by the above-mentioned cytotoxicity and skin deposition results. Moreover, the interaction between the lipids of the formula and the skin lipids may disrupt the lamellar arrangement of the epidermis 29 . The effect on the epidermal layer was exaggerated after irradiation with blue light, as indicated by the acanthosis (increased skin thickness) of the epidermal layer noticed in the irradiated groups (groups B and D), as shown in Fig. 7. The subcutaneous tissues and musculature showed focal inflammatory cell infiltration in all groups. However, in the groups treated with PLN3 in the presence or absence of light (groups C and D), the inflammatory cell infiltration was massive and associated with aggregation (Fig. 7, Group Dii).
From these findings we can conclude that the prepared PLN3 increased the penetration and deposition of Cur into the skin layers, intensifying and extending its effect to reach deeper skin layers, in addition to its stabilizing effect. Consequently, it potentiates the efficacy of Cur as a photosensitizer in photodynamic therapy of skin cancer.
Conclusion
This study demonstrates the positive impact of using PEG-grafted pharmaceutical ingredients. The results show the feasibility of targeted and enhanced photodynamic therapy of skin carcinoma using Cur-loaded PEGylated lipid nanoparticles. This sheds light on a promising, safe, economical and effective approach to fighting cancer.
|
2023-02-20T15:01:58.233Z
|
2020-06-26T00:00:00.000
|
{
"year": 2020,
"sha1": "56c20674cb6ed8c9cd98ef23749c951e16658074",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-67349-z.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "56c20674cb6ed8c9cd98ef23749c951e16658074",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
}
|
38327853
|
pes2o/s2orc
|
v3-fos-license
|
Influence of Quercetin on Diabetes-Induced Alteration in CYP3A Activity and Bioavailability of Pioglitazone in Rats
Problem statement: Quercetin, a common bioflavonoid, is present in herbal preparations consumed by diabetic patients along with routine anti-diabetic agents. We recently showed that quercetin increases the bioavailability of pioglitazone in non-diabetic rats. Thus, the present study investigated whether this pharmacokinetic interaction is also evident in diabetic animals, especially because diabetic subjects have altered gastrointestinal (GI) function and CYP3A activity. Approach: The study was carried out in alloxan-induced (40 mg kg−1, i.v.) diabetic rats. After 2 weeks of diabetes induction, rats were treated for 2 weeks with quercetin (10 mg kg−1, p.o.) or vehicle (5% methyl cellulose, 10 mL kg−1). At the end of 4 weeks, these rats were used to investigate: (1) GI function in terms of gastric emptying and intestinal transit of a semisolid barium sulphate meal; (2) CYP3A activity in liver and intestinal microsomes by erythromycin N-demethylase assay; (3) plasma levels of orally and intravenously administered pioglitazone (10 mg kg−1, p.o.; 5 mg kg−1, i.v.). Results: The results revealed that diabetic rats exhibited: (1) delayed gastric emptying and intestinal transit; (2) decreased CYP3A activity; and (3) a significant increase in the oral and intravenous AUC 0-∞ of pioglitazone as compared to non-diabetic rats. Quercetin treatment prevented the diabetes-induced GI dysfunction, whereas the diabetes-induced decrease in CYP3A activity and increased bioavailability of pioglitazone remained unaffected. Conclusion: The results suggested that quercetin attenuated GI dysfunction but did not affect the bioavailability of pioglitazone in diabetic rats, contrary to the increase reported in non-diabetic rats. However, the safety of conjoint use of quercetin-containing herbs and pioglitazone in clinical practice requires further pharmacokinetic substantiation.
INTRODUCTION
Several herbal preparations are often used by diabetic patients along with routine antidiabetic agents with the intention of producing better glycemic control and reducing diabetic complications [1,2] . Such herbal preparations often contain bioflavonoids, which are antioxidant in nature and thus help to eliminate hyperglycemia-induced oxidative stress [1,3] . Some bioflavonoids are reported to inhibit the activity of CYP3A, a class of CYP450 isoenzymes [4][5][6] , and incidentally this class of enzymes is responsible for metabolizing pioglitazone and nateglinide, two often-used oral antidiabetic agents [7,8] .
Recently, we showed that quercetin, a bioflavonoid, decreases the metabolism of pioglitazone by inhibiting CYP3A activity in non-diabetic rats and increases its bioavailability [9] . However, the CYP3A content and activity in liver and intestinal tissues are low in the diabetic condition [10] . Moreover, type 2 diabetes mellitus is associated with polymorphism of CYP3A4 [11] . In addition, diabetic subjects often exhibit a variety of gastrointestinal (GI) dysfunctions, such as delayed gastric emptying and intestinal transit [12][13][14] . Under these circumstances, the drug-drug interaction between quercetin and pioglitazone evident in the non-diabetic condition cannot be directly extrapolated to the diabetic condition.
Hence, the present study investigated: (1) the functional status of GIT; (2) the CYP3A activity in intestinal and liver tissues and (3) the pharmacokinetics of pioglitazone in alloxan-induced diabetic rats treated with vehicle or quercetin.
MATERIALS AND METHODS
Animals: Subjects were young, healthy adult male Sprague-Dawley rats (250-270 g), purchased from the National Institute of Nutrition, Hyderabad (AP), India. Rats were housed in groups of 3-4 per cage in opaque polypropylene cages and maintained at 25±2°C with a 12 h light/dark cycle.

Experimental design and animal model: Rats were divided randomly into three body weight-matched groups (n = 12). One group was the non-diabetic control group and was administered saline (1 mL kg−1, i.v.); diabetes was induced in the others by administering alloxan (40 mg kg−1, i.v.) dissolved in cold saline (1 mL kg−1). Induction of diabetes was checked 48 h after injection of alloxan by glucometer. Two weeks after the induction of diabetes, diabetic rats were treated for a further two weeks orally with vehicle (5% methylcellulose, 10 mL kg−1; diabetic control group) or with quercetin at a daily dose of 10 mg kg−1 (quercetin-treated group). At the end of 4 weeks, 3-4 animals from each group were used for oral and intravenous pharmacokinetic studies and estimation of hepatic and intestinal CYP3A activity. The animals used for the oral pharmacokinetic studies were later employed for GI function studies. The study design is represented schematically in Fig. 1.
The dose of alloxan was selected on the basis of a previous report [15] . The dose of quercetin was selected from previous reports and our preliminary studies such that treatment of diabetic rats for four weeks did not cause reversion of diabetes [16,17] . The duration of diabetes required for precipitation of gastroparesis was selected on the basis of a previous report [18] and our previous observations (unpublished data).
Estimation of blood glucose levels: Blood glucose levels were measured by GOD-POD method using commercially available glucometer (Ascensia Entrust, Bayer Healthcare LLC, USA) and expressed in mg dL −1 . The rats showing fasting blood glucose levels more than 250 mg dL −1 after two days of alloxan administration were retained in the diabetic group.
Assessment of CYP3A activity in liver and intestinal microsomes:
The influence of the various treatments on CYP3A activity in liver and intestinal microsomes was studied using the erythromycin N-demethylase assay. In brief, at the end of four weeks, overnight-fasted rats were euthanized by pentobarbitone overdose and the livers were perfused with 0.1 M phosphate-buffered saline (PBS) and isolated. In addition, a piece of intestine (~25 cm) was isolated and cleaned with PBS. The intestinal and liver microsomes were prepared by methods reported by us earlier [9] .
The mixture of microsomal suspension (0.1 mL, 25%), erythromycin (0.1 mL, 10 mM) and potassium phosphate (0.6 mL, 100 mM, pH 7.4) was incubated at 37°C. The reaction was initiated by adding NADPH (0.1 mL, 10 mM) and terminated after 10 min by adding ice-cold trichloroacetic acid solution (0.5 mL, 12.5% w/v). The mixture was centrifuged (2000×g) to remove proteins. 1.0 mL of NASH reagent (2 M ammonium acetate, 0.05 M glacial acetic acid, 0.02 M acetylacetone) was mixed with 1.0 mL of the supernatant and heated in a water bath at 50°C for 30 min. After cooling, the absorbance was measured at 412 nm. The activity was calculated from standards (1-100 µM formaldehyde), prepared by substituting the sample with standard solution, which were run in parallel. All samples were run in duplicate. The CYP3A activity was expressed as nM of formaldehyde obtained per milligram of protein per min.
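The conversion from absorbance to activity described above can be sketched as a simple calibration step. This is an illustration, not the authors' calculation: the standard-curve values, incubation volume and protein amount below are invented, and the volume normalization is an assumption since the paper reports activity only as "nM formaldehyde per mg protein per min".

```python
# Sketch: fit the formaldehyde standard curve, then convert a sample absorbance
# into formaldehyde formed per mg microsomal protein per minute.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for the calibration curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def cyp3a_activity(absorbance, slope, intercept, protein_mg,
                   volume_ml=1.0, minutes=10.0):
    """Formaldehyde formed (nmol) per mg protein per minute.

    volume_ml is the assumed incubation volume (illustrative default)."""
    hcho_um = (absorbance - intercept) / slope  # concentration from the curve (uM)
    nmol = hcho_um * volume_ml                  # uM is nmol per mL
    return nmol / (protein_mg * minutes)

# Invented standards obeying A = 0.005 * [HCHO, uM] + 0.01
std_conc = [1.0, 10.0, 25.0, 50.0, 100.0]
std_abs = [0.005 * c + 0.01 for c in std_conc]
m, b = fit_line(std_conc, std_abs)
activity = cyp3a_activity(0.26, m, b, protein_mg=2.0)
```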
Assessment of gastric emptying and intestinal transit:
Gastric emptying of a non-nutrient semi-solid meal was assessed by a previously reported method [19] . In brief, at the end of four weeks and after overnight fasting, a suspension of 7.5 g barium sulphate in 10 mL water was given orally to rats in a volume of 2 mL/100 g body weight. After 30 min, rats were euthanized by pentobarbitone overdose. The abdomen was cut open and the esophageal and pyloric ends of the stomach were clamped with a string to prevent any leakage of its residual content. The stomach was weighed along with its contents and then incised, the contents removed and the stomach weighed again. The gastric barium sulphate content was calculated from the weight difference between the filled and empty stomach, and the amount of BaSO 4 suspension administered was determined by weighing the dose (syringe) before administration. The percent gastric emptying rate was calculated by the following formula:

% GE = [(W1 − W2)/W1] × 100

Where:
% GE = Percent gastric emptying
W1 = Weight of BaSO 4 suspension administered
W2 = Weight of stomach before washing − weight of stomach after washing

Intestinal transit was determined as the percentage of movement of barium sulphate in the intestine relative to the whole length of the gut, as determined by visual inspection of the test meal passage in the intestine.
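The gastric-emptying and intestinal-transit percentages described above reduce to two one-line calculations; a minimal sketch, with invented example weights and distances:

```python
# %GE is the fraction of the administered BaSO4 meal that has left the stomach;
# intestinal transit is the barium front position as a fraction of gut length.

def percent_gastric_emptying(w1_administered_g, w2_retained_g):
    """%GE from the weight of meal given (W1) and the residual recovered (W2)."""
    return 100.0 * (w1_administered_g - w2_retained_g) / w1_administered_g

def percent_intestinal_transit(front_distance_cm, gut_length_cm):
    """Distance travelled by the barium front as a percentage of gut length."""
    return 100.0 * front_distance_cm / gut_length_cm

ge = percent_gastric_emptying(5.0, 1.5)        # invented weights (g)
transit = percent_intestinal_transit(60.0, 100.0)  # invented distances (cm)
```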
Pharmacokinetic and bioavailability studies in rats:
At the end of four weeks, rats were fasted overnight and were fed 4 h after dosing pioglitazone orally. For intravenous studies animals were not fasted. The animals had free access to water during all these studies. Each rat was given either a single intravenous dose (5 mg kg −1 ) or oral dose (10 mg kg −1 ) of pioglitazone (n = 3-4 each). The intravenous doses were administered as a bolus via the saphenous vein and the oral doses by oral gavage. Pioglitazone was administered as a solution in a vehicle composed of 0.1 M citric acid in dose volume of 10 mL kg −1 for oral administration and in (30% propylene glycol+3% DMSO) mixture at a dose volume of 2 mL kg −1 for intravenous studies. Blood samples (0.3 mL) were withdrawn through tail vein from unrestrained animals [20] and collected in heparinized tubes at 0.5, 1, 2, 4, 8 and 24 h after oral administration and 0.083, 0.5, 1, 2, 8, 12 and 18 h after intravenous administration of pioglitazone. Plasma was prepared immediately by centrifugation and stored at -20°C until samples were analyzed.
Pioglitazone concentration was determined as reported by us earlier [9] . In brief, to 100.0 µL of plasma sample, 50.0 µL of rosiglitazone solution (12.5 mg in methanol) as internal standard and 100.0 µL of acetonitrile were added to precipitate the proteins. The mixture was vortex-mixed for 5 min, after which it was centrifuged at 10,000×g for 10 min, and 20.0 µL of the supernatant was injected onto the HPLC system for analysis. The UV detector was set at 269 nm. A C18(2) Luna column (4.6×250 mm, 100 Å; Phenomenex, USA) was maintained at 30°C. The flow rate was 1.

Statistical analysis: Wherever required, the data were analyzed using one-way ANOVA followed by the Newman-Keuls test for multiple comparisons; p<0.05 was considered statistically significant.
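The pharmacokinetic parameters reported below (AUC 0-∞, Kel, T1/2, Cl, Vd) can be obtained by standard non-compartmental analysis of such plasma concentration-time data. The paper does not state which software or method was used, so the following is a generic sketch assuming linear-trapezoidal AUC, a log-linear fit of the terminal points, and an invented mono-exponential i.v. profile:

```python
import math

def auc_trapezoid(times, concs):
    """AUC from the first to the last sampling time by the trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

def nca_parameters(times, concs, dose, n_terminal=3):
    """Kel, T1/2, AUC(0-inf), Cl and Vd from i.v. concentration-time data."""
    ts = times[-n_terminal:]
    ls = [math.log(c) for c in concs[-n_terminal:]]
    n = len(ts)
    mt, ml = sum(ts) / n, sum(ls) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(ts, ls))
             / sum((t - mt) ** 2 for t in ts))
    kel = -slope
    auc_inf = auc_trapezoid(times, concs) + concs[-1] / kel  # tail extrapolation
    return {"kel": kel, "t_half": math.log(2) / kel, "auc_inf": auc_inf,
            "cl": dose / auc_inf, "vd": dose / (auc_inf * kel)}

t = [0.083, 0.5, 1, 2, 8, 12, 18]             # h, the i.v. sampling schedule
c = [10.0 * math.exp(-0.2 * ti) for ti in t]  # invented profile with kel = 0.2/h
pk = nca_parameters(t, c, dose=5.0)
```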
Induction of diabetes:
A single intravenous dose of alloxan (40 mg kg−1) was sufficient to induce diabetes in the rats, and the blood glucose levels were above the threshold (250 mg dL−1) in all the animals during the four weeks. The mortality due to diabetes was about 20%. The blood glucose levels were checked weekly to ensure maintenance of the diabetic state. The initial and final blood glucose values, along with body weights, water intake and urine output, are shown in Table 1.

Pharmacokinetic studies: The mean plasma concentration-time profiles of pioglitazone after its oral and intravenous administration, at doses of 10 and 5 mg kg−1 respectively, to non-diabetic rats, vehicle-treated diabetic rats and quercetin-treated diabetic rats are shown in Fig. 6 and 7, and the relevant pharmacokinetic parameters are shown in Table 2 and 3. One-way ANOVA revealed a significant influence of the treatments on the oral and intravenous pharmacokinetics of pioglitazone. The AUC 0-∞ (81% increase), T 1/2 , Vd and MRT of pioglitazone were significantly increased (p<0.05), and K el and Cl were significantly decreased (p<0.05), in diabetic rats after intravenous administration. Similarly, the AUC 0-∞ of pioglitazone was significantly increased (p<0.05; 73.0% increase) in diabetic rats after oral administration. However, other pharmacokinetic parameters were not significantly affected.
Quercetin treatment had no influence (p>0.05) on the diabetes-induced changes in the oral and intravenous pharmacokinetics of pioglitazone.
DISCUSSION
Pioglitazone is known to sensitize the tissues to insulin to produce its antidiabetic action [21] . If given along with a sulphonylurea or insulin, it often leads to severe hypoglycemia [22,23] . In view of this risk factor, its bioavailability is therapeutically crucial in view of possible pharmacokinetic drug-drug interactions. We had earlier shown that administration of pioglitazone along with quercetin to non-diabetic rats increases its bioavailability, as it is principally metabolized by CYP3A and quercetin inhibits the same. However, it is difficult to extrapolate such possibility in diabetic condition, particularly because diabetic rats have altered GI function and CYP3A activity. Therefore, the present investigation was carried out to study the fate of pharmacokinetic drug-drug interaction between quercetin and pioglitazone in diabetic condition.
These studies revealed that the Cmax and area under the curve (AUC) of orally administered pioglitazone were higher in alloxan-induced diabetic rats than in non-diabetic rats, and the results in non-diabetic rats were in accordance with previous reports [24] . However, when given intravenously, the AUC was higher and clearance was lower, while Cmax remained unaffected, in diabetic rats compared with the non-diabetic group. These observations indicate that the bioavailability of pioglitazone is inherently higher in the diabetic group than in the non-diabetic group. The studies further revealed that pretreatment with quercetin did not influence the increased bioavailability of pioglitazone in diabetic rats, contrary to our previous observation that quercetin increases it in non-diabetic rats [9] .
The GI function studies revealed that the diabetic group exhibited significant delay in gastric emptying and intestinal transit and thus mimicked the condition of diabetic gastroparesis seen in humans after long standing diabetes mellitus. In addition, it was observed that treatment with quercetin significantly prevented these diabetes-induced changes in GI function. Recently, we have reported that reduced NO levels in GI tissues is the pivotal reason for diabetes-induced GI dysfunction [19] . Quercetin is reported to enhance the bioavailability of NO in diabetic rat aortas [25] . Thus, quercetin might have produced beneficial effects in diabetic rats through modulation of NO levels in the GIT.
The delay in gastric emptying and intestinal transit would normally be expected to postpone the Tmax, while the improvement by quercetin would be expected to normalize it. However, the Tmax in the diabetic group was the same as that of the non-diabetic group, indicating that the prevailing GI function did not contribute to the bioavailability of pioglitazone; hence, the quercetin-induced improvement in GI function did not change the Tmax. It is known that diabetic gastroparesis affects the emptying of solid meals while the emptying of liquid meals remains unaltered [26,27] . Incidentally, in the present investigation pioglitazone was administered in solution form.
The diabetes- or quercetin-induced changes in CYP3A activity could be responsible for the changes in Cmax and the reduction in clearance of pioglitazone. The erythromycin N-demethylase assay revealed decreased CYP3A activity in liver and intestinal microsomes obtained from the diabetic group. These observations are in accordance with the earlier reported changes in CYP3A activity in the diabetic condition [10] . The CYP3A activity in these tissues was uninfluenced by quercetin treatment of the diabetic group. In our earlier study we found that quercetin significantly inhibited CYP3A activity in non-diabetic rats, which was not evident in diabetic rats in the present investigation. Western blotting studies have reported that the CYP3A content of liver and intestine is inherently low in the diabetic condition [10] . Thus, inadequate availability of CYP3A and decreased metabolism could be the reason for the higher Cmax and AUC of pioglitazone in the diabetic condition. For the same reason, the quercetin-induced inhibition of CYP3A seen in the non-diabetic group in our earlier studies was not evident in the diabetic group.
Thus, these studies reveal that the bioavailability of pioglitazone is higher in diabetic condition but remains unaffected by quercetin treatment. Part of this observation is contrary to a clinical report wherein type II diabetic patients exhibited normal clearance of pioglitazone [7] . However, the study did not assess CYP3A activity and/or the functional status of GIT in these patients.
CONCLUSION
The present investigation revealed that quercetin treatment improved diabetes-induced GI dysfunction with no concurrent influence on hepatic and intestinal CYP3A activity and, hence, no effect on the bioavailability of pioglitazone. However, extensive clinical pharmacokinetic studies are necessary before these observations can be extrapolated to therapeutic practice.
|
2019-03-12T13:03:27.780Z
|
2009-06-30T00:00:00.000
|
{
"year": 2009,
"sha1": "82d1be5d8cc4bd117e07ebd45b924115e1a02547",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajidsp.2009.118.125",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "838ce81208b2d6a292508fcd951f0959b064e86c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
219082568
|
pes2o/s2orc
|
v3-fos-license
|
Solvent effects leading to a variety of different 2D structures in the self-assembly of a crystalline-coil block copolymer with an amphiphilic corona-forming block
We describe a polyferrocenyldimethylsilane (PFS) block copolymer (BCP), PFS27-b-P(TDMA65-ran-OEGMA69) (the subscripts refer to the mean degrees of polymerization), in which the corona-forming block is a random brush copolymer of hydrophobic tetradecyl methacrylate (TDMA) and hydrophilic oligo(ethylene glycol) methyl ether methacrylate (OEGMA). Thus, the corona is amphiphilic. This BCP generates a remarkable series of different structures when subjected to crystallization-driven self-assembly (CDSA) in solvents of different polarity. Long ribbon-like micelles formed in isopropanol, and their lengths could be controlled using both self-seeding and seeded growth protocols. In hexanol, the BCP formed more complex structures. These objects consisted of oval platelets connected to long fiber-like micelles that were uniform in width but polydisperse in length. In octane, relatively uniform rectangular platelets formed. Finally, a distinct morphology formed in a mixture of octane/hexanol, namely uniform oval structures, whose height corresponded to the fully extended PFS block. Both long and short axes of these ovals increased with the initial annealing temperature and with the BCP concentration. The self-seeding protocol also afforded uniform two-dimensional structures. Seeded growth experiments, in which a solution of the BCP in THF was added to a colloidal solution of the oval micelles led to a linear increase in area while maintaining the aspect ratio of the ovals. These experiments demonstrate the powerful effect of the amphiphilic corona chains on the CDSA of a core crystalline BCP in solvents of different hydrophilicity.
Introduction
Block copolymers (BCPs) with a crystallizable block can self-assemble in selective solvents to form micelle-like objects with a semicrystalline core. 1 At temperatures below the melting point of the core-forming polymer, crystallization can provide the driving force for the assembly. This crystallization-driven self-assembly (CDSA) process leads to different types of colloidally stable structures. The earliest examples of CDSA, with poly(ethylene oxide) (PEO) as the core-forming block, led to square platelet structures. [2][3][4][5] Later examples, with polyferrocenyldimethylsilane (PFS) as the core-forming block, led to rod-like one-dimensional (1D) structures, 6,7 although subsequent examples with shorter corona-forming chains generated more ribbon-like planar (2D) structures, with the core characterized as a single crystal. 8,9 Core-crystalline micelles formed by CDSA have now been reported for BCPs with a broad variety of different core-forming blocks. Examples include poly(3-hexylthiophene), 10 poly(L-lactide), [11][12][13] polycaprolactone (PCL), 14,15 polycarbonate, 16,17 polyethylene (PE), 5,18-20 oligo(p-phenylenevinylene), 21,22 and BCPs with a liquid crystalline block. 23 The finding of both 1D and 2D objects is consistent with the theoretical predictions of Vilgis and Halperin. 24 In their description, the greatest free energy contribution to self-assembly is the formation of a lamellar crystalline core. The number of folds in the core is determined by a balance of energies that include repulsion between solvent-swollen corona chains. They pointed out that corona repulsion in systems with very long corona chains could limit the extent of lateral growth of the semicrystalline core. This would lead to square micelles if the core chains crystallized at equal rates in both planar directions and elongated micelles if the crystal growth was much faster along one of the crystal growth axes.
While the core of these elongated micelles was predicted to have a rectangular cross-section, the overall shape of micelles that includes the solvent-swollen corona can be thought of as cylindrical, if the core cross section is sufficiently narrow.
From this perspective, for a series of BCPs with a crystalline core-forming block of a given length, the length and dimensions of the corona-forming block should play an important role in determining the morphology formed by self-assembly. Most BCPs examined for their CDSA behavior consist of a crystallizable core-forming block coupled to a soluble corona-forming block. The corona dimensions can be manipulated by varying the length of the soluble block. Some control over the corona dimensions is possible by examining self-assembly of a single BCP in different solvents, but this range is often limited. Generally, solvents employed for self-assembly should be poor solvents for the core-forming block and effective or good solvents for the corona-forming block. Typically, one studies the self-assembly in non-polar solvents of BCPs with non-polar corona-forming chains such as polydimethylsiloxane (PDMS), 25,26 polystyrene (PS) 2-5,27 or polyisoprene, 9,26,28-30 and in polar solvents for BCPs with a polar corona-forming chain such as poly(acrylic acid), 31-33 PEO, 17,34-38 polyvinylpyridine (P2VP), 39-45 or poly(N-isopropyl acrylamide). 46-49 Some authors have looked at two-component solvent mixtures, for example tetrahydrofuran/isopropanol (THF/iPrOH) to enhance the solubility of the core-forming block to promote self-assembly, 50 or a hexane/iPrOH mixture to create a solvent in which micelles with both PDMS and P2VP corona chains are colloidally stable. 30 There are very few reports in the literature about how a change in solvent or solvency can affect the morphology of a core-crystalline BCP micelle. Schmalz and coworkers 19 showed that solvent had a strong effect on the self-assembly of PS-b-PE-b-PMMA (PMMA = poly(methyl methacrylate)) and PS-b-PE-b-PS when hot solutions of the polymers were cooled. In relatively poor solvents, phase separation occurred above the melting point of the PE block.
Spherical micelles formed, and the PE block crystallized upon cooling within the confined geometry of the spherical core. In better solvents, micelle formation occurred at lower temperatures, leading to crystallization-driven formation of elongated micelles. Several papers from Xu and coworkers 51-53 have shown that solvents that promote swelling of the corona chains can induce a morphology change from cylinders to spheres. In another paper, 54 this group showed that addition of hexanol to an aqueous dispersion of spindle-like platelet micelles with a PCL core led to disassembly into elongated micelles. Solvent will also affect the rate of crystallization of the core-forming block. For example, we have shown more rapid micelle growth of PFS-b-PDMS micelles in poor solvents for the PFS block like hexane compared to a somewhat better solvent such as ethyl acetate. 55 In principle, a broader range of solvents can be examined if the corona-forming chains consist of an amphiphilic copolymer. Here we examine the self-assembly of a PFS BCP in which the corona chain is a random copolymer of tetradecyl methacrylate (TDMA) and oligo(ethylene glycol) methyl ether methacrylate (OEGMA, M_n ≈ 300). We show that the reactivity ratios of both monomers are close to 1. TDMA was chosen because the polymer has a melting point below room temperature, 56 and in this way we avoid complications of corona chain crystallization. The length of the OEGMA was chosen to be similar to that of the TDMA pendant group. The homopolymer PTDMA is strongly hydrophobic, readily soluble in hexane and other simple alkanes and insoluble in methanol and ethanol. POEGMA is hydrophilic. It is soluble in water as well as in simple alcohols. This imparts unusual solubility characteristics to the ca. 1 : 1 copolymer, which in turn affects the self-assembly of the BCP sample examined here. We examined self-assembly in iPrOH, hexanol and octane, all poor solvents for the core-forming block PFS.
We found that this PFS BCP formed ribbon-like micelles in iPrOH, an unusual mixture of structures in hexanol, uniform rectangular platelets in octane, and uniform oval-shaped platelet micelles in a 1 : 1 (v/v) mixture of octane/hexanol. We explore strategies to control the length of the uniform rod-like micelles and the size of the platelet micelles. It is very interesting that such different shapes can be obtained by CDSA of a single BCP through a simple variation of the solvent medium.
Polymer synthesis and characterization
A random copolymer of tetradecyl methacrylate and oligo(ethylene glycol) methacrylate was synthesized by atom transfer radical polymerization (ATRP) in toluene using 2-azidoethyl 2-bromoisobutyrate as the initiator, as described in the ESI.† The reaction was carried out to a monomer conversion of 65% (1H NMR), and the polymer had a composition with a mole ratio of TDMA/OEGMA of 1 : 1.06. We conclude that the hydrophobic and hydrophilic pendant groups were randomly distributed along the polymer backbone. Because of the amphiphilic nature of the copolymer, it was not possible to completely remove unreacted monomer by selective precipitation of the polymer in methanol or hexane. This mixture was carried forward into the next step of the synthesis. Because of overlapping peaks in the 1H NMR spectrum, it was not possible to resolve the end groups. The degree of polymerization (DP_n) was characterized after coupling to the PFS block.
The azido-end-capped copolymer was coupled to PFS27-C≡CH by Cu(I)-catalyzed azide-alkyne coupling. The synthesis and characterization of PFS27-C≡CH (DP_n(MALDI) = 27, Đ(GPC) = 1.05) has been described previously, and the coupling reaction followed the protocol of our previous publications. 57 The reaction is depicted in Scheme 1. The purification of the BCP to remove homo- and copolymer impurities is described in the ESI,† and the corresponding SEC traces are presented in Fig. S3.† The 1H NMR spectrum of the purified BCP is presented in Fig. S4.† Since the PFS block has a narrow size distribution (Đ = 1.03) and has been characterized by MALDI-TOF measurements, it serves as an excellent NMR reference for characterizing the corona-forming block. In this way we determined that the corona block had an overall DP_n = 134 with 65 TDMA units and 69 OEGMA units, i.e., PFS27-b-P(TDMA65-ran-OEGMA69) (Đ = 1.17).
In parallel, we synthesized individual samples of PTDMA and POEGMA homopolymers (for details, see ESI †). These serve as reference points for exploring the solubility of the components of the 1 : 1 random copolymer.
Solubility and cloud point behavior of PTDMA, POEGMA, and P(TDMA65-ran-OEGMA69)

To begin, we examined the solubility of PTDMA, POEGMA and their copolymer in a series of alkane and alcohol solvents, which exhibit quite distinctive hydrophilicities. PTDMA (at 10 mg mL⁻¹) was very soluble at room temperature (RT, 23 °C) in octane, hexanol, and in a 1 : 1 (v/v) octane/hexanol mixture. It was soluble in hot iPrOH but phase separated upon cooling. A turbidimetric analysis (Fig. 1a) gave a cloud point of ca. 70 °C. POEGMA is insoluble in octane. It is very soluble in water but exhibits upper critical solution temperature (UCST) behavior in alcoholic solvents. 58,59 Our sample at 10 mg mL⁻¹ was soluble in iPrOH at temperatures down to 10 °C, implying that the cloud point is below 10 °C. It was soluble in warm hexanol with a sharp cloud point at 26.5 °C and in hot octane/hexanol (1 : 1 v/v) with a transition between 50 and 55 °C (Fig. 1b).
The copolymer P(TDMA65-ran-OEGMA69) has nearly an equal number of the two pendant groups, which were designed to be of similar length. It exhibited very different solubility behavior. It was soluble in iPrOH and hexanol at temperatures between 10 and 80 °C. At 10 mg mL⁻¹, it became soluble in warm octane/hexanol (1 : 1 v/v) and in octane upon heating above 60 °C (Fig. 1c). This difference in solubility profile is related to the copolymer composition, where the PTDMA could decrease the cloud point of POEGMA in hexanol and enhance its solubility in octane, as shown in Fig. 1b and c. It is known that the UCST of POEGMA or POEGMA-containing polymers shows a dependence on concentration. 59 For the case of P(TDMA65-ran-OEGMA69) in octane, we found that the UCST decreased to ca. 20 °C at 3 mg mL⁻¹ and was undetectable at 1 mg mL⁻¹. The POEGMA end group is also known to have a significant influence on its UCST behavior, where flexible end groups were found to lower the critical temperature while rigid aromatic end groups raised the transition temperature. 59 Within the core-crystalline PFS-based BCP micelles, it is foreseeable that the cloud points of the corona chains in the corresponding media would be increased. These solubility and cloud point measurements serve as a useful guide for defining self-assembly conditions for PFS27-b-P(TDMA65-ran-OEGMA69).
Self-assembly in iPrOH

Transmission electron microscopy (TEM) images (Fig. 2b and S5†) show that the micelles were polydisperse in length but uniform in width (W_n = 30 nm, W_w/W_n = 1.03). Corresponding atomic force microscopy (AFM) images (Fig. S6a†) indicate that the micelles have a mean height H_n = 6.5 nm with a narrow height distribution (H_w/H_n = 1.06). In PFS, the Fe-Fe spacing is 0.65 nm (see the XRD spectrum in Fig. S24†), 60,61 from which we estimate 15 to 19 nm for the length of the fully extended PFS block with DP_n = 27, Đ = 1.05. If the core height is on the order of 6 nm, we can infer that the PFS block forms two to three folds on average as it packs in the core of these micelles. Since the width determined by TEM is primarily sensitive to the high electron density of the PFS core and is much larger than the height determined by AFM, we infer that these micelles are ribbon shaped and not cylindrical. In fact, close inspection of Fig. 2b shows regions of thicker elongated platelet-like structures, which can be seen more clearly in ESI Fig. S5 and S6c.† As a general principle, BCPs with a long or highly solvent-swollen corona-forming block tend to form elongated fiber-like micelles, whereas BCPs with shorter, more compact corona chains form platelet-like 2D structures. 8,62 Given the high solubility of the corona-forming block in iPrOH over the entire temperature range of the self-assembly experiments (Fig. 1c), we believe that the swollen corona in iPrOH promoted the formation of the long ribbon-like structures.
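The chain-folding estimate is simple arithmetic; a minimal sketch (Python) using the 0.65 nm Fe-Fe repeat distance from the XRD data:

```python
FE_FE_SPACING_NM = 0.65  # rise per ferrocenylsilane repeat unit along the chain (XRD)
DP_PFS = 27              # number-average degree of polymerization of the PFS block


def extended_length(dp: int, spacing: float = FE_FE_SPACING_NM) -> float:
    """Contour length (nm) of a fully extended PFS chain of the given DP."""
    return dp * spacing


def chain_traversals(core_height_nm: float, dp: int = DP_PFS) -> float:
    """How many times the chain must traverse a lamellar core of the given height."""
    return extended_length(dp) / core_height_nm


# DP_n = 27 gives ~17.6 nm, within the 15-19 nm range quoted for Đ = 1.05.
print(extended_length(DP_PFS))
# A ~6 nm core height requires ~2.9 traversals, i.e. two to three folds on average.
print(chain_traversals(6.0))
```

This is only a back-of-the-envelope check; it ignores the chain-length distribution and any tilt of the chains in the crystalline core.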
We used two approaches in an attempt to generate elongated micelles of controlled length in iPrOH, namely self-seeding and seeded growth. Both approaches start with a sample of the long micelles shown in Fig. 2b. These micelles were subjected to sonication for 30 min at 23 °C. As seen in the TEM image (Fig. 2c), no long micelles survived, and the resulting micelle fragments were characterized by a mean length of L_n = 48 nm (L_w = 52 nm, L_w/L_n = 1.09). These micelle fragments were used as seeds for micelle growth.
Self-seeding. Self-seeding experiments in solution take advantage of the variation in crystallinity of semicrystalline polymer samples. Upon heating, the least crystalline domains dissolve first into unimers. Only the most crystalline domains survive. They serve as nuclei for the epitaxial deposition of unimers as the solution cools. 63 For PFS BCP micelles, we have explained self-seeding in terms of a Gaussian distribution of melting (dissolution) temperatures, and in this way, we can account for the apparent exponential decrease in the number of surviving seed nuclei as the annealing temperature is increased. 64 Fig. 2a provides an overview of the self-seeding mechanism.
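This picture can be sketched numerically: a Gaussian distribution of dissolution temperatures sets the fraction of seeds that survive annealing, and conservation of polymer mass makes the final micelle length inversely proportional to that fraction. The mean and width used below are hypothetical illustration values, not fitted parameters:

```python
import math


def surviving_seed_fraction(t_anneal: float, t_mean: float, sigma: float) -> float:
    """Fraction of crystalline domains whose dissolution temperature exceeds the
    annealing temperature, for a Gaussian distribution of dissolution temperatures
    (mean t_mean, width sigma). This roughly exponential tail is the picture used
    to explain the drop in seed number with annealing temperature."""
    z = (t_anneal - t_mean) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)


def micelle_length(length_per_initial_seed_nm: float, fraction: float) -> float:
    """At fixed polymer mass, length per surviving micelle scales as 1/fraction."""
    return length_per_initial_seed_nm / fraction


# Hypothetical distribution (t_mean = 75 C, sigma = 5 C): raising the annealing
# temperature sharply reduces the surviving fraction and lengthens the micelles.
for t in (65, 70, 75, 80):
    f = surviving_seed_fraction(t, 75.0, 5.0)
    print(t, f, micelle_length(48.0, f))
```

The monotonic trend, not the specific numbers, is the point; fitting the actual 661 nm (70 °C) and 1208 nm (80 °C) lengths would require the distribution parameters of ref. 64.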
Self-seeding experiments were carried out in which samples of PFS27-b-P(TDMA65-ran-OEGMA69) micelle fragments in iPrOH at 0.05 mg mL⁻¹ were annealed for 30 min at temperatures ranging from 60 to 90 °C and then allowed to cool to RT. Fig. 2e shows that the sample annealed at 70 °C upon cooling yielded micelles of uniform length with L_n = 661 nm, L_w/L_n = 1.01. Fig. S7† shows that the corresponding sample annealed at 80 °C gave micelles characterized by L_n = 1208 nm, L_w/L_n = 1.01. A sample heated to 90 °C led to μm-size branched micelles (Fig. S7†) with elongated protrusions. It is interesting to note that this BCP, which dissolved initially at 80 °C to yield long micelles, behaved very differently when its micelle fragments were heated to 80 or 90 °C. This difference in behavior is likely a consequence of the increase in crystallinity of the PFS block as the micelle fragments were annealed.
Seeded growth. In a seeded growth experiment, one adds a small volume of a concentrated solution of BCP unimer in a good solvent to a dilute suspension of core-crystalline micelle fragments in a selective solvent. If self-nucleation is slow, then the unimer deposits epitaxially on the open ends of the micelle fragments, which serve as seeds for nucleated growth. In this way, one can obtain elongated micelles of uniform length, where the final length obtained is related to the amount of unimer added. For 1D or ribbon-like structures, if the mean number of BCP molecules per unit length does not change, then the final micelle length can be predicted from the ratio of unimer-to-seed. 65 The short micelle fragments from the sonication step were employed as seeds for seeded growth. The initial seed concentration was 0.05 mg mL⁻¹, and four independent vials with the same volume of seed solution were prepared under the same conditions. Different mass ratios of unimer-to-seed (m_unimer/m_seed) were obtained by adding different volumes of unimer solution (10 mg mL⁻¹ in THF). In this way, micelles with different lengths were prepared (Fig. 2f and S8†). The micelles obtained were uniform in length and similar in width to the starting seeds. For example, with m_unimer/m_seed = 4, micelles with L_n = 240 nm, L_w/L_n = 1.02 were obtained (Fig. S8b†). Fig. 2g shows that L_n increased linearly with m_unimer/m_seed. The agreement between the measured L_n values and the theoretical line shows the behavior expected for living CDSA.
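The expected length for living seeded growth follows from mass balance: if the mass per unit length is constant and all unimer adds to seed ends, L_n = L_seed × (1 + m_unimer/m_seed). A short sketch of this relation, using the 48 nm seed length and the ratio of 4 from the experiments above:

```python
def seeded_growth_length(l_seed_nm: float, m_unimer_over_m_seed: float) -> float:
    """Predicted number-average micelle length after living seeded growth,
    assuming constant mass per unit length and negligible self-nucleation."""
    return l_seed_nm * (1.0 + m_unimer_over_m_seed)


# 48 nm seeds grown with m_unimer/m_seed = 4 are predicted to reach 240 nm,
# which matches the reported L_n for that sample.
print(seeded_growth_length(48, 4))
```

The linearity of L_n vs. m_unimer/m_seed in Fig. 2g is exactly this relation; deviations would indicate self-nucleation or a change in packing density.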
In summary, iPrOH is a good solvent for the corona chains of PFS27-b-P(TDMA65-ran-OEGMA69) over the entire temperature range of RT to 80 °C. When the BCP is heated in this solvent, it dissolves and forms ribbon-like micelles upon cooling, driven by the crystallization of the PFS block. Micelle fragments were formed upon sonication of the long micelles at RT. These could be transformed into uniform structures both by self-seeding and by seeded growth.
Self-assembly in hexanol
When a sample of BCP in hexanol was prepared at a concentration of 0.5 mg mL⁻¹, heated at 80 °C for 60 min and then agitated for a few minutes, the solution turned clear, indicating that the polymer had fully dissolved. Upon slow cooling, structures very different from those formed in iPrOH were observed (Fig. 3). These objects consist of oval platelets connected to long fiber-like micelles that are very uniform in width (W_n = 20 nm, W_w/W_n = 1.02). These micelles are polydisperse in length, and one can see shorter micelles with lengths below 500 nm as well as micelles longer than 5 μm. The AFM image in Fig. S9† shows that the platelets do not appear to be flat and that they are typically twice as thick as the fiber-like micelles. There is an overall cluster-like composition to the structures in the images.
In order to explore this self-assembly process, we repeated this experiment but heated the initial solution to 100 °C for 3 h before allowing the sample to cool slowly. As seen in Fig. 3c and d, oval-shaped platelets formed, also attached to a network of very long micelles of uniform width. There are many similarities to the structures formed at 80 °C, but the platelets are considerably larger and more uniform in size. The fiber-like connectors are very long. A lower magnification image in Fig. 3c shows that the overall shape is flower-like with a dark central core, with wavy ensembles of fibers that extend tens of μm from the core. Fig. S10a† shows that sonication of the micelles formed at 80 °C leads to a polydisperse mixture of ill-formed structures. Self-seeding experiments with these fragments also afforded mixed morphologies (see ESI and Fig. S10b†).
Hexanol is a less polar solvent than iPrOH. As shown in Fig. 1c, P(TDMA65-ran-OEGMA69) is soluble at relatively high concentrations over the entire temperature range examined. Two distinct micelle shapes were obtained, suggesting that the interactions between the corona-forming blocks and the solvent can promote more than one kind of morphology.
Self-assembly in octane
The first alkane solvent we examined was decane. A sample of PFS27-b-P(TDMA65-ran-OEGMA69) at 0.5 mg mL⁻¹ dissolved upon heating to 80 °C. Upon cooling, we obtained uniform rectangles with lengths of ca. 4 μm and widths of ca. 900 nm (TEM images, Fig. S11†). These platelets are fascinating structures and very different from the self-assembled structures obtained in iPrOH or hexanol. In the examples shown in Fig. S11,† one can see that the edges show some local curvature and two of the examples are slightly narrower at one end. The frustrating aspect of this experiment is that after many tries, we were unable to reproduce these simple structures, including by varying the polymer concentration and the annealing temperature. These new experiments led to a precipitate at the bottom of the vial. Thus we turned our attention to octane.
A sample of PFS27-b-P(TDMA65-ran-OEGMA69) at 0.5 mg mL⁻¹ in octane also dissolved when heated to 80 °C for 1 h. Upon slow cooling to RT, we again obtained rectangular platelets, as shown in the TEM images in Fig. 4, accompanied by much smaller round spots, which may be due to spherical micelles. The platelets were relatively uniform in size, with a mean long axis of L_n = 2348 nm (L_w/L_n = 1.02) and a mean width of W_n = 566 nm (W_w/W_n = 1.04). Each platelet seemingly contained a dark circle in the center that spanned the width of the object. We tried to vary the self-assembly conditions to optimize formation of the rectangular platelets. All attempts at varying sample concentration, dissolution temperature or cooling rate for octane as a solvent led to mixed morphologies consisting of platelets and spherical micelles. Nevertheless, we could separate the rectangular platelets from the smaller micelles by selective sedimentation. Using gentle centrifugation (1000 rpm, 10 min, 23 °C), we could selectively sediment a powder that represented the material that formed the dark round spots. The platelets remained in suspension. Fig. 4c and d present TEM images of the purified rectangular platelets. An AFM image of one of the separated rectangular platelets (Fig. S12†) shows that it is relatively flat over its entire surface with a mean height of ca. 15 nm, which is consistent with extended PFS27 chains in the core.
The natural habit of PFS homopolymer crystals is a rectangular platelet, reflecting more rapid growth along the long axis. 66 PFS BCPs with short corona chains also form rectangular platelet micelles. 9 In the Vilgis and Halperin model, for BCPs with a preferential crystal growth direction, corona repulsion limits crystal growth in the lateral direction. Monte Carlo simulations by Hu and coworkers 67 on crystallization-driven fiber growth by BCPs predict that growth is retarded in poor solvents for the corona chains. Collapse of the corona chains shields the edges of the growing crystal block, and this effect should also retard 2D growth. From this perspective, the self-assembly of this amphiphilic BCP in octane (and in decane) to give uniform rectangular platelets is unexpected. The platelets themselves are shorter but wider than those formed by the PFS BCP examples previously reported, 9 and this likely reflects contributions of the short PFS27 block as well as the contracted dimensions of the amphiphilic corona block. The large size of the platelets formed in octane suggests that nucleation is a relatively rare event, whereas the uniform size suggests that nucleation occurs more rapidly than growth. Hot octane and hot decane are much better solvents for PFS than hot iPrOH. This increased solvency likely plays an important role in enabling the PFS chains to assemble onto the edges of the platelets in a more extended conformation.
Self-assembly in a mixed solvent of octane/hexanol
Since hexanol and octane gave such different self-assembled structures under similar protocols, we decided to examine self-assembly in a 1 : 1 (v/v) mixture of octane/hexanol. The results were very different but fascinating. PFS27-b-P(TDMA65-ran-OEGMA69) dissolved when heated in the octane/hexanol solvent mixture and, upon cooling, led to the formation of rather uniform planar oval structures, as seen in the TEM image in Fig. 5a. Analysis with ImageJ of more than 200 micelles in several images showed that not only were the areas of low dispersity (A_n = 679 440 nm², A_w/A_n = 1.02) but the long axes (a_n = 1249 nm, a_w/a_n = 1.01) and short axes (b_n = 683 nm, b_w/b_n = 1.01) were also uniform. The aspect ratios were likewise uniform, with a_n/b_n = 1.85 ± 0.06. An AFM image of the ovals is presented in Fig. 5b. The height profile in Fig. 5c shows an overall concave shape with an edge height of ca. 20 nm; the center is somewhat thinner (16 nm). Multiple ovals are shown in the AFM images in Fig. S13.† These images emphasize the uniformity of the ovals and confirm the observation that the edges are somewhat thicker than the interior. The thickness in the centers of the ovals is more than twice that of the ribbon-like structures formed in iPrOH (cf. Fig. 2) and is comparable to the mean fully extended length of the PFS27 block (17.5 nm).
The formation of uniform oval platelets by PFS27-b-P(TDMA65-ran-OEGMA69) in a 1 : 1 (v/v) octane/hexanol mixture is unexpected. We have previously reported that PFS BCPs will form pointed oval micelles of uniform size, but only via a carefully designed seeded growth protocol 68,69 or by addition of substantial amounts of PFS homopolymer. 70 The structures observed here required neither a blend with a large content of PFS homopolymer nor an intentionally added rod-like seed micelle to catalyze or initiate platelet formation. Because the observation of uniform oval micelles was unprecedented, we designed a variety of new experiments, described below, to explore the scope of this self-assembly process.
Varying the size of the oval micelles

Effect of micelle preparation temperature. In this section, we examine the influence of sample preparation temperature on self-assembly. Aliquots of PFS27-b-P(TDMA65-ran-OEGMA69) were added to 1 : 1 (v/v) octane/hexanol at 0.5 mg mL⁻¹, heated to different temperatures for 60 min, cooled slowly to RT and then aged 24 h. For the sample annealed at 60 °C, the micelles had an overall oval shape but showed fuzzy irregularities at the edges (Fig. S14†). Annealing at higher temperatures (65, 70, 75, 80 °C) led to micelles that were more regular in shape, and the edges became better defined (Fig. 5 and S14†). Over this range of temperatures, both the long axes and the short axes increased in length with increasing preparation temperature, as shown in Fig. 6a.
Samples prepared by heating the BCP-solvent mixture to higher temperatures (85, 90 °C) gave more complicated structures. As shown in Fig. S15,† the oval structures that formed were filled with dark occlusions that sometimes protruded through the exterior edges. For ovals prepared at 85 °C, we calculate a_n = 3000 nm, a_w/a_n = 1.01 for the long axis and b_n = 1500 nm, b_w/b_n = 1.01 for the short axis. For ovals prepared at 90 °C, the structures were larger, with a_n = 6200 nm, a_w/a_n = 1.003 for the long axis and b_n = 3100 nm, b_w/b_n = 1.004 for the short axis. An AFM image (Fig. S16†) of several ovals in the 85 °C sample shows that the flat portions of the structure are similar in height (20 nm) to those obtained at 80 °C, but the occlusions protrude as high as 60 to 70 nm.

Fig. 6 Effect of (a) sample preparation temperature and (b) initial concentration for micelle preparation at 80 °C on the dimensions of oval micelles formed by PFS27-b-P(TDMA65-ran-OEGMA69) in 1 : 1 octane/hexanol. Values of the corresponding areas are plotted in Fig. S19.†
Effect of BCP concentration on micelle formation. We also noted that the concentration of PFS27-b-P(TDMA65-ran-OEGMA69), ranging from 0.1 mg mL⁻¹ to 1.2 mg mL⁻¹, affected the size and shape of the micelles formed under the self-assembly conditions described above (1 : 1 v/v octane/hexanol, 80 °C, 1 h, slow cooling, 24 h aging). TEM images of the resulting micelles are shown in Fig. S17.† At 0.1 mg mL⁻¹, dark circles and a few oval structures were observed. At 0.2 and 0.3 mg mL⁻¹, oval structures ca. 5 μm long and filled with occlusions can be seen. At higher concentrations, the structures were smaller. They were somewhat irregular at 0.4 mg mL⁻¹ and became larger and more regular as the sample concentration was increased (Fig. S17†). In Fig. 6b we show that the lengths of both the long axis and the short axis increased linearly as the initial sample concentration was increased from 0.4 to 0.9 mg mL⁻¹. At the highest concentrations (1.0, 1.2 mg mL⁻¹), the structures became more irregular (Fig. S17†), with occlusions and with a background of dark spots similar to those seen in Fig. 4b above, as well as short fiber-like micelles (Fig. S18†).
It is interesting to note that by combining the variables of sample preparation temperature and initial BCP concentration, we can exercise considerable control over the size of the oval micelles obtained. We are able to vary the long axis of these ovals from 390 nm to 4280 nm and the overall area from 65 000 nm² to 6 500 000 nm² in a well-controlled manner. Plots showing the increases in area with self-assembly temperature and with sample concentration are presented in Fig. S19.†

Self-seeding. We carried out self-seeding experiments in octane/hexanol beginning with a sample of the oval micelles shown in Fig. 5. A sample of these micelles was subjected to bath sonication as described above for the ribbon-like micelles in iPrOH. This led to the polydisperse mixture of fragments shown in the TEM images (Fig. S20†). When examined at higher magnification, one can see that the larger objects appear to be aggregates of smaller fragments <50 nm in length. The size distribution of the fragments could not be determined.
Self-seeding experiments were carried out at a 10-fold lower concentration than the original oval micelle preparation at 80 °C. Aliquots (1 mL) of these fragments at 0.05 mg mL⁻¹ were annealed for 30 min at various temperatures (60, 70, 80 °C), cooled directly to RT and allowed to age for 24 h. Somewhat surprisingly, this treatment led to the formation of rounded platelets. Self-seeding at 80 °C yielded uniform ovals with a mean long axis a_n = 1471 nm, a_w/a_n = 1.01, short axis b_n = 826 nm, b_w/b_n = 1.01, and area A_n = 960 443 nm², A_w/A_n = 1.03. One of the unusual features of this process is that direct self-assembly at this low concentration did not generate uniform ovals, whereas the objects obtained here had dimensions not very different from those obtained by direct self-assembly from octane/hexanol at 0.5 mg mL⁻¹ when heated to 80 °C. For example, for the sample shown in Fig. 5 prepared by direct self-assembly, we found a_n = 1249 nm and b_n = 683 nm, both with narrow dispersity.
Self-seeding experiments carried out by sample annealing at 60 and 70 °C gave relatively uniform rounded structures (Fig. S21†) that were not as well defined in shape as those seen in Fig. 7a. For the 60 °C sample, the mean long axis length was 450 nm and the mean short axis width was 293 nm (a_n/b_n ≈ 1.54). For the 70 °C sample, the structures were larger, with a mean long axis length of 590 nm and a mean short axis width of 365 nm (a_n/b_n ≈ 1.62). Both aspect ratios were smaller than those of the ovals formed at 80 °C in Fig. 5 (a_n/b_n ≈ 1.85) and Fig. 7 (a_n/b_n ≈ 1.78).
"Seeded growth" using intact oval micelles as seeds. Seeded growth experiments with the same micelle fragment sample (Fig. S20a †) used for the self-seeding experiments did not give uniform structures (Fig. S22 †). More interesting results were obtained using intact oval micelles as seeds. A sample of the oval micelles shown in Fig. 5 were diluted to 0.05 mg mL À1 in 1 : 1 (v/v) octane/hexanol and placed in a series of vials. Then different aliquots (5, 10, 15, 25 mL) of unimer solution (10 mg mL À1 in THF) were added to each solution of oval micelles and allowed age at room temperature for 7 days. TEM images of these samples are presented in Fig. 8a-d. Values of the long and short axes as well as the area of the oval structures are presented in Table S3. † These ovals obtained by seeded growth show some internal structure in the TEM images in the form of dark spots or circles. At the highest amount of unimer added (m unimer / m seed ¼ 5) the oval structures show a fuzzy boundary. The Fig. 7 (a) TEM image of oval micelles by self-seeding of PFS 27 -b-P(TDMA 65 -ran-OEGMA 69 ) seeds (0.05 mg mL À1 ) at 80 C in octane/hexanol (1 : 1 (v/v)). (b) AFM image and (c) height profile of oval micelles. The oval long axis a n ¼ 1471 nm, a w /a n ¼ 1.01. Short axis b n ¼ 826 nm, b w /b n ¼ 1.01. Area A n ¼ 960 400 nm 2 , A w /A n ¼ 1.03. The edge height is ca. 15 nm and center height is ca. 11 nm.
This journal is © The Royal Society of Chemistry 2020 Chem. Sci., 2020, 11, 4631-4643 | 4639 occlusions seen in the TEM images are not apparent in the corresponding AFM images (Fig. S23 †), where the most apparent feature is the small difference in height between the edges and the interior of the oval. Fig. 8e shows that the mean values of both the long axis a n and the short axis b n increased with the amount of unimer added, but the aspect ratio Scheme 2 (i) Schematic representation of the self-assembly of PFS 27 -b-P(TDMA 65 -ran-OEGMA 69 ) to form oval micelles in octane/hexanol (1 : 1 v/v). (ii) Representation of two living CDSA routes for the formation of uniform 2D oval micelles. Upper pathway: micelle fragments generated by sonication generated oval micelles when subjected to the self-seeding protocol. Middle pathway: addition of unimer to the fragments did not lead to uniform structures. Bottom pathway: addition of unimer to a dispersion of intact oval micelles led to seeded growth in which the surface area increased linearly with the amount of unimer added. remained constant, and the size distribution remained narrow. Fig. 8f reveals that the surface area of these ovals increased in a linear fashion with the ratio m unimer /m seed . This is the expected result if all the unimer grows epitaxially off the perimeter of the ovals and the mean oval thickness does not change.
In Scheme 2 we summarize the processes that led to uniform and regular oval micelles and the various experiments used to modify the size of the oval micelles generated by PFS27-b-P(TDMA65-ran-OEGMA69) in 1 : 1 (v/v) octane/hexanol. Uniform oval micelles formed spontaneously when the BCP at 0.5 mg mL⁻¹ in the mixed solvent was heated to 80 °C and then slowly cooled to RT. Variation of the BCP concentration and of the annealing temperature prior to cooling led to well-defined changes in the oval size without significant changes in the aspect ratio (ca. 1.8), as shown in Fig. 6. Sonication of the oval micelles led to irregular fragments. Self-seeding experiments with these micelle fragments regenerated oval micelles, and curiously, these oval micelles reformed at a much lower concentration (0.05 mg mL⁻¹) than was possible in the initial direct self-assembly step. Seeded growth experiments with the micelle fragments failed to give uniform structures. However, seeded growth experiments starting with intact oval micelles led to larger structures that maintained their aspect ratio and whose surface area increased linearly with the m_unimer/m_oval ratio.
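The linear area increase in 2D seeded growth follows the same mass balance as the 1D case, with area in place of length. A small sketch, assuming constant platelet thickness and approximating the ovals as ellipses of fixed aspect ratio (the A_n and aspect-ratio values are those reported above; the ellipse approximation is ours):

```python
import math


def grown_area(seed_area_nm2: float, m_unimer_over_m_seed: float) -> float:
    """2D seeded growth at constant thickness: area scales as (1 + m_unimer/m_seed)."""
    return seed_area_nm2 * (1.0 + m_unimer_over_m_seed)


def ellipse_axes(area_nm2: float, aspect: float) -> tuple[float, float]:
    """Long and short axes (a, b) of an ellipse with area = pi*a*b/4 and a/b = aspect."""
    a = math.sqrt(4.0 * aspect * area_nm2 / math.pi)
    return a, a / aspect


# Sanity check: the directly assembled ovals (A_n = 679 440 nm2, a_n/b_n = 1.85)
# give a long axis near the measured 1249 nm if treated as ellipses.
A0 = 679_440
print(ellipse_axes(A0, 1.85))

# Seeded growth at fixed aspect ratio: axes grow as the square root of the area.
for ratio in (1, 2, 3, 5):
    a, b = ellipse_axes(grown_area(A0, ratio), 1.85)
    print(ratio, round(a), round(b))
```

Because area is conserved quantity-by-quantity of added unimer, each axis grows only as the square root of (1 + m_unimer/m_seed), consistent with the constant aspect ratio seen in Fig. 8e and the linear area plot in Fig. 8f.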
Conclusions
We report the synthesis of PFS 27 -b-P(TDMA 65 -ran-OEGMA 69 ), a PFS block copolymer with an amphiphilic random copolymer consisting of both hydrophobic C 14 H 29 pendant groups and hydrophilic OEG pendant groups of similar length. This polymer undergoes crystallization-driven self-assembly in a variety of different media, ranging in polarity from iPrOH to octane. In iPrOH, it undergoes the "normal" self-assembly expected for a PFS block copolymer with a short PFS block and a much longer corona, forming long narrow micelles of uniform width when a hot solution of the polymer is allowed to cool. These micelles have a ribbon-like shape (mean width by TEM W n = 30 nm, mean height by AFM H n = 6.5 nm). Upon mild sonication, they form micelle fragments that undergo both self-seeding and seeded growth to form micelles of uniform length and similar width.
In contrast, this BCP forms uniform rectangular platelets with μm dimensions upon cooling hot solutions of the BCP in octane (also decane). Hot solutions of the BCP in hexanol form more complex structures upon cooling. TEM images show both oval platelets and elongated fibers. In a mixed solvent of octane/hexanol (1 : 1 v/v), uniform oval platelet micelles are formed. The height of the micelles is consistent with the fully extended length of the PFS block. The size of the micelles can be varied by changing either the sample dissolution temperature (for a concentration of 0.5 mg mL−1), or for samples heated to 80 °C, by varying the concentration of BCP. When subjected to seeded growth, the area increased linearly with the amount of unimer in THF added. The growth in size preserved the aspect ratio of the ovals (a n /b n = 1.8).
These variations in morphology are most likely due to changes in solvency of the medium for the corona block. The quality of the solvent for the crystallizable block will, of course, affect the driving force for crystallization and the ease of nucleation of this block in solution. These effects are relatively well explored for PFS BCPs. The effect of solvent on the components of the corona forming chain and on the copolymer itself, as seen in the cloud point plots in Fig. 1, are much more striking. We suspect that hydrogen-bonding contributions from the alcohol-containing media as well as overall solvent polarity play major roles in affecting the dimensions of the corona chains in the micelles.
In summary, the introduction of an amphiphilic corona-forming block into coil-crystalline BCPs represents a new concept for self-assembly that appears to offer substantial flexibility in manipulating the creation and shape of uniform 1D and 2D colloidal structures in solution. Since other types of core-crystalline micelles, for example with a conjugated polymer as the core-forming block, have interesting potential applications as electronic or optical materials, the concept of coil-crystalline BCPs with an amphiphilic corona increases the range of possibilities of the block copolymer toolbox.
Conflicts of interest
The authors declare no competing financial interest.
Decreased door-to-balloon time in patients with ST-segment elevation myocardial infarction during the early COVID-19 pandemic in South Korea: An observational study

The coronavirus disease 2019 (COVID-19) resulted in a marked decrease in the number of patient visits for acute myocardial infarction and delayed patient response and intervention in several countries. This study evaluated the effect of the COVID-19 pandemic on the number of patients, patient response time (pain-to-door), and intervention time (door-to-balloon) for patients with ST-segment elevation myocardial infarction (STEMI) and non-ST-segment elevation myocardial infarction (NSTEMI). Patients with STEMI or NSTEMI visiting a hospital in South Korea who underwent primary coronary intervention during the COVID-19 pandemic (January 29, 2020, to December 31, 2020) were compared with those in the equivalent period from 2018 to 2019. Patient response and intervention times were compared for the COVID-19 pandemic window (2020) and the equivalent period from 2018 to 2019. We observed no decrease in the number of patients with STEMI (P = .88) and NSTEMI (P = 1.00) during the COVID-19 pandemic compared to that in the previous years. Patient response times (STEMI: P = .39; NSTEMI: P = .59) during the overall COVID-19 pandemic period did not differ significantly. However, we identified a significant decrease in door-to-balloon time among patients with STEMI (14%; P < .01) during the early COVID-19 pandemic. We found that the number of patients with STEMI and NSTEMI was consistent during the COVID-19 pandemic and that no time delays in patient response and intervention occurred. However, the door-to-balloon time among patients with STEMI significantly reduced during the early COVID-19 pandemic, which could be attributed to decreased emergency care utilization during the early pandemic.
The coronavirus disease 2019 (COVID-19) resulted in a marked decrease in the number of patient visits for acute myocardial infarction and delayed patient response and intervention in several countries. This study evaluated the effect of the COVID-19 pandemic on the number of patients, patient response time (pain-to-door), and intervention time (door-to-balloon) for patients with ST-segment elevation myocardial infarction (STEMI) and non-ST-segment elevation myocardial infarction (NSTEMI). Patients with STEMI or NSTEMI visiting a hospital in South Korea who underwent primary coronary intervention during the COVID-19 pandemic (January 29, 2020, to December 31, 2020) were compared with those in the equivalent period from 2018 to 2019. Patient response and intervention times were compared for the COVID-19 pandemic window (2020) and the equivalent period from 2018 to 2019. We observed no decrease in the number of patients with STEMI (P = .88) and NSTEMI (P = 1.00) during the COVID-19 pandemic compared to that in the previous years. Patient response times (STEMI: P = .39; NSTEMI: P = .59) during the overall COVID-19 pandemic period did not differ significantly. However, we identified a significant decrease in door-to-balloon time among patients with STEMI (14%; P < .01) during the early COVID-19 pandemic. We found that the number of patients with STEMI and NSTEMI was consistent during the COVID-19 pandemic and that no time delays in patient response and intervention occurred. However, the door-to-balloon time among patients with STEMI significantly reduced during the early COVID-19 pandemic, which could be attributed to decreased emergency care utilization during the early pandemic.
INTRODUCTION
Healthcare resources have largely been focused towards response to the coronavirus disease 2019 (COVID-19) pandemic, and concerns regarding decreased availability of medical care for patients with acute myocardial infarction (AMI) have emerged globally. [1][2][3] Considering the possibility of a future pandemic or emergent health scenarios and that healthcare utilization and healthcare systems for acute medical conditions vary by country, identifying the effect of the COVID-19 pandemic on patient response and treatment for acute medical conditions such as AMI is crucial. 4 AMI can be divided into subgroups of ST-segment elevation myocardial infarction (STEMI) and non-ST-segment elevation myocardial infarction (NSTEMI). STEMI is an acute medical condition commonly caused by thrombotic occlusion of the coronary artery, and it is a fatal cardiovascular emergency that requires rapid percutaneous coronary intervention (PCI) within 90 minutes. 5 NSTEMI manifests varying symptoms on presentation and disparate clinical outcomes depending on whether the intervention is early or delayed; PCI has been recommended within 72 hours, depending on the patient's risk category. 6 In many countries, the time taken from pain onset to first medical contact (i.e., pain-to-door time) among patients with STEMI and in-hospital delivery of revascularization (i.e., door-to-balloon time at the coronary intervention laboratory) during the early COVID-19 pandemic differed drastically compared to the same period in the previous year. [7][8][9]
Here, to determine the effect of the COVID-19 pandemic on patient response and intervention for AMI, we examined the time to patient response and PCI among patients with STEMI and NSTEMI across 3 different epidemic waves of COVID-19 in South Korea.
It is made available under a CC-BY-NC-ND 4.0 International license. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. This version posted September 21, 2021; https://doi.org/10.1101/2021.09.17.21263760.
METHODS
We conducted a single-centre, retrospective observational study including patients with STEMI and NSTEMI visiting the Jeonbuk National University hospital between January 1, 2018, and December 31, 2020. The hospital has a tertiary care cardiovascular centre that offers 24/7 coronary intervention covering the North Jeolla province, which has a population of 2 million people. Patients with AMI admitted through the emergency department of the hospital were included in this study. Patients were identified using ICD-10 codes for STEMI (I21.0, I21.1, I21.2, and I21.3), NSTEMI (I21.4 and I22.2), and unspecified (I21.9).
We used the pain-to-door time to assess the time to patient response. Pain-to-door time was identified as the interval from the onset of AMI-related symptoms, including chest discomfort, to the time of first medical contact. Furthermore, we used the door-to-balloon time to examine the time to PCI. The door-to-balloon time was defined as the interval from arrival at the emergency department to successful wire crossing of the culprit lesion through PCI. These proxies are commonly used to assess the patient's health-seeking behavior and emergency cardiac care, respectively. 10 To determine the temporal changes in pain-to-door and door-to-balloon times, we divided the study duration into 3 periods (Period-1: epidemiologic weeks 4-19, Period-2: 20-33, and Period-3: 34-52) based on the official COVID-19 outbreak characteristics in South Korea. 11 12 The incidence rate ratio with 95% confidence intervals (CIs) of patients with STEMI and NSTEMI across the 3 periods was estimated using the weekly count of patients with STEMI and NSTEMI between January 2018 and December 2020. The pain-to-door and door-to-balloon times were tested for normality and presented as medians with interquartile ranges (IQRs). To evaluate the difference in the number of patients with STEMI and NSTEMI and in patient demographics between pre-COVID-19 (2018 and 2019) and COVID-19 pandemic (2020) periods, we conducted an analysis of variance (ANOVA), Kruskal-Wallis one-way ANOVA, or chi-square test, as appropriate.
Furthermore, to determine the difference in the times between pre-COVID-19 and COVID-19 pandemic periods, we conducted a Welch 2-sample t-test or Mann-Whitney test, as appropriate. P < 0.05 was considered statistically significant, and all analyses were performed using R version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria).
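The test-selection logic described above (a parametric Welch t-test when both samples look normal, a Mann-Whitney U test otherwise) can be sketched with scipy. All values below are synthetic stand-ins for door-to-balloon times; the sample sizes, distributions, and the 0.05 normality threshold are illustrative assumptions, not the study's data.

```python
# Sketch of "Welch 2-sample t-test or Mann-Whitney test, as appropriate"
# on synthetic door-to-balloon times (minutes); data are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_covid = rng.lognormal(mean=4.10, sigma=0.3, size=120)  # hypothetical 2018-2019 times
pandemic = rng.lognormal(mean=3.95, sigma=0.3, size=60)    # hypothetical 2020 times

def compare_times(a, b, alpha=0.05):
    """Welch t-test if both samples pass a Shapiro normality check, else Mann-Whitney U."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        res = stats.ttest_ind(a, b, equal_var=False)  # Welch 2-sample t-test
        return "welch", res.pvalue
    res = stats.mannwhitneyu(a, b, alternative="two-sided")
    return "mannwhitney", res.pvalue

test_used, p = compare_times(pre_covid, pandemic)
print(test_used, p)
```

For skewed time-to-event data such as these, the normality check typically fails and the non-parametric branch is taken, matching the paper's use of medians with IQRs.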
RESULTS
In total, 831 patients with acute myocardial infarction visited the cardiovascular centre via the emergency department between 2018 and 2020. We excluded 23 patients who did not have indications for STEMI or NSTEMI. A total of 808 patients were included, of which 439 (54.3%) experienced STEMI and 369 (45.7%) experienced NSTEMI.
We did not observe any significant differences in patient demographics, comorbidities, and history of myocardial infarction between patients with STEMI and NSTEMI (Table 1; Figure 3D).
DISCUSSION
A significant decrease in the number of patients with AMI was observed in many countries during the early COVID-19 pandemic. 13 This observation could be attributed to patient anxiety and desire to avoid contact with patients with COVID-19 in the medical facility during the COVID-19 pandemic, which likely affected the patients' care-seeking behavior. 14 Second, a lockdown strategy to enforce staying at home in these countries likely discouraged patients from seeking medical care. 14 15 Third, a substantially decreased incidence of viral respiratory infections, including influenza, 16 might have decreased the incidence of AMI in these countries. 17 18 Unlike previous studies, we observed no significant decrease in the number of patients with STEMI and NSTEMI. Furthermore, we observed no significant delay in patient response time across the 3 different epidemic periods of COVID-19 in our study.
Demographic characteristics and patient comorbidities are known to contribute to AMI. 15 However, in the present study, we did not identify differences in demographic characteristics and comorbidities of patients between the COVID-19 pandemic period and those in the previous years. In South Korea, a lockdown strategy was not implemented for the overall COVID-19 pandemic period. 19 20 Our finding is consistent with that of a previous study, which reported that countries that implemented partial lockdowns against the COVID-19 pandemic did not observe an effect on the number of patients with STEMI. 13 Therefore, this finding suggests that the COVID-19 pandemic had a limited effect on the healthcare-seeking behavior of patients with STEMI and NSTEMI. Our finding is consistent with that of a previous report that the number of emergency department visits by patients with severe medical conditions did not reduce during the COVID-19 pandemic in South Korea. 21 The results of this study indicate that patient response time did not differ significantly across the 3 different epidemic periods of the COVID-19 pandemic, and this finding supports the hypothesis that the COVID-19 pandemic had a limited effect on the healthcare-seeking behavior of patients with STEMI and NSTEMI.
We observed that the door-to-balloon time significantly reduced (by 14%) during the early COVID-19 pandemic (Period-1). Previous Korean studies reported a decreased number of patient visits to the emergency department (46%-77%) during the early COVID-19 pandemic. [21][22][23] Therefore, the reduction in door-to-balloon time could be attributed to decreased emergency care utilization during the early COVID-19 pandemic, as patients with low severity defer emergency department visits owing to the fear of infection in the medical facility. 22 23 This decrease in emergency care utilization was also observed during the MERS outbreak in 2015 in South Korea. 24 We observed that the pain-to-door and door-to-balloon times among patients with NSTEMI varied over a greater range than those among patients with STEMI. Patients with NSTEMI typically defer the intervention until other underlying conditions are managed or staff hours are available. 6 This could have affected the results of our study, possibly resulting in greater variation in door-to-balloon time among patients with NSTEMI. However, we observed no significant difference between pre-COVID-19 and the COVID-19 pandemic years.
This study had limitations. First, we considered the number of patients in our study as a proxy of the incidence of AMI in the North Jeolla Province. Second, we did not account for patients who transferred to regions outside of the North Jeolla province. However, our study also had several strengths. First, this is the first study to assess changes in the number of patients with STEMI and NSTEMI in South Korea, where no lockdown strategy was incorporated as a measure against the COVID-19 pandemic. Furthermore, our study measured the time between symptom onset to first medical contact and first medical contact to PCI, which is the first evaluation of this kind during the specified time period for patients with STEMI and NSTEMI in South Korea.
CONCLUSION
Our study shows that the number of patients with AMI did not decrease and that patient response and intervention times did not increase during the COVID-19 pandemic. However, the door-to-balloon time among patients with STEMI decreased significantly during the early pandemic, which could be attributed to decreased emergency care utilization during that period.
Acknowledgments
The authors thank Mir Jeon for her kind assistance in collecting the data during the current study.
Patient consent for publication
Not required.
Roughness of molecular property landscapes and its impact on modellability
In molecular discovery and drug design, structure-property relationships and activity landscapes are often qualitatively or quantitatively analyzed to guide the navigation of chemical space. The roughness (or smoothness) of these molecular property landscapes is one of their most studied geometric attributes, as it can characterize the presence of activity cliffs, with rougher landscapes generally expected to pose tougher optimization challenges. Here, we introduce a general, quantitative measure for describing the roughness of molecular property landscapes. The proposed roughness index (ROGI) is loosely inspired by the concept of fractal dimension and strongly correlates with the out-of-sample error achieved by machine learning models on numerous regression tasks.
Introduction
Structure-activity relationships (SARs) and activity landscapes are important concepts in cheminformatics and medicinal chemistry, and they are often used to guide the navigation of chemical space during molecular optimization campaigns (e.g., lead optimization) [1][2][3] . Quantitative SAR (QSAR) modeling uses numerical representations of chemical matter with machine learning (ML) models for the prediction of biological activity. QSAR concepts have been adopted more broadly across chemistry research through the application of structure-property relationships (SPR) and associated QSPR 4 .
Roughness is one of the most frequently discussed attributes of structure-property landscapes, perhaps owing to the interest in the identification of "activity cliffs" in drug design [5][6][7][8] . Activity cliffs are sharp changes in compound activity as a result of seemingly small structural changes, which can present a major obstacle in the development of accurate QSPR models 5,[9][10][11] . As a result, a number of studies have focused on their identification [12][13][14] and prediction [15][16][17] , typically by analyzing or predicting affinity differences in matched molecular pairs. It is clear that the presence (or absence) of activity cliffs is intrinsically linked to the roughness (or smoothness) of the property landscape. Smooth landscapes are generally favored because they lead to better interpretability, as well as predictability, by chemists; they are more easily modeled by ML algorithms; and they facilitate similarity-based virtual screening 18 . These benefits may thus affect strategic decisions during the discovery process, such as which compounds to prioritize for lead optimization.
Given the interest in quantitatively describing structure-property landscapes, different approaches have been developed to analyze their topography. To visualize property landscapes, Peltason et al. 19 have proposed to use multidimensional scaling to project high-dimensional representations onto the 2D plane and display SPRs as three-dimensional landscapes. These 3D landscapes have been combined with SPR matrices 20 and molecular grid maps to provide a tool for their organization and analysis 21 . Image analysis techniques have also been used to classify 3D property landscapes based on their degree of ruggedness 22 , and to define a measure of similarity between them 23 .
Among the indices developed to capture characteristics of property landscapes quantitatively, there is the Structure-Activity Landscape Index (SALI) 14 . SALI is a pairwise score that captures the magnitude of the property change with respect to the distance of two compounds in chemical space, SALI_ij = |f(x_i) - f(x_j)| / d(x_i, x_j), where x is a numerical representation of a molecule, f(x) is its property, and d is a distance metric. SALI effectively corresponds to the slope of a straight line connecting the points (x_i, f(x_i)) and (x_j, f(x_j)) in the metric space defined by d. The largest value in a dataset corresponds to the observed Lipschitz constant of the SPR function given the available data. Heatmaps and graph representations can be obtained from the full SALI matrix and can be used to identify the most significant property cliffs in the dataset. This index is not upper bounded, taking values between zero (when f(x_i) = f(x_j)) and infinity (when d(x_i, x_j) tends to zero).
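As an illustration of the pairwise SALI score, the sketch below computes the full SALI matrix for a toy set of binary fingerprints with hypothetical potency values, using the Jaccard (1 − Tanimoto) distance; all inputs are made up for demonstration.

```python
# Minimal sketch of SALI_ij = |y_i - y_j| / d(x_i, x_j) with a Jaccard
# (1 - Tanimoto) distance over toy binary fingerprints; data are hypothetical.
import numpy as np

def jaccard_distance(a, b):
    """1 - Tanimoto similarity for binary fingerprint vectors."""
    inter = np.sum(a & b)
    union = np.sum(a | b)
    return 1.0 - inter / union

def sali_matrix(fps, y):
    n = len(y)
    sali = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = jaccard_distance(fps[i], fps[j])
            # identical fingerprints (d == 0) give a diverging SALI: the activity-cliff limit
            sali[i, j] = sali[j, i] = np.inf if d == 0 else abs(y[i] - y[j]) / d
    return sali

fps = np.array([[1, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]], dtype=int)
y = np.array([6.2, 8.9, 5.1])   # e.g. hypothetical pIC50 values
S = sali_matrix(fps, y)
print(S[0, 1])  # close pair with a large potency gap -> high SALI
```

The largest finite entry of `S` is the observed Lipschitz constant mentioned in the text; scanning the matrix for large values is exactly the "identify the most significant property cliffs" step.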
As a global, rather than local, measure of roughness for a given molecular dataset, Peltason and Bajorath have proposed the SAR index (SARI) 13 . It is defined as the average of a continuity and a discontinuity score, SARI = ½ [score_cont + (1 − score_disc)].
The continuity score is derived from the property-weighted average of pairwise compound similarity, while the discontinuity score is defined as the average potency difference between ligands with Tanimoto similarity greater than 0.6, multiplied by the pairwise ligand similarity 13 . SARI is conveniently defined between zero (rougher landscape) and one (smoother landscape). However, the raw scores are first standardized based on the mean and standard deviation of the scores obtained for a set of 16 reference datasets. A local version of SARI has also been developed, and has been used to create molecular networks to organize and display similarity and potency relationships within compound datasets 24 .
Another quantitative measure that has been proposed to describe property landscapes is the modellability index (MODI) 25 . MODI tries to predict whether an accurate classification model is achievable, for a given training set, on the basis of the agreement/disagreement in label between nearest neighbor pairs, MODI = (1/K) Σ_{i=1}^{K} N_i^same / N_i, where K is the number of classes, N_i is the number of compounds in class i, and N_i^same is the number of compounds in class i having their nearest neighbor belonging to the same class. MODI is defined between zero and one; the more activity cliffs that are present, the closer to zero it is. The original formulation was later expanded by the same authors 26 , as well as by Ruiz and Gómez-Nieto, who considered within- and between-class nearest neighbor pairs, as well as k-neighbors 27 . The approach was further generalized in order to be applied to regression tasks. Golbraikh et al. 26 did so by considering the performance of k-nearest neighbor models, and Ruiz and Gómez-Nieto by binarizing the dataset 28 . A different approach was instead taken by Marcou et al. 29 , who use kernel-target alignment 30 as a measure of similarity between the descriptor and the property spaces.
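The nearest-neighbor label-agreement idea behind MODI can be sketched in a few lines. The Euclidean metric, descriptors, and labels below are illustrative choices for demonstration, not those of the cited works.

```python
# Hedged sketch of MODI: per class, the fraction of compounds whose nearest
# neighbour (excluding itself) shares their label, averaged over classes.
import numpy as np

def modi(X, labels):
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    # pairwise Euclidean distances; set self-distances to inf to exclude i == j
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn_label = labels[np.argmin(d, axis=1)]
    per_class = [np.mean(nn_label[labels == c] == c) for c in np.unique(labels)]
    return float(np.mean(per_class))

# Two well-separated activity classes -> MODI close to 1 (few "cliffs")
X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
y = [0, 0, 0, 1, 1, 1]
print(modi(X, y))  # -> 1.0 for this cleanly separated toy set
```

Shuffling the labels so that neighbors disagree drives the score toward zero, mirroring the activity-cliff interpretation in the text.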
Despite the development of the quantitative measures mentioned above, a truly general measure of roughness for molecular property landscapes is still missing. Being a local measure, SALI cannot capture the roughness of a molecular property landscape in a single scalar value. While SARI can do so, it relies on user-defined hyperparameters that need to be set heuristically, such as a similarity threshold for the discontinuity score, and reference datasets for standardization. Finally, MODI applies primarily to classification tasks, and extensions to regressions have been challenging. Contrary to SALI and SARI, MODI and related indices have been tested for their ability to anticipate the predictive performance of ML models, which is the primary evaluation approach we adopt in this work. There is a clear relationship between the geometry of structure-property landscapes and the ability of ML algorithms to model it, as observed already by Maggiora 9 .
In this work, we propose a new measure of roughness for metric spaces that is directly applicable to molecular datasets. This Roughness Index (ROGI) captures the global ruggedness character of a normalized dataset as a single scalar value between zero and one, where zero corresponds to a flat surface and one to a surface in which all nearest neighbors display property values at the opposite extremes. Contrary to most of the approaches described above, it has no hyperparameters once a molecular representation and a metric have been defined. It naturally applies to regression tasks for any property of interest, as well as to binary classification tasks. To test the reliability and informativeness of ROGI, we evaluated its ability to anticipate the predictive performance of various ML models on a number of regression tasks, and found that it correlates with out-of-sample model error more reliably than existing indices.
Methods
Chemical spaces, for example stable molecules at ambient temperature and pressure, or more well-defined subsets like the set of drug-like molecules, can be defined as metric spaces, where each molecule is associated with a representation x, and a distance metric d defines molecular dissimilarity. d is non-negative, symmetric, and generally satisfies the triangle inequality for most metrics used in practice. It may be the case that d(x_i, x_j) = 0 even if molecules x_i and x_j are distinct (e.g., binary fingerprints with finite radius or bit collisions), such that the space could be more appropriately described as pseudo-metric. Nevertheless, the above is typically all one can assume about a chemical space, which makes it challenging to define geometric properties, such as roughness, using measures that have been conceived for Euclidean spaces, like those used in topography, geology, and materials science [31][32][33][34][35][36][37][38][39] . The properties of molecules and materials we are interested to predict are described by continuous variables, as in regression tasks, with ML algorithms trying to model the underlying function f that maps molecules to properties, y = f(x). ROGI is loosely inspired by the concept of fractal dimension, which is an index of complexity comparing how some property of an object changes with the scale at which it is measured [40][41][42][43] . For example, by measuring the rate at which the observed coastline length increases as a function of a decreasing measuring unit (e.g., by using an increasingly short measuring stick), the roughness of a coastline can be quantified by its fractal dimension 40,44 . Essentially, an increasingly coarse-grained view of a certain object (e.g., a coastline) is taken, and the rate at which some of its properties change relates to the object's complexity.
In the same vein, to describe the roughness of a molecular property landscape, we progressively coarse-grain a molecular dataset and observe how the dispersion of a molecular property of interest is affected.
Formulation of the roughness index
The intuition behind the proposed approach is depicted in Figure 1. For this example, consider a dataset of N molecules x_i and associated property values y_i, where i = 1, ..., N and y_i = f(x_i). Assume normalized, pairwise distances between all molecules in the dataset such that d(x_i, x_j) ∈ [0, 1]. We then cluster the dataset given different distance thresholds t using complete-linkage clustering, such that the distance of any two elements in a cluster is at most t (Figure 1a). Given y is a continuous property, we measure dispersion using the second central moment of its distribution, and more specifically we take its standard deviation σ. For every distance threshold t, we consider a dataset {(ȳ_k, w_k)}, k = 1, ..., K, where K is the number of clusters, ȳ_k is the average molecular property within the cluster k, and w_k is the cluster size. The weighted standard deviation, σ_t, of {ȳ_k} is computed based on the weights w_k (Figure 1b). This is equivalent to assigning the average property value to all members of each cluster and then computing the standard deviation for the whole dataset. At t = 0, each molecule belongs to its separate cluster, and σ_0 is the standard deviation of y values in the original dataset. When t = 1, the dataset is described by a single cluster with zero standard deviation. At intermediate values of t, we effectively have a coarse-grained version of the dataset where each cluster is represented by a fictitious average molecule with an average property value (Figure 1a).
σ_t is guaranteed to decrease monotonically, from its original value σ_0 to zero, as t goes from zero to one. The rate at which we lose property dispersion is measured by the area under the curve plotting dispersion loss as a function of clustering distance t. Dispersion loss is defined as 2(σ_0 − σ_t), with the factor of 2 used for normalization.
As we coarse-grain the molecular dataset, we monitor the loss in dispersion (Figure 1). Intuitively, if similar molecules have extremely different property values, they will be clustered at low t values and the dispersion across clusters will decrease rapidly. Conversely, if similar molecules have similar property values, replacing these by their average will have a small effect on the overall dispersion of the property across clusters, such that the dispersion loss will increase slowly. To measure how quickly dispersion is lost as t increases, we integrate between zero and one. For normalized property values and pairwise distances, ROGI is thus defined as ROGI = 2 ∫₀¹ (σ_0 − σ_t) dt, where σ_0 and σ_t are here the standard deviations obtained from normalized property values (Figure 1c). Note that, while the ROGI was primarily devised for regression, it may also be applied to binary classification as is. In the future, expanding ROGI to multi-class classification may also be possible by considering, e.g., information entropy as a measure of dispersion.
The above is equivalent to computing ROGI for the original dataset, before scaling it according to the largest molecular distance and property ranges, ROGI = 2 / (Δy · d_max) ∫₀^{d_max} (σ_0 − σ_t) dt, where Δy = y_max − y_min is the property range and d_max is the largest distance between any two elements in the dataset. The term before the integral normalizes the index between zero and one. In fact, given any set of property values, the largest standard deviation achievable is Δy/2. And given that d_max is the largest distance attainable based on the chosen representation and metric, it is also the largest value of t, for which only one cluster exists. When the metric used is the Jaccard distance between binary fingerprints (i.e., d = 1 − T_s, where T_s is the Tanimoto similarity widely used for structure comparison 45,46 ), d_max = 1, such that distances are already normalized. When d_max is not known in advance, such as for p-norm distances when using descriptors, d_max is the largest distance in principle attainable between any two molecules in chemical space. As this information is usually not available, d_max can be approximated by the largest distance within the hyper-rectangle defined by the descriptor values.
For any dataset, the above integral can be approximated numerically according to the available resolution of t, as only a finite set of pairwise distances is available, which poses a limit on how often the clusters of the dataset change. In addition, distances are unlikely to be uniformly distributed. In our implementation, we use the trapezoidal rule (Figure 1c). The number of potential clustering thresholds grows quadratically with dataset size, so bounding it can significantly reduce the cost of computing the ROGI for large datasets without losing much accuracy. Overall, the computation of the ROGI inherits the quadratic scaling of hierarchical clustering, where the cost depends on the number of clusters K and, quadratically, on the number of elements N in the molecular dataset.
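Putting the formulation together, a minimal sketch of the ROGI computation might look as follows. It assumes Euclidean distances over arbitrary descriptors, a fixed grid of 50 thresholds rather than the data-driven resolution discussed above, and scipy's complete-linkage clustering; it is an illustration under those assumptions, not the authors' implementation.

```python
# Sketch of ROGI: normalise distances and property, coarse-grain with
# complete-linkage clustering at increasing thresholds t, and integrate
# the dispersion loss 2*(sigma_0 - sigma_t) over t in [0, 1].
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def rogi(X, y, n_thresholds=50):
    y = np.asarray(y, dtype=float)
    y = (y - y.min()) / (y.max() - y.min())    # property to [0, 1]
    d = pdist(np.asarray(X, dtype=float))      # condensed pairwise distances
    d = d / d.max()                            # distances to [0, 1]
    Z = linkage(d, method="complete")
    sd0 = y.std()
    ts = np.linspace(0.0, 1.0, n_thresholds)
    sds = []
    for t in ts:
        labels = fcluster(Z, t=t, criterion="distance")
        # assign each member its cluster-mean property, then take the overall std
        means = {c: y[labels == c].mean() for c in np.unique(labels)}
        sds.append(np.array([means[c] for c in labels]).std())
    loss = 2.0 * (sd0 - np.array(sds))         # dispersion loss; factor 2 normalises
    # trapezoidal rule over the threshold grid
    return float(np.sum((loss[1:] + loss[:-1]) / 2.0 * np.diff(ts)))

rng = np.random.default_rng(1)
X = rng.random((40, 3))
smooth = X[:, 0]            # property tracks position -> smoother landscape
rough = rng.random(40)      # property unrelated to position -> rougher landscape
print(rogi(X, smooth), rogi(X, rough))
```

As expected from the text, a property that varies smoothly over the descriptor space loses dispersion slowly (low ROGI), while an uncorrelated property loses it quickly (higher ROGI), and both values fall in [0, 1].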
Datasets and data analysis
Evaluation of the roughness index. As there is no unambiguous definition or ground-truth value for molecular dataset roughness, we rely on the connection between the roughness of a property landscape and the ability of a ML model to accurately model it. Given a certain structure-property landscape, the more data and/or the more expressive the ML model used, the more accurate predictions one should be able to achieve in a random cross-validation. Given the same amount of data available and the same ML model, smoother landscapes should be more easily modeled than rougher ones. We thus evaluated the ability of ROGI to predict out-of-sample model error on a variety of datasets. For regression tasks, we examine the relative, i.e., normalized, root-mean-square error (RMSE) in a random cross validation. For classification, we examine the correlation between the ROGI and binary accuracy.
Toy datasets. Six two-dimensional analytical functions (F1 to F6), for which roughness can be qualitatively assessed visually, were used to validate the ROGI approach. Datasets were created by sampling uniformly from the domain [0,1] 2 of these functions. Details about these analytical functions and their implementation are provided at https://github.com/coleygroup/rogi-results.
Chemistry datasets. Three sets of regression tasks were used in this work. Structure-property landscapes related to regression tasks were retrieved from the Therapeutic Data Commons (TDC) 47 using the Python library PyTDC (v. 0.3.6), and from the previous work of van Tilborg et al. 11. A total of 55 regression datasets, split across three groups, were considered: (1) the group referred to as "ZINC+GuacaMol", comprising 13 datasets with 2000 molecules randomly sampled from ZINC 48; (2) pharmaco-kinetic and toxicological regression tasks from the TDC 47; and (3) ChEMBL bioactivity datasets from van Tilborg et al. 11. To reduce the computational cost of performing these tests, dataset sizes were capped at 10,000 molecules; datasets containing a larger number of entries were subsampled at random (using a fixed seed for reproducibility). For classification, 50 pharmaco-kinetic and toxicological datasets associated with binary classification tasks were also taken from the TDC 47,76, with 19 datasets from ToxCast 77 selected reproducibly at random. As with regression tasks, dataset sizes were capped at 10,000 molecules.
Machine learning models. The above datasets were modeled with a range of baseline ML algorithms available in scikit-learn (v1.1.1) 78 . We selected five approaches to cover nearest neighbor, linear, tree-based, kernel, and deep learning methods. More specifically, for regression we used k-nearest neighbor (KNN) regression, partial least squares (PLS) regression, random forest (RF) regression, support vector regression (SVR), and a multi-layer perceptron (MLP). Similarly, we used KNN classification, logistic regression (LR), RF classification, support vector classification (SVC), and an MLP for classification tasks. In all cases, we used the default hyperparameters in scikit-learn, with the exception of RF for which we used 50 trees.
Chemical representations. Molecules were represented either by fingerprints or a set of descriptors. We used Morgan binary fingerprints as implemented in RDKit 79 (v2022.03), with 2048 bits and radius 2. As descriptors, we chose a set of 16 physico-chemical properties generally applicable across tasks: molecular weight, fraction of sp 3 carbons, number of hydrogen bonds acceptors and donors, number of nitrogen and oxygen atoms, number of NH and OH groups, number of aliphatic and aromatic rings, number of aliphatic and aromatic heterocycles, number of rotatable bonds, total polar surface area, LogP 51 , and QED 50 . The descriptors chosen were not meant to be an ideal molecular representation for all prediction tasks studied, but simply a hypothetical one.
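The Jaccard distance referenced earlier for binary fingerprints ($d = 1 - T$, with $T$ the Tanimoto similarity) reduces to simple bit arithmetic. In practice one would compute it on RDKit fingerprint objects; the plain NumPy sketch below just makes the definition explicit.

```python
import numpy as np

def jaccard_distance(fp_a, fp_b):
    """Jaccard distance d = 1 - T between two binary fingerprints,
    where T = |a AND b| / |a OR b| is the Tanimoto similarity."""
    a = np.asarray(fp_a, dtype=bool)
    b = np.asarray(fp_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:                      # two all-zero fingerprints
        return 0.0
    return 1.0 - np.logical_and(a, b).sum() / union
```

Identical fingerprints give 0, disjoint ones give 1, and partially overlapping ones fall in between.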
Validation on toy datasets
A set of six toy surfaces is used to demonstrate the behavior of ROGI (Figure 2a). Intuitively, roughness should increase from the continuous surface F1 to F3, and from the binary surface F4 to F6. To test this, we draw a uniform sample of size 100 from these two-dimensional surfaces, simulating a discrete molecular dataset (Figure 2b). Clustering was performed based on pairwise Euclidean distances, and ROGI was computed as the area under the curve (Figure 2c), as described above. The roughness ranking suggested by the ROGI values closely follows the intuition one might have from visual inspection. Properties that change slowly with respect to the input representation are smoother than those that tend to change more abruptly, and the more minima and maxima are present in the landscape, the rougher it tends to be (larger ROGI values). This trend is especially noticeable for the surfaces with binary property values. These surfaces display sharp cliffs, and some are highly multimodal, resulting in the presence of many cliffs. The larger the cliff area in the property landscape, the rougher it is, as reflected by the ROGI values. Note that extreme ROGI values are obtained for flat landscapes (ROGI of zero), and for landscapes in which all nearest-neighbor pairs have opposite property values (ROGI of one; Figure S1).

Figure 3 shows parity plots that compare the average ROGI to the average relative, i.e., normalized, root-mean-square error (RMSE) obtained with a RF model for datasets of different sizes, sampled 50 times uniformly from the surfaces in Figure 2a. ROGI was able to correctly rank the difficulty of each regression task, with linear and monotonic correlations (Pearson correlation coefficients >0.9), and with F1-F4 being deemed considerably less challenging than F5 and F6. A trend where the slope of the line of best fit decreases with increasing dataset size is visible. This effect is expected: the ROGI of a property landscape has a well-defined value, even though the ROGI computed from a small sample is only an estimate (Figure S2), whereas the accuracy of a model should increase (hence RMSE decrease) with increasing training set size. The effect of dataset size is thus a potentially confounding factor when comparing the roughness of different properties using different datasets, as we later demonstrate for datasets of binary molecular properties (Figure 5).
Figure 3 | Correlation between the roughness index and cross-validated model error for toy datasets of different sizes. The performance of a random forest model was evaluated as the relative (normalized) RMSE, using 10-fold cross validation. Datasets of different sizes (10, 50, 100, 250, 500, 1000) were sampled from the surfaces in Figure 2a. For each dataset size, 50 datasets were sampled, and both the relative RMSE and the ROGI were computed. The parity plots display the average relative RMSE against the ROGI across the 50 sample sets for F1-F6 and for the different dataset sizes tested.
Roughness of continuous molecular properties
As done above for the toy surfaces, we evaluated the ability of ROGI to capture the roughness of a property landscape by testing its correlation with cross-validated model error. Here, we do so on a suite of 55 regression tasks (Methods). Because ROGI is model-independent, we performed these tests using five different ML algorithms, covering five different classes of supervised learning methods: k-nearest neighbors (KNN), linear models (partial least squares, PLS, for regression; logistic regression, LR, for classification), random forest (RF), support vector machines (SVR for regression; SVC for classification), and deep learning (multi-layer perceptron, MLP). Molecules in all datasets were represented either via Morgan fingerprints or a set of physico-chemical descriptors, as described in the Methods section.
ROGI positively, and often strongly, correlates with the predictive error of all regression ML models tested (Figure 4). This is particularly true for the datasets based on a randomly sampled subset of ZINC+GuacaMol and the pharmaco-kinetic and toxicology datasets from the TDC 47 , both for fingerprint and descriptor representations. With the exception of ZINC+GuacaMol with RF and descriptors, correlations between ROGI values and model errors are above 0.8, and typically around or above 0.9. While we expected the range of dataset sizes (from 642 to 10,000) for the TDC tasks to worsen the correlation due to the size-dependent performance of ML models, correlations for the TDC datasets were generally strong (minimum correlation of 0.88; Figure 4).
The ZINC+GuacaMol dataset is particularly informative because it considers the same set of 2000 molecules but different properties thereof. For this group of datasets, the only case in which we did not observe a correlation between ROGI and cross-validated model error was when a RF model was used with molecules represented via descriptors. In this case, RF was able to predict the smoother and rougher properties with a similar degree of accuracy. These properties are defined as a combination of physico-chemical descriptors. As these descriptors were also used as input for the regression tasks, much of the roughness may be due to the presence of uninformative descriptors among informative ones, which RF was able to filter out more efficiently than other ML models.

Figure 4 | Correlation between the ROGI and cross-validated model error for the 55 regression tasks. The relative RMSE is the average, normalized RMSE obtained from 5-fold cross validation. Details of the fingerprints and descriptors used as molecular representations can be found in the Methods. In each plot, the Pearson correlation coefficient (r) between roughness and model error is reported. KNN = k-nearest neighbor regression; PLS = partial least squares regression; RF = random forest; SVR = support vector regression; MLP = multi-layer perceptron.
For the ChEMBL datasets, ROGI displayed moderate-to-strong (r = 0.56-0.89) correlations with model error when representing molecules with physico-chemical descriptors, but only weak correlations (r = 0.15-0.39) when representing molecules with fingerprints. One possible reason for the lower correlations observed for the ChEMBL dataset is the much smaller range of both ROGI values and model errors obtained (Figure S4). The smaller the range of RMSEs, the more accurate ROGI estimates need to be to linearly correlate with model error. When using fingerprints, the KNN model returned RMSEs between 0.09 and 0.16 for ChEMBL, between 0.02 and 0.33 for TDC, and between 0.00 and 0.31 for ZINC+GuacaMol. Similar RMSE ranges were observed when using descriptors, yet the tight distribution of ROGI values associated with fingerprints might have exacerbated the issue (ROGI values of 0.04-0.11 for fingerprints, 0.18-0.38 for descriptors).

ROGI values for fingerprint representations were observed to be smaller, and in a tighter range, than those obtained for descriptors across all datasets. This effect is caused by a different distribution of pairwise distances. Tanimoto distances between molecules described with fingerprints are generally larger than distances obtained with the Euclidean metric applied to descriptors (Figure S5). Given the same set of molecules and property values, smaller distances between molecules imply a rougher surface, which is captured by ROGI. With larger distances between molecules instead, the ROGI estimate will tend toward lower values, suggesting a smoother surface. In the limit of all molecules being maximally distant from each other (i.e., all pairwise distances equal to one), ROGI will be equal to zero, regardless of how the property values are distributed across molecules.
In this case, however, a ROGI value of zero indicates a lack of sufficient information to assess the roughness of the structure-property landscape, as opposed to being evidence of a smooth surface. While these are extreme scenarios, it is important to interpret the ROGI value in the context of the dataset and distribution of distances between molecules.
As a comparison to existing approaches, the correlation of SARI with model error was evaluated on the ChEMBL dataset. Only this dataset was considered because SARI was developed specifically for protein-ligand binding affinity. We generally found significantly lower correlations between SARI and model error than those observed for ROGI (Figure S6). We also performed the same analysis, on all datasets, with the regression MODI (RMODI) described by Ruiz and Gómez-Nieto 28. RMODI returned correlations with model errors higher than those obtained with SARI. However, with the exception of 5/30 instances (4 for ChEMBL with fingerprints, 1 for ZINC+GuacaMol with descriptors and RF), ROGI consistently provided stronger correlations (Table 1 and Figure S7). To reiterate an earlier observation, ChEMBL with fingerprints exhibits a narrow range of both ROGI scores and model RMSEs, which worsened the quantitative correlation, suggesting that consideration of multiple metrics in a consensus approach may be beneficial.
Roughness of binary molecular properties
While ROGI was developed primarily for continuous structure-property landscapes (regression), it may be applied as is to discontinuous ones (e.g., binary classification). Figure 5 shows the correlation between ROGI values and the binary accuracy of classification models. These correlations were above 0.6 for all ML models using fingerprints as input, and above 0.8 for all models using descriptors (Figure 5). Here too, the datasets considered had a wide range of sizes, from 280 to 10,000, and dataset size is a confounding factor: most of the outliers, i.e., datasets that returned lower accuracy than expected given their ROGI value, were the smallest datasets (small circle markers in Figure 5).
The same analysis was performed with MODI, an established index for binary classification. While virtually no correlation between MODI and binary accuracy was observed (Table 2 and Figure S8), strong correlations were observed when balanced accuracy was used as the performance measure (Table 2 and Figure S9). The opposite was observed for ROGI, which is negatively correlated with balanced accuracy (Table 2 and Figure S10). These observations are consistent with how the two indices are defined. ROGI is a measure of global roughness and weights all instances equally, leading to lower values for more imbalanced datasets (e.g., S4 in Figure 2). This bias is in line with how binary accuracy is defined. On the other hand, MODI considers all instances of the positive and negative classes separately, takes the fraction of their nearest neighbors that belong to the same class, and averages the two, effectively upweighting the importance of the minority class. Balanced accuracy similarly upweights the minority class by averaging true positive and true negative rates.
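The distinction drawn here between binary accuracy and balanced accuracy is easy to make concrete: with a 90:10 class split, a trivial majority-class predictor scores 0.9 accuracy but only 0.5 balanced accuracy. A sketch with hypothetical labels:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correctly classified instances (weights all equally)."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, which upweights the minority class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))
```

This asymmetry is why ROGI, which weights all instances equally, tracks binary accuracy, while MODI tracks balanced accuracy.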
In summary, on balanced datasets ( Figure S11) or for information retrieval tasks on imbalanced datasets, MODI is likely to be a better index of modellability. If the analysis of roughness or the overall fraction of correctly classified instances is instead the main goal, ROGI is likely to be more suitable.
Visualizing structure-property landscapes
Visual inspection of a structure-property landscape can provide qualitative insight into its roughness. While many dimensionality reduction approaches have been used to project molecular datasets onto the 2D plane, multidimensional scaling (MDS) is a natural choice 19 when datasets are relatively small (up to a few thousand molecules) because it tries to preserve the proportionality of pairwise distances.
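A minimal projection step might look as follows, using scikit-learn's MDS with a precomputed molecular distance matrix; the parameter choices are illustrative, and the property values would then be plotted as the third axis.

```python
import numpy as np
from sklearn.manifold import MDS

def project_2d(dist_matrix, seed=0):
    """Embed molecules in 2D from a precomputed pairwise distance
    matrix, approximately preserving distance proportionality."""
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(np.asarray(dist_matrix, dtype=float))
```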
When differences in landscape roughness are large enough, they are evident by visual inspection. Figure 6 shows 3D landscapes for three different datasets available from the TDC, with increasing roughness. In these plots, the first two dimensions are unitless coordinates used by MDS for the two-dimensional projection, while the third dimension is the property associated with each molecule. The ROGI value obtained for these datasets is in agreement with the degree of roughness that can be visually evaluated. Half life is the smoothest landscape among the three, with a ROGI of 0.09: most of the landscape is relatively flat, with only one noticeable peak of high half life and two minor ones. Hepatocyte clearance is instead the roughest landscape, with a ROGI of 0.56: aside from a flat region of low clearance with low data density, the landscape is highly rugged, with seemingly very similar molecules having very different clearance profiles. Finally, hydration free energy sits in the middle, with a ROGI of 0.15; this landscape does not have large regions that are particularly flat or rough, but is somewhat rugged throughout. Three-dimensional landscapes for all 12 TDC datasets studied here are shown in Figure S12.
Discussion
Despite its quadratic scaling with dataset size, ROGI has a few advantages over modellability and SAR indices currently used. First, it is a general approach that may be applied to any structure-property relationships, rather than being confined to biological activity or requiring calibration. While it was primarily conceived for regression tasks, it may be applied to binary classification tasks too. Second, it more strongly correlates with model error than existing indices for regression, suggesting it better captures the roughness of molecular datasets by this measure. We reiterate that there is no ground-truth roughness value for molecular datasets to compare to. However, roughness is expected to relate to modellability. Hence we used model error in out-of-sample predictions as a way to quantitatively evaluate ROGI.
It is important to keep in mind that ROGI strongly depends on the molecular representation and the distance metric used. To some extent, this requirement is made necessary by the nature of chemical space, which does not have an inherent or obvious metric. This dependence may make comparing ROGI values obtained for different representations challenging. In particular, shifts in pairwise distance distributions between representations result in accompanying shifts in ROGI values (Figure S5). While in specific cases it may be possible to match distance distributions to make ROGI values for different representations comparable, a general solution may be elusive given how the notion of proximity is tightly linked to the definition of the metric being applied. Yet, it might be possible to use ROGI to compare variants of the same representation, such as different types of fingerprints or different sets of physico-chemical descriptors. The concept of intrinsic dimensionality, the smallest number of variables needed to faithfully represent a dataset, is also dependent on the metric being applied and has been studied in the context of molecular simulations 80 and QSAR feature selection 81.
The main difference from ROGI is that intrinsic dimensionality considers the structure of the dataset on its own, not in relation to a molecular property. However, further study of intrinsic dimensionality for molecular datasets may help quantify how the notion of proximity depends on the metric chosen and how it can affect ROGI values.
In practice, ROGI values are always estimates based on a finite sample of molecules. Therefore, ROGI also depends on the size of the dataset considered. In general, rougher landscapes are expected to require larger datasets for ROGI to converge to its true value (Figure S2). For the ZINC+GuacaMol datasets, 1000 molecules were sufficient to accurately estimate ROGI, while 100 were enough only for some properties (Figures S13 and S14). Analyzing the distribution of ROGI values for subsets of the data of various sizes is a simple way to assess whether the estimate may have converged. Note, however, that this is a necessary but not sufficient condition to guarantee convergence. An approach to quantitatively assess convergence is to estimate the uncertainty of the ROGI estimate. While we do not provide a statistical estimator of ROGI uncertainty, a general approach to estimate uncertainty is the bootstrap 82. Indeed, we find that bootstrap estimates of ROGI uncertainty correlate with its error, despite a general tendency to underestimate it (Figures S15 and S16).
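The bootstrap step just mentioned can be sketched generically; here `statistic` stands in for a ROGI recomputation on resampled molecules (in real use one would resample rows of the distance matrix together with the property values — this one-dimensional version is only illustrative).

```python
import numpy as np

def bootstrap_ci(sample, statistic, n_boot=200, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of a
    dataset, e.g. a ROGI estimate recomputed on resampled molecules."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    stats = [
        statistic(rng.choice(sample, size=len(sample), replace=True))
        for _ in range(n_boot)
    ]
    lo, hi = np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])
    return float(lo), float(hi)
```

The width of the resulting interval is the uncertainty proxy compared against the true ROGI error in Figures S15 and S16.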
Finally, ROGI is affected by noise in the molecular property for which roughness is being computed.
Because ROGI does not model aleatoric noise, all property values are assumed to be exact. As such, noise (e.g., due to measurement error) introduces biases in the estimate that depend on the roughness of the original landscape and on the details of the noise. In general, ROGI is overestimated for smooth, especially flat, landscapes, while it is underestimated for rough landscapes (Figure S3). In specific fortuitous cases, in which the landscape displays a medium degree of roughness, or in which smooth and rough regions coexist in different parts of the landscape, cancellation of error may lead to unbiased ROGI estimates despite the presence of noise. A possible approach to mitigate the effect of aleatoric noise could be to build a regression model that is able to take noise into account, and replace the measured property values with modeled ones.
Rather than evaluating correlation with regression error, it is natural to ask how the roughness of a molecular landscape affects molecular optimization performance. However, this is not a straightforward question to answer, as optimization introduces several considerations and confounding variables. First, optimization performance is expected to depend on the optimization strategy adopted (e.g., gradient, evolutionary, model-based) more strongly than regression performance does on the ML model chosen. Second, contrary to regression, optimization difficulty depends on the distribution of property values in an asymmetric fashion. Specifically, optimization difficulty depends on how skewed a property distribution is toward the optimal/preferred values. A "needle in a haystack" scenario is challenging partly because there are only a few solutions among many available options. If the situation were reversed (e.g., very few bad molecules, and mostly good ones), the optimization task would become trivial. Roughness and regression difficulty are not affected by whether the distribution of property values is skewed toward larger or smaller values. Finally, multiple performance measures for optimization are available, each giving more weight to different aspects of the optimization depending on the application (e.g., best observed value, sample efficiency), such that a universally accepted measure of performance is harder to establish. For these reasons, the development of a quantitative measure of molecular optimization difficulty requires substantial modifications to ROGI and is thus left for future work. One possibility may be to consider higher, odd-order moments of the property distribution rather than its variance.
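As a concrete instance of the closing suggestion, a signed, odd-order moment such as the sample skewness distinguishes a needle-in-a-haystack property distribution from its easy mirror image, which the variance (and hence ROGI) cannot; the estimator choice below is ours, not the authors'.

```python
import numpy as np

def skewness(y):
    """Third standardized moment of the property distribution.
    A large positive value flags a 'needle in a haystack' of high
    values; the sign flips for the trivial reversed scenario."""
    y = np.asarray(y, dtype=float)
    centered = y - y.mean()
    return float(np.mean(centered ** 3) / np.std(y) ** 3)
```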
Conclusion
We have developed and presented ROGI, a quantitative measure of structure-property landscape roughness for molecular datasets that may be used for exploratory data analysis in molecular design campaigns. ROGI is applicable to any structure-property relationship for which property values and pairwise distances between molecules are available. We have tested the ability of ROGI to correlate with the error of regression ML models on 55 datasets covering a broad range of molecular properties, and have generally found strong correlations between ROGI values and model errors. For regression tasks, ROGI correlated with model error better than existing indices of landscape modellability and roughness. For binary classification tasks, both ROGI and existing indices proved valuable depending on the degree of dataset balance and whether accuracy or balanced accuracy is the performance measure of interest. Future work will focus on expanding ROGI to multi-label classification and to molecular optimization. Finally, we note that ROGI can be applied to (bio)chemical systems beyond small organic molecules, such as macromolecules and crystalline materials.

Figure S2 | Convergence of ROGI values with respect to dataset size on toy surfaces. By necessity, ROGI is an estimate based on a finite sample of chemical space. Depending on the details of the structure-property landscape, ROGI might converge faster or slower toward its true value with increased sample size. Landscapes like those of S4 are particularly susceptible to small sample sizes due to having small areas with property values very different from the average ones. Rough surfaces, like S5 and S6, also require larger sample sizes to capture the presence of a large number of sharp cliffs. Smoother landscapes (S1, S2) and those with a more uniform distribution of property values (S3) are generally expected to display faster convergence.
Unfortunately, the convergence properties of ROGI cannot be known a priori for any given structure-property landscape, as they depend on the landscape itself, which is unknown.

Figure S3 | Effect of noise on ROGI estimates. Here, we added Gaussian noise to the property values of datasets with 100 samples, which were constructed as shown in Figure 2b. Different levels of noise, relative to the range of property values observable in S1-S6, were added.
Figure S7 | Correlation between RMODI and cross-validated model error for a set of 55 regression tasks.
The relative RMSE is the average, normalized RMSE obtained from 5-fold cross validation. These results may be compared to those obtained with ROGI and shown in Figure 4.

Figure S8 | Correlation between the MODI and cross-validated binary accuracy for a set of 50 classification tasks. These results may be compared to those obtained with ROGI and shown in Figure 5.

Figure S9 | Correlation between the MODI and cross-validated balanced accuracy for a set of 50 classification tasks. These results may be compared to those obtained with ROGI and shown in Figure S10.
Figure S10 | Correlation between the ROGI and cross-validated balanced accuracy for a set of 50 classification tasks. These results may be compared to those obtained with MODI and shown in Figure S9.
Figure S11 | Correlation between the ROGI / MODI and cross-validated binary accuracy for a set of 55 classification tasks with perfect balance. These datasets were artificially created by binarizing the regression datasets (Methods) such that half of the molecules had positive and half negative labels. In this way, ROGI and MODI can more directly be compared using binary accuracy. While both indices returned good correlations between their values and model error, MODI systematically provided higher correlations. ROGI returned moderate-to-strong linear correlations with binary accuracy, between 0.56 and 0.93. MODI returned stronger correlations, between 0.75 and 0.99.
Figure S12 | Visualization of molecular landscapes with different roughnesses. Three high-dimensional datasets are projected onto the 2D plane (coordinates z 1 and z 2 ) by multidimensional scaling (MDS), with the third dimension being the property values. The landscapes are visualized as three-dimensional surfaces and two-dimensional contour plots. The molecules in these datasets were described by a set of physico-chemical descriptors (Methods) and distances between them were computed as Euclidean distances. The ROGI for each dataset is shown.
Figure S13 | Convergence of ROGI values of ZINC+GuacaMol datasets of increasing size and fingerprint representations. Shown are the distributions of ROGI values obtained for 10 random subsets of size 10, 100, 1000, 2000, and 5000 of a dataset of 10,000 molecules sampled from ZINC. A black, horizontal dashed line indicates the ROGI value for the whole set of 10,000 molecules. These results were obtained for molecules described by binary Morgan fingerprints (Methods). Results for Valsartan_SMARTS are not shown, because both property values and ROGI are always zero (i.e., a completely flat landscape for which ROGI immediately converges to zero). In general, subsets with at least 100-1000 molecules were needed for ROGI values to be close to the reference value obtained with 10,000 molecules.

Figure S14 | Convergence of ROGI values of ZINC+GuacaMol datasets of increasing size and descriptor representations. Shown are the distributions of ROGI values obtained for 10 random subsets of size 10, 100, 1000, 2000, and 5000 of a dataset of 10,000 molecules sampled from ZINC. A black, horizontal dashed line indicates the ROGI value for the whole set of 10,000 molecules. These results were obtained for molecules described by a set of physico-chemical descriptors (Methods). Results for Valsartan_SMARTS are not shown, because both property values and ROGI are always zero (i.e., a completely flat landscape for which ROGI immediately converges to zero). In general, subsets of size >1000 were needed for ROGI values to be close to the reference value obtained with 10,000 molecules.
Figure S15 | ROGI uncertainty estimation by bootstrap for ZINC+GuacaMol datasets and fingerprint representations. 10 subsets of size 10, 100, 1000, 2000, and 5000 of a dataset of 10,000 molecules sampled from ZINC+GuacaMol were considered. For each of these 50 subsets, ROGI, its uncertainty (computed with 20 bootstrap samples), and its error with respect to the ROGI for the full dataset of 10,000 molecules were computed. The parity plots compare the ROGI error on the x-axis to the size of the bootstrapped 95% confidence interval (CI) on the y-axis. These results were obtained for molecules described by binary Morgan fingerprints (Methods). Results for Valsartan_SMARTS are not shown, because both property values and ROGI are always zero (i.e., a completely flat landscape for which ROGI immediately converges to zero). The size of the 95% CI generally correlates with, but also underestimates, the ROGI error.

Figure S16 | ROGI uncertainty estimation by bootstrap for ZINC+GuacaMol datasets and descriptor representations. 10 subsets of size 10, 100, 1000, 2000, and 5000 of a dataset of 10,000 molecules sampled from ZINC+GuacaMol were considered. For each of these 50 subsets, ROGI, its uncertainty (computed with 20 bootstrap samples), and its error with respect to the ROGI for the full dataset of 10,000 molecules were computed. The parity plots compare the ROGI error on the x-axis to the size of the bootstrapped 95% confidence interval (CI) on the y-axis. These results were obtained for molecules described by a set of physico-chemical descriptors (Methods). Results for Valsartan_SMARTS are not shown, because both property values and ROGI are always zero (i.e., a completely flat landscape for which ROGI immediately converges to zero). The size of the 95% CI generally correlates with, but also underestimates, the ROGI error.
Pomeron-Odderon interference in production of π⁺π⁻ pairs in ultraperipheral collisions
In this contribution we discuss the production of two pion pairs in high energy photon collisions as they can be produced in ultraperipheral collisions at hadron colliders such as the Tevatron, RHIC or LHC. We find that charge asymmetries may reveal the existence of the perturbative Odderon.
INTRODUCTION
At high energies, amplitudes of hadronic reactions with rapidity gaps are dominated by the exchange of a color-singlet, C-even state called the Pomeron. In the language of perturbative QCD, the Pomeron can be described at lowest order as the exchange of two gluons in a color-singlet state. In contrast to the very well settled notion of the Pomeron, the status of its C-odd partner, the Odderon, is less certain. Although it is needed, e.g., to properly describe the different behaviors of the $pp$ and $p\bar{p}$ elastic cross sections [1], it still evades confirmation in the perturbative regime, where, again at lowest order, it can be described by the exchange of three gluons in a color-singlet state.
The difficulty is rooted in the smaller amplitude for Odderon exchange in comparison to Pomeron exchange. Hence, in cross sections, after squaring the amplitude, the Odderon contribution is always covered by the Pomeron one. In this contribution we study charge asymmetries in the production of two pion pairs in photon-photon collisions. In such asymmetries, due to interference effects, the Odderon amplitude enters the observable linearly rather than quadratically. This approach was initiated in Ref. [2]. In our specific case we consider the momentum transfer $t = (q - p_{+} - p_{-})^2$ to provide a hard scale of a few GeV$^2$, justifying a perturbative calculation within $k_T$-factorization, since at the same time we impose $s \gg |t|$.
Figure 1: Kinematics of the reaction γγ → π⁺π⁻π⁺π⁻, shown in a sample Feynman diagram of the two-gluon exchange process.
KINEMATICS, AMPLITUDES AND GDAS
A sample diagram of the two gluon exchange is given in Fig. 1. Due to high energy factorization, the amplitudes can be expressed as convolutions of two impact factors over the transverse momenta of the exchanged gluons. The impact factors themselves consist of a perturbatively calculable part -describing the transition of a photon into a quark-antiquark pair -and a non-perturbative part, the two pion generalized distribution amplitude (GDA) which parametrize the quark-antiquark to hadron transition.
One key point for our final predictions is the choice of the phenomenological input: the GDAs [3,4,5], which are functions of the longitudinal momentum fraction z of the quark, of the angle θ (in the rest frame of the pion pair), and of the invariant mass m_{2π} of the pion system. After an expansion in Gegenbauer polynomials C_n^m(2z − 1) and in Legendre polynomials P_l(β cos θ), where β = √(1 − 4m_π²/m_{2π}²) [6], it is believed that only the first terms give a significant contribution; here f₁(m_{2π}) can be identified with the electromagnetic pion form factor F_π(m_{2π}).
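As an illustration (not the authors' code), the kinematic factor β and the low-order Legendre terms entering such partial-wave expansions are straightforward to evaluate numerically:

```python
import numpy as np
from scipy.special import eval_legendre

M_PI = 0.13957  # charged pion mass in GeV


def beta(m2pi):
    """Velocity factor beta = sqrt(1 - 4 m_pi^2 / m_{2pi}^2) of the pions
    in the two-pion rest frame, for invariant mass m_{2pi} (GeV)."""
    return np.sqrt(1.0 - 4.0 * M_PI**2 / m2pi**2)


def legendre_term(l, m2pi, cos_theta):
    """P_l(beta * cos(theta)), the angular structure of the l-th partial wave."""
    return eval_legendre(l, beta(m2pi) * cos_theta)


# Example: rho-mass region, pions emitted at theta = 60 degrees
b = beta(0.770)                          # about 0.93 at the rho mass
p1 = legendre_term(1, 0.770, 0.5)        # P_1(x) = x, so this equals b * 0.5
```

The same helper evaluates the P₀ and P₂ terms that accompany the f₀ and f₂ contributions discussed below.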
For the I = 0 component we use different models. The first model follows Ref. [3] and expresses the functions f_{0/2} in terms of Breit-Wigner amplitudes of the corresponding resonances. A second model was elaborated in Ref. [5] and interprets the functions f_{0/2} as the corresponding Omnès functions for S- and D-waves, constructed by dispersion relations from the phase shifts of elastic pion scattering. It has been argued [5,7] that the actual phases of the GDA might be closer to the phases of the corresponding T-matrix elements (η_l e^{2iδ_l} − 1)/(2i). The third model for the I = 0 component of the GDA takes this into account by using the technique of model 2 with these phases δ_{T,l} of the T-matrix elements. Indeed, measurements at HERMES [8] do not observe a resonance effect at the f₀ mass, but for the f₂ both phases (δ₂ and δ_{T,2}) are compatible with the data [5]. With this in mind, we also consider a fourth model, a mixed description with the f₀ contribution from model 3 and the f₂ contribution from model 2.
CHARGE ASYMMETRIES
The GDAs for C-even pion pairs (Φ^{I=0}) enter the Odderon exchange amplitude, while those for C-odd pion pairs (Φ^{I=1}) enter the Pomeron exchange. They are orthogonal to each other in the space of Legendre polynomials in cos θ, so that only the interference term survives when the squared amplitude is multiplied by cos θ before the angular integration. Thereby we define a charge asymmetry in which the C-odd photon exchange is also included. Since in the kinematic region of interest the photon exchange is much smaller than the Odderon contribution, the asymmetry is driven by the Odderon-Pomeron interference. The resulting landscape as a function of the two invariant masses would be difficult to measure; to reduce the complexity, we integrate over the invariant mass of one of the two pion systems. An analytic calculation of the Odderon matrix element would require analytic results for two-loop box diagrams with different off-shellness for all external legs. With the techniques available on the market, such a calculation is beyond the scope of this work. Instead we rely on a numerical evaluation by Monte Carlo methods; in particular, we make use of a modified version of VEGAS as provided by the Cuba library [9].
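For readers unfamiliar with the approach, the sketch below illustrates the idea behind such a Monte Carlo evaluation using a plain-sampling estimator on a toy integrand; the actual computation uses VEGAS from the Cuba library (which adds importance sampling) and, of course, a very different integrand.

```python
import numpy as np


def mc_integrate(f, dim, n_samples=200_000, seed=0):
    """Crude Monte Carlo estimate of the integral of f over the unit
    hypercube [0, 1]^dim, with the standard 1/sqrt(N) error estimate.
    VEGAS refines the sampling adaptively but uses the same estimator."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, dim))
    values = f(x)
    estimate = values.mean()
    error = values.std(ddof=1) / np.sqrt(n_samples)
    return estimate, error


# Toy integrand standing in for a loop-momentum integral:
# integral of exp(-(x^2 + y^2)) over the unit square (exact value ~0.5577).
est, err = mc_integrate(lambda x: np.exp(-(x[:, 0]**2 + x[:, 1]**2)), dim=2)
```

With a few hundred thousand samples the statistical error is well below the percent level, which is the regime in which adaptive schemes such as VEGAS become worthwhile for sharply peaked integrands.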
Although the asymmetry A will probably not be measured, it is illustrative to display it for completeness in Fig. 2. The result for the asymmetry at t = −1 GeV² is shown in Fig. 3. Since our framework is only justified for m²_{2π} < −t (in fact, strictly speaking, one even needs m²_{2π} ≪ −t), we keep m_{2π} below 1 GeV.
CONCLUSION
We have presented charge asymmetry estimates for the production of pion pairs in γγ collisions. This asymmetry depends linearly on the Odderon amplitude and, moreover, is sizable but GDA-model dependent. HERMES measurements of two-pion electroproduction [8] disfavor models with a strong f₀ coupling to the π⁺π⁻ state, but in our view higher-statistics data, which may come from a JLab experiment at 6 or 12 GeV, are needed before a definite conclusion. As we argue in Ref. [10], in pp collisions at the LHC one can expect on the order of 10³ events per year. While the rates at RHIC would be far too low, at the Tevatron a first search could be possible.
|
2019-04-17T15:52:13.171Z
|
2008-11-03T00:00:00.000
|
{
"year": 2008,
"sha1": "7f671f87811ba7eacae4657b7f59ecda48fe17ab",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0811.0255",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "39c6e3078ff9d9676029f33ada7d04acee181bfa",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
214778301
|
pes2o/s2orc
|
v3-fos-license
|
Effect of KOROPASS, an extruded jack bean (Canavalia ensiformis)-derived supplement, on productivity and economic performance of beef cattle
Aim: This study evaluated the effect of feeding a graded amount of extruded jack bean (Canavalia ensiformis) on nutritional status, production performances, and economic performance of beef cattle. Materials and Methods: The supplement called "KOROPASS" was prepared from the extruded jack bean (according to the extrusion heating process). Sixteen male Friesian-Holstein crossbred cattle were divided into four groups and fed on KOROPASS as per the regimen: R0 (total mixed ration [TMR] without KOROPASS), R1 (TMR supplemented with 3% KOROPASS), R2 (TMR supplemented with 6% KOROPASS), and R3 (TMR supplemented with 9% KOROPASS). The in vivo experiment lasted 44 days. TMR contained 12% crude protein and 60% total digestible nutrient. The consumption and digestibility of dry matter (DM), organic matter (OM), and total protein (TP), feed efficiency, average daily gain, and income over feed cost (IOFC) were evaluated. Results: KOROPASS supplementation significantly increased (p<0.05) beef cattle consumption of DM (from 7.83 [R0] to 8.33 [R1], 8.91 [R2], and 9.69 kg/day [R3]), OM (from 6.72 to 7.17, 7.69, and 8.38 kg/day, respectively), and TP (from 892 to 1020, 1182, and 1406 g/day, respectively). The elevated levels of KOROPASS significantly increased (p<0.05) digestibility in terms of the levels of DM (from 42.9 [R0] to 50.6 [R1], 58.0 [R2], and 63.6% [R3]), OM (from 54.3 to 59.6, 66.3, and 70.6%, respectively), and TP (from 65.0 to 67.1, 75.0, and 80.7%, respectively). Dietary supplementation of KOROPASS significantly increased (p<0.05) metabolizable protein, average daily weight gain, and feed efficiency of beef cattle. Finally, dietary KOROPASS supplementation, especially at 9%, resulted in the highest (p<0.05) IOFC value of beef cattle. Conclusion: Dietary supplementation of KOROPASS improved feed utility, as reflected by the increase in consumption and digestibility of DM, OM, and TP.
Further, KOROPASS supplementation improved feed efficiency, growth, and economic performance of beef cattle. The findings indicate the potential value of KOROPASS as a feed supplement for beef cattle.
Introduction
The increasing demand for beef in Indonesia has outpaced local beef production. In 2018, Indonesia had to import 400,000 heads of beef cattle and 93,000 tons of beef [1]. Low livestock productivity, which leads to low economic performance, is one of the main factors inhibiting the expansion of cattle farming in Indonesia. The low quality and quantity of the feed consumed by beef cattle are linked to their low growth features. In general, the inability of farmers to provide standard feed for beef cattle is mainly caused by the high prices of quality feed, especially feed ingredients that contain high levels of protein, such as soybeans, which are still imported and are not affordable for farmers.
Indonesia has diverse and readily available vegetation, such as jack bean (Canavalia ensiformis), that can be a source of the protein needed for feed supplementation [2]. However, the dietary incorporation of jack bean in beef cattle feed has not been explored.
Jack bean contains relatively high levels of protein (34.6%) [3]. However, the rate of protein degradation in the rumen of beef cattle is also high (approximately 56.7%) [2]. In addition, the hydrogen cyanide content of jack beans is approximately 11.05 mg/100 g, which may harm the rumen ecosystem of cattle [4]. An in vitro study reported that the extrusion heating process could improve the rumen-protected protein (RPP) of jack bean [2]. The authors described that extrusion heating increased the RPP level from 43.35% to 59.16% and decreased the rumen level of NH₃ from 5.28 mM to 2.71 mM. In general, heating of protein-rich feed ingredients using extrusion heating techniques results in the Maillard reaction (browning reaction) between the reducing sugars and protein [5]. The reaction protects the extruded feedstuffs from degradation in the rumen and, therefore, increases the availability of nutrients for absorption in the small intestine [6,7]. This would facilitate the efficiency of protein biosynthesis, which is reflected in the improved growth of beef cattle. To the best of our knowledge, the use of extruded jack bean to improve the growth, productivity, and economic performance of beef cattle has never been reported.
In the present study, jack bean was used as the source of RPP and was extruded before incorporation into a corncob-based total mixed ration (TMR). The effects of feeding a graded level of the extruded jack bean on nutritional status, growth, feed cost and income over feed cost of beef cattle were investigated.
Ethical approval
The in vivo experiment was approved by the animal ethics committee of the Faculty of Animal and Agricultural Sciences, Diponegoro University (No. 3084/UN7.5.5/KP/2017, 22 May 2017).
Materials
Jack bean was purchased from Temanggung Regency, Central Java Province, Indonesia. The jack bean-based preparation designated KOROPASS was obtained following a previously described extrusion heating process using jack bean [2].
Experimental design
Sixteen male Friesian-Holstein crossbred cattle (approximately 1.5 years old) were divided according to body weight into four treatment groups (n=4 per group). The cattle were placed in disinfected individual pens and treated with albendazole. The treatment groups included TMR without KOROPASS as control (R0), and TMR supplemented with 3% KOROPASS (R1), 6% KOROPASS (R2), and 9% KOROPASS (R3). The quantity of TMR was 9.11, 9.41, 9.78, and 10.3 kg/day (as-fed basis) for R0, R1, R2, and R3, respectively. The quantity of KOROPASS used to supplement TMR was 0, 0.27, 0.56, and 0.89 kg/day (as-fed basis) for R0, R1, R2, and R3, respectively. The in vivo experiment lasted for 44 days. The cattle were in the growth phase and were very responsive to protein supplementation. The 44-day duration of the experiment was considered sufficient to study the effect of KOROPASS on the performance parameters, as previously conducted by Prasetiyono et al. [8]. All the beef cattle were adapted to TMR for 2 weeks before the in vivo experiment. The ingredients and chemical composition of TMR are listed in Table-1. The ration contained 12% crude protein and 60% total digestible nutrient (TDN). The consumption and digestibility of dry matter (DM), organic matter (OM), and total protein (TP); feed efficiency; and average daily gain were determined as previously described [9]. In addition, income over feed cost (IOFC) was also measured based on Prasetiyono et al. [8].
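The derived quantities follow standard definitions; as a hedged sketch (illustrative numbers only, with the fecal output figure assumed for the example rather than taken from the study), apparent digestibility and feed efficiency are computed as:

```python
def apparent_digestibility(intake_kg, fecal_output_kg):
    """Apparent digestibility (%) = (intake - fecal output) / intake * 100."""
    return (intake_kg - fecal_output_kg) / intake_kg * 100.0


def feed_efficiency(daily_gain_kg, dm_intake_kg):
    """Feed efficiency = body-weight gain per kg of dry matter consumed."""
    return daily_gain_kg / dm_intake_kg


# Example: 9.69 kg/day DM intake (the R3 group's reported intake) with an
# assumed fecal DM output of 3.53 kg/day gives about 63.6% DM digestibility,
# consistent with the value reported for R3.
dm_digestibility = apparent_digestibility(9.69, 3.53)
```

The same pattern applies to OM and TP digestibility, using the corresponding intake and fecal fractions.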
Statistical analysis
The data collected were analyzed using analysis of variance on the basis of a randomized complete block design [10].
Results and Discussion
In this study, the effect of block was not significant, and therefore the block effect was not considered. KOROPASS supplementation as the source of RPP significantly increased (p<0.05) the consumption of DM, OM, and TP in the beef cattle (Table-2). The findings suggest that dietary supplementation with KOROPASS improved the palatability of TMR derived from corncobs, an agricultural by-product. The increased protein content of the KOROPASS-supplemented TMR seemed to be responsible for the increased palatability and better feed consumption by the beef cattle. The findings support an earlier study, which reported that feed consumption can be affected by dietary supplementation, feed quality, and the availability of particular food components, such as protein [11]. Consistent with this, dietary supplementation with urea (non-protein nitrogen) increased feed consumption in beef steers [12]. The increased levels of KOROPASS supplementation raised the protein content of the rations and thus improved the intake of DM, OM, and TP of the beef cattle.
The degree of DM and OM digestibility increased significantly (p<0.05) in relation to the increased KOROPASS content in the TMR (Table-2). It is likely that dietary supplementation with the protein-rich KOROPASS increased rumen microbial proliferation and activity, leading to an increased fermentation rate in the rumen [13], which, in turn, may contribute to improving the digestibility of DM and OM in cattle [13,14]. In addition, increased KOROPASS supplementation significantly improved the digestibility of crude protein (p<0.05). Moreover, KOROPASS supplementation increased the availability and utilization of protein in the intestine, as most of the jack bean protein could escape ruminal fermentation. These findings indicate that KOROPASS could increase the supply of nitrogen to rumen microbes and support the findings of an earlier study [15]. Dietary supplementation of KOROPASS significantly increased (p<0.05) the metabolizable protein of cattle (Table-2). Theoretically, the metabolizable protein is the total amount of protein available for digestion in the post-rumen digestive tract, which includes feed protein that escaped rumen degradation as well as microbial protein (bacterial biomass) [16]. Therefore, the increased metabolizable protein in the cattle fed on KOROPASS-supplemented feed might be contributed by the increased microbial protein (bacterial biomass) as well as protein from the KOROPASS escaping rumen fermentation.
KOROPASS-supplemented TMR significantly increased (p<0.05) the average daily weight gain of beef cattle (Table-2). The results imply that KOROPASS supplementation increased tissue biosynthesis in beef cattle. Several factors may contribute to the improved daily gain, such as the increased consumption and digestibility of DM, OM, and protein. Furthermore, the increased metabolizable protein is likely to increase the growth performance of cattle. Protein is the most important nutrient for tissue biosynthesis. Thus, the increased intake and digestibility of protein is expected to positively affect the daily gain of cattle [13,17]. Energy is another factor that may determine the rate of growth of cattle [18]. The increases in DM and OM consumption and digestibility in the KOROPASS-treated cattle likely increased the energy supply available for growth.
Dietary supplementation of KOROPASS was associated with significantly improved (p<0.05) feed efficiency of the cattle. In accordance with our findings, Uddin et al. [13] documented that protein supplementation may have been associated with increased nutrient utilization and growth and thus improved feed efficiency of cattle.
IOFC is used to evaluate the profitability and sustainability of cattle farms. In the present study, dietary supplementation with KOROPASS, especially at 9%, resulted in a significantly higher (p<0.05) IOFC value of the cattle. The measured parameters convincingly demonstrated that RPP derived from KOROPASS increased feed utilization and efficiency, as well as the growth performance of cattle. Jack bean is abundantly available in Indonesia. However, it remains underutilized and unexplored as an affordable feed component for cattle. Given its relatively low price and high nutritional value, the use of extruded jack bean as an RPP source is an attractive option to improve the IOFC of cattle farms.
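Using the prices listed in the table notes (TMR at IDR 2900/kg, KOROPASS at IDR 7000/kg, and IDR 46,000 per kg live weight), IOFC can be computed as sketched below; the daily gain in the example is a hypothetical placeholder, not a measured value from the study.

```python
def iofc(daily_gain_kg, tmr_kg, koropass_kg,
         price_gain=46_000, price_tmr=2_900, price_koropass=7_000):
    """Income over feed cost (IDR/day): revenue from live-weight gain
    minus the daily cost of TMR and KOROPASS (prices in IDR/kg)."""
    revenue = daily_gain_kg * price_gain
    feed_cost = tmr_kg * price_tmr + koropass_kg * price_koropass
    return revenue - feed_cost


# R3 ration (10.3 kg TMR + 0.89 kg KOROPASS per day, as-fed) with a
# hypothetical daily gain of 1.0 kg: feed cost 36,100 IDR, IOFC 9,900 IDR.
example = iofc(1.0, 10.3, 0.89)
```

Because the KOROPASS price is higher per kilogram than the TMR price, the supplement only pays off when the extra daily gain outweighs the added feed cost, which is exactly the trade-off IOFC captures.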
Conclusion
Dietary supplementation of KOROPASS jack bean-based RPP improved feed utility, as reflected by the increased consumption and digestibility of DM, OM, and TP, and improved feed efficiency, growth, and economic performance of beef cattle.

Table-2 notes: Numbers with different letters in the same row differ at p<0.05; "a" represents the highest value and "d" the lowest. Prices (at the time of the study) per kg: TMR=IDR 2900, KOROPASS=IDR 7000, beef cattle=IDR 46,000 (per kg live weight). DM=Dry matter, OM=Organic matter, TP=Total protein, IOFC=Income over feed cost, TMR=Total mixed ration, IDR=Indonesian rupiah (Indonesian currency), SEM=Standard error of the mean.
|
2020-04-02T09:22:24.930Z
|
2020-03-01T00:00:00.000
|
{
"year": 2020,
"sha1": "3da1f3a26a6fab0910be84f9556061322f503354",
"oa_license": "CCBY",
"oa_url": "http://www.veterinaryworld.org/Vol.13/March-2020/29.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df2d26b5719d2525f34487719421567113a23208",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
258780534
|
pes2o/s2orc
|
v3-fos-license
|
A Classification of Fitness Components in Elite Alpine Skiers: A Cluster Analysis
The current study is an exploratory, secondary data analysis of a selection of physiological and biomechanical fitness components used to assess elite alpine skiers. The present study will provide new knowledge that can be used to aid training prescription and talent identification. A hierarchical cluster analysis was used to identify groups of variables that are crucial for elite alpine skiers and differences based on sex and competition level. The key findings of the study are the patterns that emerged in the generated dendrograms. Physiological and biomechanical fitness components are differentiated in the dendrograms of male and female world-cup-level alpine skiers, but not in non-world-cup athletes. Components related to the aerobic and anaerobic capacity tightly cluster in male athletes at world cup and non-world-cup level, and female world cup athletes. Lower body explosive force production appears to be more critical in male world cup athletes than female world cup athletes. More research is needed into the importance of isometric strength in the lower body. Future research should use larger sample sizes and consider other alpine ski demographics.
Introduction
Alpine ski racing has been part of the Winter Olympics since 1936. It traditionally consists of two speed and two technical events, which vary in turns, terrain, and jumps (Gilgien et al., 2018), but all use gravity as the driving force [1]. Downhill and super-giant slalom are considered the speed events, reaching speeds of up to 130 km/h and lasting 2-3 and 1-2 min, respectively. Slalom and giant slalom are technical events and reach speeds of approximately 20-60 km/h. The technical events typically last 45-90 s, occur on steeper terrain, and include short and narrow turns [2].
All alpine ski events require physical and technical competence [3], meaning no distinct feature can identify the potential for success [4]. Small winning margins (often less than half a second) highlight the need for a deep understanding of the components of fitness that influence alpine ski performance [5]. Furthermore, Nygaard et al. [6] determined that the technical aspect of skiing makes "time on the snow" a critical factor in athlete development. During periods of intense competition, physical and technical training time can become limited due to a lack of training facilities, time, or finance. It becomes vital that the planned training is evidence-based and relevant to develop the crucial components of fitness and minimise wasted effort [4,7].
Lower body strength and power are crucial performance components for elite skiers [8][9][10]. Over the years, the improvements made to ski equipment have made alpine skiing more dynamic; carved turns have become tighter, which has increased the centrifugal force placed on the lower bodies of ski racers [10]. Repeated sharp turns and high accelerations have also increased the intensity of the load on the athletes' lower body, as differences based on competition level and sex. This analysis will aid coaches in optimising training prescriptions for elite alpine ski racers, aid in talent identification, and guide future research in the field.
Participants
This study was based on archived data collected and stored by United States Ski and Snowboard Association (USSA) sport science team between 27 January 2010 and 7 October 2015. All names and personal identification were removed, and only group data were used for the analysis.
The original data set contained 134 participants affiliated with the USSA national team, 85 males (29 world cup and 56 non-world cup) and 49 females (19 world cup and 30 non-world cup). Athletes were classified as "world cup" or "non-world cup" based on whether they competed at a world-cup-level competition within the six-year duration of the data collection. All athletes competed in one or more alpine ski disciplines.
Ethical approval for the primary data collection was previously granted to Eastern Washington University. Five testing batteries were conducted per year on alpine skiers affiliated with the USSA national team [27]. The original data set contained nearly 1500 variables, including sex, age, and competition level, as well as results measured or calculated from the testing batteries relating to anthropometry, body composition, strength, power, and other physiological fitness components. Missing data and non-participation resulted in large gaps in longitudinal time-series data and many incomplete test results; missing data ranged from around 5% to 80% per variable [27].
Due to varying amounts of missing data, and based on the recommendations of previous research, the following tests from the original data set were selected for analysis:

1. Incremental Cycle Ergometer Test

The incremental cycle ergometer test consists of any number of five-minute stages with constant workloads during each stage. The workloads increase by 40 W at the beginning of each stage, with males starting at 80 W and females starting at 40 W. The test continues until the athlete reaches or exceeds a blood lactate concentration of 4.0 mmol·L⁻¹ [27]. A cycling pace of around 80-90 rpm is maintained by the athlete. The test was conducted on a Lode electronically braked ergometer, enabling the workload to be held constant with varying pedalling cadences [27]. The variables collected from this test and used in the current study were heart rates (bpm) and workloads (W) at 2, 3, and 4 mmol·L⁻¹ blood lactate concentrations and related to submaximal aerobic and anaerobic capacity.
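Workloads at the fixed lactate concentrations are commonly obtained by interpolating between stage measurements; the sketch below uses simple linear interpolation, which may differ from the fitting procedure actually used by the USSA sport science team, and the stage data are invented for illustration.

```python
import numpy as np


def workload_at_lactate(target, workloads_w, lactate_mmol):
    """Interpolate the workload (W) at a target blood lactate
    concentration (mmol/L) from per-stage measurements.
    Assumes lactate values rise monotonically across stages."""
    return float(np.interp(target, lactate_mmol, workloads_w))


# Hypothetical stage data for a male athlete (stages rise by 40 W):
stages = [80, 120, 160, 200, 240, 280]
lactate = [1.1, 1.3, 1.7, 2.4, 3.2, 4.4]

w2 = workload_at_lactate(2.0, stages, lactate)  # between the 160 W and 200 W stages
w4 = workload_at_lactate(4.0, stages, lactate)  # between the 240 W and 280 W stages
```

The heart rates at 2, 3, and 4 mmol·L⁻¹ can be obtained the same way by swapping the workload column for the per-stage heart rates.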
2. Strength Tests
The isometric squat, squat jump, and countermovement jump tests were used to assess different components of strength [27]. The isometric squat test requires that athletes stand on a force platform below an immoveable horizontal bar, placed above the posterior deltoids at the base of the neck. Athletes stand with their feet shoulder-width apart, knees bent, and trunk almost vertical. When instructed, athletes push against the ground as hard as possible. A one-dimensional force platform was used to detect and record vertical ground reaction force [27]. Two trials were performed, and the mean was calculated. Relative strength (kg/body mass) was calculated using the athletes' body mass and results from the isometric squat. Squat and countermovement jumps were conducted on an AMTI force platform sampling at 1000 Hz [27]. The squat jump is performed by the athlete lowering themselves to the bottom position of their jump, pausing, and then jumping as high as possible. The countermovement jump involves starting in a standing position, dropping down quickly, and immediately jumping as high as possible. Both jumps require the arms to maintain a neutral position and the legs to remain extended after take-off. Peak force (N), power (W), displacement (cm), and velocity (m·s⁻¹) were measured using the force platform. Countermovement jump relative power (W/body mass) was also calculated, using power (W) and the athlete's body mass.
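The jump variables derive from standard force-platform relations; the sketch below (generic formulas, not the USSA processing pipeline) converts a take-off velocity into jump height via h = v²/(2g) and a peak power into relative power:

```python
G = 9.81  # gravitational acceleration, m/s^2


def jump_height_cm(takeoff_velocity_ms):
    """Jump height (cm) from take-off velocity: h = v^2 / (2g)."""
    return takeoff_velocity_ms**2 / (2.0 * G) * 100.0


def relative_power(peak_power_w, body_mass_kg):
    """Relative power (W per kg body mass), as reported for the
    countermovement jump."""
    return peak_power_w / body_mass_kg


# Illustrative values: a 2.8 m/s take-off and an assumed 4500 W peak for an
# 84.5 kg athlete (the male world cup group's mean body mass).
h = jump_height_cm(2.8)          # roughly 40 cm
rp = relative_power(4500, 84.5)  # roughly 53 W/kg
```

On a force platform the take-off velocity itself comes from integrating the net vertical force over time (impulse-momentum), which is why the 1000 Hz sampling rate matters.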
Data Screening
First, the original six-year data set was trimmed to only contain data from the incremental cycle ergometer and strength tests. For each year of testing, athletes with missing or erroneous data in any of the variables from the incremental cycle ergometer or strength tests were removed from the data set. Each of the tests was compared to identify the one with the most data. The fourth year of testing (2013) contained the most athletes with complete data sets in the incremental ergometer cycle and strength tests, so it was selected for analysis.
Participants: Post-Screening
Post-screening, the data set was split based on sex and competition level. In total, there were 45 athletes, 28 males (14 world cup and 14 non-world-cup) and 17 females (12 world cup and 5 non-world-cup). Male world cup athletes had a mean age of 24 ± 3 years and mean body mass of 84.50 ± 4.04 kg, whilst male non-world-cup athletes had a mean age of 20 ± 2 years and mean body mass of 86.82 ± 8.97 kg. Female world cup athletes had a mean age of 23 ± 3 years and a mean body mass of 70.65 ± 4.29 kg, whilst female non-world-cup athletes had a mean age of 19 ± 1 years and a mean body mass of 69.79 ± 5.26 kg.
Statistical Analysis
Each variable was found to be normally distributed using the Shapiro-Wilk test for normality and/or visual inspection of the bell curves and box plots relating to each variable. All variables were reported as mean ± standard deviation for each sex and competition level group. Variables were then transformed into z-scores to analyse the presence of outliers and negate the effects of different measuring scales. A series of hierarchical cluster analyses were used to cluster the variables for each sex and competition level group. The squared Euclidean distance was used as the measure of similarity, with Ward's method as the agglomeration technique, and the data were grouped by variable. Each hierarchical cluster analysis generated a dendrogram, which is inspected below. All statistical analysis was performed using SPSS software (IBM SPSS Statistics Version 26).
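The same pipeline (z-score standardisation, then clustering the variables with Ward's method) can be reproduced with open-source tools; below is a minimal sketch on synthetic data, assuming an athletes-by-variables layout like the study's:

```python
import numpy as np
from scipy.stats import zscore
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)

# Synthetic stand-in: 14 athletes x 6 fitness variables, where the
# first three variables are strongly correlated and the last three are noise.
base = rng.normal(size=(14, 1))
data = np.hstack([base + 0.1 * rng.normal(size=(14, 3)),
                  rng.normal(size=(14, 3))])

z = zscore(data, axis=0)  # standardise each variable (column)

# Cluster the *variables* (columns), as in the study, using Ward's method;
# Ward's criterion is based on squared Euclidean distances, matching the
# SPSS setup described above.
Z = linkage(z.T, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # 2-cluster cut of the dendrogram
```

Passing `Z` to `scipy.cluster.hierarchy.dendrogram` reproduces the kind of tree inspected in the Results section; the correlated variables merge at low linkage heights and therefore end up in the same branch.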
Descriptive Statistics
Descriptive statistics were calculated using SPSS software. Tables 1 and 2 provide the means (±standard deviation), minimum, and maximum values for the six variables (top six) from the incremental cycle ergometer test and ten variables (bottom ten) from the various strength tests, for both male and female, world cup and non-world-cup alpine skiers.

Table 1. Descriptive statistics (mean, standard deviation, minimum, and maximum) for male world cup and non-world-cup alpine skiers.
Dendrograms
The dendrograms generated by the hierarchical cluster analysis of fitness components in the four testing groups (world cup males, world cup females, non-world-cup males, and non-world-cup females) are provided and described below.

World Cup Males

Figure 1 shows the fitness component classification generated by the hierarchical cluster analysis of male, world-cup-level alpine skiers. This dendrogram can fully differentiate between the physiological and biomechanical fitness components. At level one, two large clusters form; the first contains all variables from the incremental cycle ergometer test (physiological variables) and the second contains all variables from the various strength tests (biomechanical variables); no outliers are present at this level.

At level two, the dendrogram splits further into four distinct clusters. Two of the four clusters present here separate the heart rate variables from the workload variables at all three blood lactate concentrations. The next two clusters are able to distinguish variables measuring displacement, power, and force from those measuring isometric strength, relative power, and velocity.
At level three, workload and heart rate variables maintain their clusters from level two, whereas variables relating to displacement and power form their own new clusters. Relative power clusters with both measures of velocity. Isometric strength, squat jump force, and countermovement jump force do not cluster with any other variables at level three. Despite this, it appears that squat jump force and countermovement jump force are closely related to the measures of displacement and power in squat and countermovement jumps.
World Cup Females
Similar to Figure 1, Figure 2 shows that the dendrogram generated by the hierarchical cluster analysis of female world cup alpine skiers can fully differentiate between physiological and biomechanical components of fitness. At level one, two large clusters form, separating variables related to the incremental ergometer cycle test and the various strength tests.
At level two, four more clusters form. The first two clusters that form at level two are identical to those that form at the same level in the male world cup dendrogram; workloads and heart rates are separated for all three blood lactate concentrations. The next cluster that forms at level two groups variables relating to displacement, power, and velocity. The final cluster at level two groups both measures of force with relative strength in the isometric squat.
World Cup Females
Similar to Figure 1, Figure 2 shows that the dendrogram generated by the hierarchical cluster analysis of female world cup alpine skiers can fully differentiate between physiological and biomechanical components of fitness. At level one, two large clusters form, separating variables related to the incremental cycle ergometer test from the various strength tests. At level two, four more clusters form. The first two clusters that form at level two are identical to those that form at the same level in the male world cup dendrogram; workloads and heart rates are separated for all three blood lactate concentrations. The next clusters that form at level two group variables relating to displacement, power, and velocity. Level three in this dendrogram reveals four distinct clusters. Again, workload and heart rate variables retain their clusters from level two. Power and velocity from both the squat jump and countermovement jump form one large cluster, while displacement measures from both jumps form their own. Isometric squat relative strength and force variables do not cluster with any other variables at this level.
Non-World-Cup Males
Figure 3 shows the fitness component classification for non-world-cup males. This dendrogram differs from those of world cup males and females, as it is unable to separate all physiological and biomechanical fitness components at level one. Instead, variables relating to workload at different blood lactate concentrations, force, and power cluster together. The remaining variables (heart rate at different blood lactate concentrations, displacement, velocity, relative power, and isometric strength) form the second cluster. At level two, workload and heart rate at different blood lactate concentrations form their own clusters. Power and force in the squat jump and countermovement jump continue to cluster together. Velocity and countermovement jump power separate from displacement and isometric strength variables.
Several variables do not cluster at level three: force, isometric strength, and countermovement jump velocity do not cluster with any other variables. Workload and heart rate at different blood lactate concentrations maintain their clusters from level two. Measures of jump power form a tighter cluster, as do measures of displacement. Countermovement jump relative power and squat jump velocity cluster together at this level.
Non-World-Cup Females
The final dendrogram, shown in Figure 4, is also unable to separate all physiological and biomechanical components of fitness at level one. The dendrogram shows that heart rate at different blood lactate concentrations, isometric strength, displacement, and squat jump velocity cluster together at level one. The second cluster at level one contains variables relating to power, force, workload at different blood lactate concentrations, relative power, and countermovement jump velocity.
Level two reveals a further four clusters, none of which follow the patterns of the previous dendrograms. Firstly, heart rate at different blood lactate concentrations clusters with squat jump velocity, while workload at different blood lactate concentrations clusters with countermovement jump velocity, relative power, and squat jump force. This is the only dendrogram in which heart rates and workloads do not form their own clusters. Measures of maximal power group with countermovement jump force. Isometric strength clusters with measures of displacement.
Finally, level three highlights four clusters and four variables that do not cluster. In contrast to previous dendrograms, heart rate at 4 mmol.L −1 blood lactate concentration does not cluster with the other heart rate measurements and is an outlier at this level instead. Heart rate at 2 and 3 mmol.L −1 blood lactate concentrations clusters with squat jump velocity. All workload measurements at different blood lactate concentrations cluster together with countermovement jump velocity. Measurements of maximal power and countermovement jump force retain their cluster from level two. Squat jump force clusters with countermovement jump relative power, while measurements of displacement and isometric strength do not cluster at this level.
Discussion
The present study is an exploratory, secondary data analysis aiming to classify a selection of physiological and biomechanical fitness components used to assess elite alpine skiers. To the best of the knowledge of the authors, this is the first study to classify the fitness components of alpine skiers in this way and aims to build on existing knowledge to aid in the training prescription and talent identification of elite alpine skiers, and guide future research in the field. The key findings of the current study are the patterns that emerge in the dendrograms generated when classifying fitness components in different groups of elite alpine skiers.
Firstly, no outliers are present in any of the four dendrograms at level one. This indicates that all the fitness components analysed are somewhat related. This seems intuitive, given the similarity in the athletes' ability level in each group, and, therefore, the performances in each of the tests conducted.
The generated dendrograms display a clear differentiation between all physiological and biomechanical fitness components in both male and female world-cup-level athletes. This clear differentiation is not a feature of the dendrograms of non-world-cup males and females. Furthermore, the clustering displayed for the physiological variables (workloads and heart rates) is identical between male and female world cup athletes. Hierarchical cluster analyses cluster variables based on similarity, indicating that all the fitness components that cluster together share a certain level of similarity. Given that the variables involved in the study were converted to z-scores to negate the effects of their units, one interpretation could be that the similarity between clustered variables is due to the importance of the component to the performance of an alpine skier. This would mean that all fitness components analysed have some relationship with the performance of world-cup-level alpine skiers. This interpretation is consistent with previous research, which suggests that a high level of all-round physical fitness is essential for elite alpine skiers. Alpine skiing is a multifaceted sport that is not extreme in terms of the physical demands on the body, meaning a wide variety of fitness components can be indicative of elite alpine skiers [3]. Moreover, highly trained strength, power, aerobic and anaerobic capacity, balance, and coordination have all been found in elite alpine skiers [13], highlighting how no one feature can differentiate the elite athletes from the non-elite [4].
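As a concrete illustration of the unit-removal step described above, z-scoring centres each variable at zero with unit standard deviation. The variable names and values below are hypothetical, not the study's data.

```python
import numpy as np

def z_score(values):
    """Standardise a variable to mean 0 and standard deviation 1,
    removing the effect of its measurement units."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std()

# Hypothetical workloads (W) and heart rates (bpm) for five skiers
workload_4mmol = z_score([310, 295, 330, 305, 320])
hr_4mmol = z_score([168, 172, 160, 175, 165])
```

After this transformation, a workload in watts and a heart rate in beats per minute are directly comparable, so neither dominates the distance calculations in the cluster analysis purely because of its scale.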
In male and female world cup athletes, heart rate measurements at various blood lactate concentrations cluster very tightly, only separating at level three of the dendrogram, as do the workload measurements at varying blood lactate concentrations. Both variables were measured on the incremental cycle ergometer test and relate to the capacity of the aerobic and anaerobic energy systems to produce power. Blood lactate accumulates due to muscle activation and coincides with the onset of muscular fatigue [28]. Heart rate increases proportionally to clear the blood lactate and avoid muscle acidosis, which impairs performance. The point at which blood lactate production surpasses the clearance rate (4 mmol.L −1 ) is termed the anaerobic threshold. It marks the point at which the body becomes reliant on anaerobic energy systems [29]. A lower heart rate and higher workload at the submaximal exercise intensities in the present study indicate a higher aerobic and anaerobic capacity [29]. The tight clusters in both male and female world cup athletes at all blood lactate concentrations for heart rate and workload could indicate the importance of the aerobic and anaerobic metabolic pathways to elite alpine skiers. This agrees with Ferguson, Spörri et al. [21,22], and White and Wells [23], who found anaerobic capacity to be critical to elite alpine skiing. Given that alpine skiing predominantly utilises the anaerobic pathway [19], it would seem logical that a high anaerobic capacity would indicate elite alpine skiers.
On the other hand, the current findings disagree with those of White and Johnson [24], who found that aerobic power is not as important to alpine skiing. The current hierarchical cluster analysis is unable to differentiate between the heart rates and workload at given submaximal aerobic (2 and 3 mmol.L −1 blood lactate concentrations) and anaerobic (4 mmol.L −1 blood lactate concentrations) exercise intensities, and this could indicate that aerobic capacity is also important to elite skiers. Despite the current study and others [3] suggesting the importance of aerobic capacity of elite alpine skiers, it has been proposed that this is a side effect of the high training load common for alpine ski racers, and not a fitness component especially important to enhancing the performance of alpine skiers [25].
The key difference between the dendrograms generated for males and females at world cup level relates to the biomechanical variables. In male world cup athletes, squat and countermovement jump force show a closer relationship with power and displacement of the same jumps than in female world cup skiers. Both the countermovement jump and squat jump are measures of lower body explosive strength [30]. This finding could indicate that explosive force production is more important to male world cup athletes than female world cup athletes. It has previously been determined that explosive strength is critical to success in alpine skiing due to its importance in counteracting the centrifugal forces placed on skiers whilst turning at high velocity [16,17]. The velocity of a skier is influenced by the balance of external forces, one of which is gravity [18]. The force of gravity acting on an object is the product of the object's mass and the gravitational acceleration constant (approximately 9.81 m.s −2 ). Thus, assuming the other forces acting on the skiers are approximately equal, the skier's velocity approaching a turn is proportional to their body mass. A study by Hogstrom et al. [31] found statistically significant differences in the body mass of male and female elite skiers. This higher body mass could suggest that male alpine skiers experience higher velocities and higher centrifugal forces while turning. This would mean the ability to generate explosive force is more important for male athletes to turn effectively and avoid injuries due to high centrifugal force.
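The mass-gravity relationship invoked above is simply F = m·g. A minimal illustration with hypothetical body masses (the figures are examples, not the measured values from Hogstrom et al.):

```python
G = 9.81  # gravitational acceleration, m/s^2

def gravitational_force(mass_kg: float) -> float:
    """Force of gravity (N) acting on a skier of the given body mass."""
    return mass_kg * G

# Hypothetical body masses for a male and a female elite skier
male_mass, female_mass = 80.0, 65.0
extra_force = gravitational_force(male_mass) - gravitational_force(female_mass)
print(f"Additional gravitational force on the heavier skier: {extra_force:.1f} N")
```

A 15 kg difference in body mass corresponds to roughly 147 N of additional gravitational force, which is the basis of the argument that heavier (typically male) skiers carry more speed into turns and therefore face larger centrifugal loads.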
Following force, the clustering of power variables from countermovement and squat jumps is very similar in males and females at the world cup level. Maximal power has consistently been shown to be predictive of successful alpine ski performance at the elite level [16,32], and jump tests have been shown to successfully determine maximal power in alpine ski racers, as they attempt to simulate the physical demands of ski racing [30]. Maximal power reflects the power developed during an all-out, short-term effort and indicates the energy output capacity of the muscles [10]. Neumayr et al. [25] found that beyond a certain point, power is not a determining factor of alpine ski performance, and this could mean that maximal power output is more relevant to avoiding injury, but further research is needed. Irrespective of the reasoning, maximal power output appears to be similarly important to males and females at world cup level.
At level three, isometric strength does not cluster with any other variables in any of the four dendrograms. This could mean that it is not a crucial component of fitness in alpine skiing. A high level of isometric strength allows the skier to maintain the tucked position where necessary for the duration of the race [16], reducing aerodynamic drag [18]. Despite this, other fitness components such as core strength have also been found to be important for maintaining the tucked position [16]. This means that while lower body isometric strength may be somewhat important, it may not be one of the more crucial fitness components for alpine skiers. It is also possible that the lack of clustering for this variable results from the testing type; isometric strength was the only component of strength that was not measured using a jump test, which may have led to the variable becoming an outlier. Further research is needed to determine the importance of lower body isometric strength.
Before discussing the female non-world-cup group results, it is important to note the relatively small sample size. This group contained only five athletes from the 45 athlete sample, so the results from the hierarchical cluster analysis of this group must be interpreted with caution. Non-world-cup females are the only group in which the heart rate variables do not all cluster together. As these variables are found to cluster in world cup females, this could indicate that aerobic and anaerobic capacity is a critical fitness component to train in non-world-cup female athletes. However, with a larger sample size it is very possible that non-world-cup females would display similar cluster patterns to non-world-cup males.
Cluster analysis is a statistical technique that aims to classify data points into groups based on their similarity, while dendrograms provide a visual representation of the hierarchical structure of these groups. In sport science, hierarchical clustering has been used to provide valuable insights into patterns and relationships among performance variables [33], grouping athletes based on their physical and physiological characteristics [34], and clustering individuals based on their movement patterns [35,36]. In the context of sport talent identification and training design, identifying the relevant variables is essential to achieving functional data use. This is because the success of talent identification and training programs is highly dependent on the accuracy and validity of the data used to inform them [37][38][39][40]. When designing a talent identification program, it is essential to identify and prioritise the variables that are most relevant to the sport in question. Similarly, when designing a training program, identifying the most relevant variables can help coaches to tailor their interventions to the specific needs and abilities of their athletes.
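The hierarchical clustering of fitness variables described above can be sketched with SciPy. The data, variable names, and choice of Ward linkage below are illustrative assumptions; the excerpt does not specify which linkage method the study used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical z-scored test results: rows = athletes, columns = fitness variables
rng = np.random.default_rng(0)
data = rng.standard_normal((14, 6))  # e.g. 14 world cup males, 6 variables
variables = ["workload_2mmol", "workload_4mmol", "hr_2mmol",
             "hr_4mmol", "cmj_power", "sj_power"]

# Cluster the VARIABLES (not the athletes), so transpose before computing linkage.
Z = linkage(data.T, method="ward")  # Ward linkage is an assumption here

# Cut the tree into two clusters, analogous to "level one" in the dendrograms
labels = fcluster(Z, t=2, criterion="maxclust")
for name, lab in zip(variables, labels):
    print(f"{name}: cluster {lab}")
```

Passing `Z` to `scipy.cluster.hierarchy.dendrogram` would render the tree itself; cutting it at different heights with `fcluster` corresponds to reading the dendrogram at levels one, two, and three as done in the Results.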
As this was the first study of its kind in world-class alpine skiers, it was unknown whether this type of analysis would result in any useful output. As a result, only a small selection of fitness components were selected for analysis in this study. The small selection of fitness components could mean that each has an exaggerated influence over the hierarchical cluster analysis and, therefore, the resulting dendrogram that formed. This may have caused results that are not truly representative of an alpine skier.
The variables selected for use in this study led to very small sample sizes (male world cup = 14, male non-world-cup = 14, female world cup = 12, female non-world-cup = 5), particularly in non-world-cup females. Other studies on alpine ski racing have also acknowledged the limitation of small sample sizes when working with world-class athletes [10,41]. The small sample size could mean the results of this study are not representative of the alpine ski racing population. Furthermore, the athletes involved in this study were all affiliated with the USSA national team. Training programs and practices specific to these athletes could affect the testing results, again meaning the results may not be representative.
Finally, the visual inspection of the dendrograms is slightly subjective. This means different researchers could have very different interpretations of the meaning behind each dendrogram. The author has highlighted possible interpretations but acknowledges that there are likely to be other possible interpretations. A framework may be necessary to standardise the interpretation of dendrograms generated through the classification of fitness components of elite alpine skiers if they are to be used regularly for research purposes.
Conclusions
Overall, the results of this study suggest the classification of fitness components can be used to further understand the fitness components relevant to alpine ski performance.
Some elements of the dendrograms generated show remarkable similarity whilst other elements differ between the sex and competition level groups analysed. Key similarities are found between the males and females at the world cup level, and a few insightful differences are highlighted between world cup males and females and their respective non-world-cup counterparts. World-cup-level male and female athletes display a greater order level in their dendrograms, while non-world-cup dendrograms appear more chaotic. Aerobic and anaerobic fitness components appear to be similarly important to world-cup-level males and females, but components of explosive strength could be more critical to males. The clustering of maximal power is also very similar in males and females, but isometric strength does not seem to be closely related to any of the other fitness components analysed.
The clear differentiation of physiological and biomechanical fitness components by the hierarchical cluster analysis appears to identify both male and female world-cup-level alpine skiers and, thus, could be helpful in talent identification. The analysis also identifies other components that could be useful for training prescription and talent identification. However, due to the small sample sizes and lack of supporting research on this subject, further research is needed. Future research could focus on expanding the knowledge base in this area by using hierarchical cluster analysis to assess a larger sample of athletes and a larger selection of fitness components. Body composition measures, functional movement screens, and other components of fitness could be included. Furthermore, future studies could analyse how fitness components of specific age groups or other competition levels cluster, such as adolescent or novice-level skiers. The results of our study demonstrate the value of hierarchical clustering for functional data in simplifying the variable selection process needed for sport talent identification and training design. Finally, if hierarchical cluster analysis of fitness components becomes common practice, a framework should be developed to standardise the interpretation of the dendrograms and eliminate the subjective nature of the visual inspection.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Eastern Washington University.
Informed Consent Statement: Informed consent was obtained from all subjects involved in this study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.
Conflicts of Interest:
The authors declare no conflict of interest.
Epoxyazadiradione suppresses breast tumor growth through mitochondrial depolarization and caspase-dependent apoptosis by targeting PI3K/Akt pathway
Background Breast cancer is one of the most commonly diagnosed invasive cancers among women around the world. Among several subtypes, triple negative breast cancer (TNBC) is highly aggressive and chemoresistant. Treatment of TNBC patients has been challenging due to heterogeneity and the lack of well-defined molecular targets. Thus, identification of novel effective and selective agents against TNBC is essential. Methods We used epoxyazadiradione to assess cell viability, mitochondrial potential, ROS level, cell migration, apoptosis and protein expression in cell culture models of TNBC MDA-MB-231 and ER+ MCF-7 breast cancer cells. The molecular mechanism was examined in two different types of breast cancer cells in response to epoxyazadiradione. We also analyzed the effect of epoxyazadiradione on breast tumor growth using an in vivo mouse model. Results In this study, we show for the first time that, out of 10 major limonoids isolated from Azadirachta indica, epoxyazadiradione exhibits the most potent anti-cancer activity in both TNBC and ER+ breast cancer cells. Epoxyazadiradione induces apoptosis and inhibits PI3K/Akt-mediated mitochondrial potential, cell viability, migration and angiogenesis. It also inhibits the expression of pro-angiogenic and pro-metastatic genes such as Cox2, OPN, VEGF and MMP-9 in these cells. Furthermore, epoxyazadiradione attenuates PI3K/Akt-mediated AP-1 activation. Our in vivo data revealed that epoxyazadiradione suppresses breast tumor growth and angiogenesis in an orthotopic NOD/SCID mouse model. Conclusion Our findings demonstrate that epoxyazadiradione inhibits PI3K/Akt-dependent mitochondrial depolarisation, induces apoptosis and attenuates cell migration, angiogenesis and breast tumor growth, suggesting that this compound may act as a potent therapeutic agent for the management of breast cancer.
Electronic supplementary material The online version of this article (10.1186/s12885-017-3876-2) contains supplementary material, which is available to authorized users.
Background
Breast cancer is one of the most aggressive endocrine-related cancers and a common malignancy affecting females worldwide. In spite of the numerous therapeutic agents available to treat breast cancer, development of chemoresistance and recurrence of disease are frequently observed [1]. Although several potent cytotoxic, hormonal and estrogen receptor (ER)-targeted agents have been developed for the treatment of breast cancer, the disease-free survival of patients remains unsatisfactory [2,3]. Moreover, several breast cancer-targeted agents are available for effectively treating ER+ breast cancer [4,5]. However, treatment of triple-negative breast cancer (TNBC) patients, whose tumors lack estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2), has been challenging due to heterogeneity and the lack of well-defined molecular targets [6,7]. About 20% of breast cancer patients are TNBC, and this subtype is commonly observed in younger patients [8]. Thus, identification of novel effective and selective agents against TNBC that do not produce considerable side effects is essential at this stage.
The neem (Azadirachta indica) plant is well known for its diverse applications in traditional medicine in the Indian subcontinent, practised for many years. Various parts of this tree have been used over the years as home-made remedies for several pathological conditions including hyperglycaemia, ulcer, malaria, cancer and dermatological complications [9,10]. The structural diversity of the secondary metabolites of the neem plant and, more importantly, their insecticidal efficacy and pharmacological activities have been explored over the last five decades [11]. Over 150 triterpenoids have been isolated and structurally characterized from the neem plant, the majority of which belong to the tetranortriterpenoids (limonoids) [11]. On the basis of structural diversity, neem limonoids can be classified broadly into two groups: (i) basic/ring-intact limonoids possessing a 4,4,8-trimethyl-17-furanylsteroidal skeleton (e.g. azadirone, azadiradione, gedunin) and (ii) C-seco limonoids with a rearranged framework generated through C-ring opening (e.g. salannin, nimbin, azadirachtin A) [11,12]. Various neem limonoids including nimbolide, azadirachtin A, gedunin, azadirone and several other ring-intact limonoids have been tested for their cytotoxic potency against various cancer cell lines in vitro [13][14][15][16][17]. However, the mode of action and anti-carcinogenic activity of these compounds under in vivo conditions are not well explored. Our continuous effort to search for potent anti-carcinogenic plant-derived metabolites prompted us to screen the neem limonoids against breast cancer cell lines and further investigate the molecular mechanism underlying this process. Previous studies have shown that the neem-derived limonoid epoxyazadiradione exhibits anti-feedant properties [18]. Further, it has been shown that epoxyazadiradione acts as an anti-inflammatory agent by attenuating macrophage migration inhibitory factor (MIF)-mediated macrophage migration [19].
Moreover, the anti-cancer activity of the epoxyazadiradione limonoid has not been well studied. Here, we report that epoxyazadiradione acts as an anti-cancer agent in both TNBC and ER+ breast cancer models.
Several studies have revealed that mitochondria play a crucial role in apoptosis through reactive oxygen species (ROS), apoptosis inducing factor (AIF) or caspase activation [20][21][22][23]. Phosphatidylinositol-3-kinase (PI3K)/Akt-, MEK/ERK-, GSK-, STAT3-, FAK- and Src-mediated signaling play major roles in breast cancer progression [24][25][26][27]. In particular, the PI3K/Akt signaling pathway plays a significant role in various aspects of tumor progression such as cell cycle progression, apoptosis, oncogenic transformation, cytokine production and activation of AP-1 and NF-κB [28]. Earlier reports suggest that several components of the PI3K/Akt pathway are frequently dysregulated in cancer patients through amplification, mutation and translocation [29]. This underscores the importance of the PI3K/Akt pathway in cancer-specific drug development. Previous studies have shown that epoxyazadiradione inhibits NF-κB activation and regulates pro-inflammatory cytokine production in RAW 264.7 cells [19]. Further, studies have shown that blocking the PI3K/Akt and MEK signaling pathways induces apoptosis and suppresses breast tumor growth [24,25,30].
In this context, we report the potential anti-cancer activities of the neem-derived limonoid epoxyazadiradione under in vitro and in vivo conditions. It is noteworthy that, out of ten major limonoids, epoxyazadiradione is the most potent cytotoxic agent. It induces apoptosis in both TNBC and ER+ breast cancer cells through mitochondria-dependent caspase 3 and 9 activation. We have also shown that epoxyazadiradione induces apoptosis in a ROS- and AIF-independent manner. Our findings suggest that it significantly attenuates breast cancer cell viability, migration and angiogenesis. It inhibits PI3K/Akt-mediated AP-1 activation and suppresses the expression of MMP-9, Cox2, OPN and VEGF, leading to attenuation of breast tumor growth, angiogenesis and metastasis. Taken together, our study demonstrates that epoxyazadiradione may act as a potential therapeutic agent for the control of TNBC and ER+ breast cancers.
Cell cultures and transfection
Human breast cancer cells, MDA-MB-231 and MCF-7 and normal human breast epithelial cells, MCF-10A were obtained from American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured as per standard conditions. pcDNA6-HA-Akt1 was transiently transfected in MDA-MB-231 cells using Dharmafect-1 (Dharmacon International) as per manufacturer's instructions.
MTT assay
To determine the cytotoxic effect of neem-derived limonoids, MTT assay was performed as described [24]. Briefly, MDA-MB-231 and MCF-7 (1 × 10 4 cells/well) cells were plated in a 96-well flat-bottom microplate. The cells were then treated with each of the ten neem-derived limonoids independently at 100 μM and 200 μM for 24 h. MTT was added into each well and incubated at 37°C for 4 h. After incubation, formazan crystals were dissolved with isopropanol and the optical density of the formazan solution, as a measure of cell viability, was read using a microplate reader at 570 nm (Thermo Scientific). In separate experiments, MDA-MB-231, MCF-7 and MCF-10A cells were independently treated with epoxyazadiradione (0-200 μM) in a time-dependent manner and the cytotoxic effect was determined by MTT assay as described above. In other experiments, MDA-MB-231 cells were pre-treated with Caspase 9 inhibitor-I (Calbiochem) or the ROS-scavenging agents catalase (CAT) or N-acetyl-cysteine (NAC) (Sigma) independently for 1 h, further incubated with epoxyazadiradione (150 μM) for 24 h, and MTT assay was performed.
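Viability in an MTT assay is conventionally reported as a percentage of the untreated control's absorbance. A minimal sketch with made-up OD570 readings; the normalisation formula is a standard convention, not a calculation taken from the paper.

```python
import numpy as np

def percent_viability(treated_od, control_od):
    """Viability (%) relative to the mean absorbance of untreated control wells."""
    treated = np.asarray(treated_od, dtype=float)
    return 100.0 * treated.mean() / np.mean(control_od)

# Hypothetical OD570 readings from triplicate wells
control = [0.82, 0.79, 0.85]
treated_150uM = [0.31, 0.28, 0.33]
print(f"Viability at 150 uM: {percent_viability(treated_150uM, control):.1f}%")
```

In practice a blank (medium-only) absorbance would usually be subtracted from all wells before this normalisation; that step is omitted here for brevity.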
Annexin V/propidium iodide staining
MDA-MB-231 cells were treated with/without epoxyazadiradione (0-150 μM) for 24 h and stained with annexin V-FITC followed by propidium iodide (PI), and apoptosis was studied using an apoptosis detection kit (BD Pharmingen) according to the manufacturer's instructions. Stained cells were analyzed by FACSCalibur cytometer (BD Biosciences). In separate experiments, the effect of epoxyazadiradione on the cell cycle was studied using PI staining as described [24]. Briefly, MCF-7 cells were treated with epoxyazadiradione (0-150 μM) for 24 h, stained with PI and analyzed on a FACSCalibur cytometer. The cell cycle distribution was analyzed using CellQuest software (BD Immunocytometry System).
Immunofluorescence study
Cells were grown on cover slips, treated in the absence or presence of epoxyazadiradione at increasing concentrations (0-150 μM) for 24 h, and immunofluorescence analysis was performed as described [31]. MDA-MB-231 or MCF-7 cells were fixed with 2% paraformaldehyde, blocked with 10% FBS and incubated with anti-c-Jun, anti-c-Fos or anti-AIF (Santa Cruz Biotechnology) antibody overnight, followed by the corresponding fluorescence-conjugated Cy2 or Cy3 (Calbiochem) secondary antibody. To study actin cytoskeleton reorganization, epoxyazadiradione-treated MDA-MB-231 or MCF-7 cells were stained with FITC-conjugated phalloidin (Sigma). Nuclei were stained with DAPI and cells were analyzed under a confocal microscope (Zeiss).
TUNEL assay
To analyze the DNA fragmentation in response to epoxyazadiradione, TUNEL assay was conducted using APO-DIRECT™ Kit (BD Pharmingen) in MDA-MB-231 cells as per manufacturer's instructions. Images were captured using fluorescence microscope (Leica).
Determination of ROS production
To measure the effect of epoxyazadiradione on intracellular ROS production, MDA-MB-231 or MCF-7 cells were independently treated with increasing concentrations of epoxyazadiradione (0-150 μM) for 24 h. These cells were then stained with dihydroethidine (DHE) (Molecular Probes) for 20 min at 37°C and analyzed on FACSCanto cytometer (BD Biosciences).
Immunoblot analysis
MDA-MB-231 or MCF-7 cells were treated with epoxyazadiradione (0-150 μM) for 24 h, lysed in lysis buffer and lysates containing equal amount of total proteins (40 μg) were resolved by SDS-PAGE and blotted onto nitrocellulose membranes as described [33]. The levels of apoptosis specific molecules such as Bax, Bad, Bcl2 (Santa Cruz Biotechnology), PARP, cleaved Caspase 9 and cleaved Caspase 3 (Cell Signaling Technology), metastasis and angiogenesis specific molecules such as Cox2, Flk1, VEGF (Santa Cruz Biotechnology) and cell signaling molecules such as OPN (Abcam), PI 3 kinase (p85 subunit) and p-Akt (Cell Signaling Technology), c-Jun and c-Fos (Santa Cruz Biotechnology) were analyzed using their specific antibodies. Actin was used as a loading control. All details of antibodies used are described in Additional file 1: Table S1.
Wound and Transwell migration assays
To check the effect of epoxyazadiradione on breast cancer cell migration, wound and Transwell migration assays were performed as described [34]. Briefly, MDA-MB-231 cells were grown in monolayer, synchronized in serum-free medium for 24 h and pretreated with caspase inhibitor (Sigma) for 1 h to avoid apoptosis induced by epoxyazadiradione, and the migration assay was performed. A wound of uniform size was made using a sterile tip and the cells were treated with epoxyazadiradione at concentrations of 0-20 μM. Wound photographs were captured at T = 0 and T = 12 h using a phase contrast microscope (Nikon); the distance migrated was measured and analyzed (Image-Pro Plus software) and represented as bar graphs (Sigma Plot 10.0 software). In other experiments, to examine the involvement of PI3K/Akt in cell migration, MDA-MB-231 cells were either treated with perifosine (Akt inhibitor) [35] or epoxyazadiradione, or transfected with pcDNA6-HA-Akt1 and then treated with epoxyazadiradione, and the wound assay was performed as described above.
In separate experiments, cell migration assay was performed using Transwell Boyden chamber (Corning) at above conditions as described earlier [36]. Briefly, MDA-MB-231 cells (1 × 10 5 ) were pretreated with epoxyazadiradione or perifosine or transfected with pcDNA6-HA-Akt1 followed by treatment with epoxyazadiradione and used in the upper portion of Boyden chamber. In the lower chamber, 5% FBS was used as chemoattractant. Cells were incubated further at 37°C for 12 h, the migrated cells to the lower surface of the Transwell membrane were fixed with 4% paraformaldehyde for 10 min and stained with 5% Crystal Violet in 25% methanol for 10 min and washed. Migrated cells were photographed at five high power fields (hpf ) under inverted microscope at magnifications of 10X (Nikon), counted, analyzed statistically and represented graphically (Sigma Plot 10.0 software).
In vitro tube formation assay
To examine the effect of epoxyazadiradione on angiogenesis, a tube formation assay was performed with HUVECs as described [34]. Briefly, HUVECs (Lonza) were seeded (1 × 10^4) onto a Matrigel pre-coated 96-well plate, treated with epoxyazadiradione (0-20 μM) and used for the tube formation assay. After 8 h, tube-like structures were observed and photographed using a phase contrast microscope (Nikon).
Zymography
To examine the effect of epoxyazadiradione on MMP-9 activity, a gelatinolytic assay was performed as described previously [37]. Briefly, MDA-MB-231 cells were treated with epoxyazadiradione (0-150 μM) for 24 h in basal medium. Conditioned medium (CM) was collected, dialyzed and lyophilized, and CM containing equal amounts of total protein was loaded on a gelatin gel to assess the gelatinolytic activity of MMP-9.
Tumor xenograft and IVIS analysis
All mice experiments were performed according to the institutional guidelines, following a protocol approved by the Institutional Animal Care and Use Committee (IACUC) of the National Centre for Cell Science (NCCS), Pune, India. MDA-MB-231-Luc cells (2 × 10^6) were mixed with Matrigel (1:1) (BD Biosciences) and administered orthotopically into the mammary fat pad of 6-week-old female non-obese diabetic/severe combined immunodeficient (NOD/SCID) mice. Once tumors formed, mice were randomly divided into three groups. Two doses of epoxyazadiradione (25 mg/kg and 100 mg/kg body wt) were injected intraperitoneally (i.p.) twice a week into these mice. Tumor length and breadth were measured twice a week using Vernier calipers. Tumor volumes were calculated using the formula V = π/6 × (l × b)^(3/2). In vivo bioluminescence imaging was conducted using Living Image acquisition and analysis software on a cryogenically cooled In Vivo Imaging System (IVIS) (Xenogen Corp.) as described earlier [40]. At the end of the experiments, mice were sacrificed and tumor samples were removed, photographed, weighed and fixed in formalin. Tumor sections were stained with H & E and analyzed by immunohistochemistry using anti-VEGF antibody.
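The caliper-based volume formula above, V = π/6 × (l × b)^(3/2), can be written directly as a small helper. A minimal sketch (the example measurement is hypothetical):

```python
import math

def tumor_volume(length_mm: float, breadth_mm: float) -> float:
    """Tumor volume from caliper measurements, V = pi/6 * (l*b)^(3/2),
    as in the xenograft protocol above; result in mm^3."""
    return math.pi / 6.0 * (length_mm * breadth_mm) ** 1.5

# e.g. a hypothetical 10 mm x 10 mm tumor -> pi/6 * 1000 ≈ 523.6 mm^3
v = tumor_volume(10.0, 10.0)
```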
Statistical analysis
The data were expressed as mean ± SEM (Sigma Plot 10.0 software). Levels of significance were calculated using an unpaired Student's t test or one-way ANOVA. A p value less than 0.05 (p < 0.05) was considered statistically significant.
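The mean ± SEM summary and the unpaired Student's t test described above can be sketched in plain Python. This is an illustrative re-implementation, not the authors' pipeline (they used Sigma Plot); the p-value would normally come from statistical software such as `scipy.stats.ttest_ind`:

```python
import math

def mean_sem(values):
    """Mean and standard error of the mean (SEM), matching the
    mean ± SEM reporting used above."""
    n = len(values)
    m = sum(values) / n
    var = sum((x - m) ** 2 for x in values) / (n - 1)  # sample variance
    return m, math.sqrt(var / n)

def unpaired_t(a, b):
    """Student's unpaired t statistic with pooled variance.
    Only the statistic is computed here; look up p from a t table
    or statistical software."""
    na, nb = len(a), len(b)
    ma, sa = mean_sem(a)
    mb, sb = mean_sem(b)
    va = (sa ** 2) * na  # recover sample variances from the SEMs
    vb = (sb ** 2) * nb
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
```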
Neem-derived limonoids differentially inhibit the breast cancer cell viability
Several anti-cancer therapies are available for the treatment of breast cancer. However, they are relatively ineffective against triple negative breast cancer (TNBC). To target TNBC cells, we extracted and purified 10 major limonoids from the neem plant: 1: Epoxyazadiradione; 2: Azadiradione; 3: 17β-hydroxyazadiradione; 4: Gedunin; 5: Nimbin; 6: 6-Deacetylnimbin; 7: Salannin; 8: 3-Deacetylsalannin; 9: Azadirachtin A; 10: Azadirachtin B (Fig. 1a). To examine their anti-cancer effect, these major neem limonoids (1 to 10) were tested in MDA-MB-231 and MCF-7 breast cancer cells using the MTT assay. Our data showed wide variation in their potency (Fig. 1b and Additional file 1: Figure S1a). Limonoids with ring-intact (basic) skeletons exhibited a higher cytotoxic effect than C-seco limonoids. The investigated basic limonoids share common structural characteristics in the A, B and C rings; their structural diversity arises from variation in the D-ring. Epoxyazadiradione (1) contains a five-membered D-ring ketone (C-16) with a 14,15β-epoxide and showed the highest cytotoxicity among all the limonoids when tested in MDA-MB-231 and MCF-7 cells (Fig. 1b and Additional file 1: Figure S1a). Reduction of the 14,15β-epoxide, as observed in azadiradione (2) and 17β-hydroxyazadiradione (3), lowered the cytotoxicity. Likewise, lactonization of the D-ring (gedunin, 4) in the epoxyazadiradione structure reduced the cytotoxic effect in these cell lines. The C-seco limonoids of the salannin and nimbin skeletons showed diminished activity compared with ring-intact limonoids. Further, the rearranged and highly complex skeletons of the azadirachtins (9 and 10) showed less cytotoxic activity in these cell lines (Fig. 1b and Additional file 1: Figure S1a). Among all the limonoids tested for cytotoxicity, epoxyazadiradione (1) was selected for further study of its molecular mechanism and anti-cancer effect using in vitro and in vivo models.
Epoxyazadiradione attenuates breast cancer cell viability and induces apoptosis
Our previous results demonstrated that epoxyazadiradione is highly toxic to MDA-MB-231 and MCF-7 cells. To confirm this cytotoxic effect in a concentration- and time-dependent manner, cells were treated with epoxyazadiradione (0-200 μM) for 24, 48, 72 and 96 h and cell viability was determined by MTT assay. The results revealed that epoxyazadiradione significantly inhibits the viability of MDA-MB-231 and MCF-7 cells in a dose- and time-dependent manner (Fig. 2a and b, Additional file 1: Figure S1b, S1c and Table S2). We also analyzed the cytotoxic effect of epoxyazadiradione on MCF-10A cells and found that this compound has a much lower cytotoxic effect on them than on MDA-MB-231 and MCF-7 cells (Fig. 2a and b and Additional file 1: Figure S1d). Further, cell cycle analysis in MCF-7 cells revealed that the percentage of cells in G2/M phase increased significantly in response to epoxyazadiradione (Additional file 1: Figure S1e). These data indicate that the reduction in breast cancer cell viability by epoxyazadiradione may be associated with cell cycle arrest at G2/M phase.
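The dose-response readout from an MTT assay like the one above reduces to a percent-viability calculation and, if desired, a rough IC50 estimate. A minimal sketch (the absorbance values and the linear-interpolation IC50 are illustrative assumptions; proper curve fitting would be used in practice):

```python
def percent_viability(abs_treated: float, abs_blank: float, abs_control: float) -> float:
    """Viability relative to untreated control from MTT absorbance
    readings, after blank subtraction."""
    return 100.0 * (abs_treated - abs_blank) / (abs_control - abs_blank)

def ic50_linear(doses, viabilities):
    """Rough IC50 by linear interpolation between the two doses that
    bracket 50% viability (a sketch, not a fitted dose-response curve)."""
    points = list(zip(doses, viabilities))
    for (d0, v0), (d1, v1) in zip(points, points[1:]):
        if v0 >= 50.0 >= v1:
            return d0 + (v0 - 50.0) * (d1 - d0) / (v0 - v1)
    raise ValueError("50% viability not bracketed by the dose range")

# Hypothetical dose series (μM) and viabilities (%)
ic50 = ic50_linear([0.0, 50.0, 100.0, 150.0], [100.0, 80.0, 40.0, 20.0])
```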
To determine whether the reduction in cell viability is associated with apoptosis, we treated MDA-MB-231 cells with increasing doses of epoxyazadiradione (0-150 μM) for 24 h, stained them with annexin V-FITC and PI, and analyzed them by flow cytometry. The data revealed that epoxyazadiradione induces apoptosis significantly (Fig. 2c). To examine the effect of epoxyazadiradione on chromatin condensation and nuclear fragmentation, a TUNEL assay was performed. In separate experiments, to examine the effect of this compound on the destruction of cell integrity through actin disorganization, MDA-MB-231 and MCF-7 cells were stained with phalloidin-FITC. The results showed typical apoptotic nuclei, chromatin condensation or cell blebbing, and loss of cell integrity in these cells (Fig. 2d and Additional file 1: Figure S2a, S2b).
Epoxyazadiradione induces apoptosis through mitochondrial dysfunction in breast cancer cells
Apoptosis is mediated by several pathways, including ROS-dependent, apoptosis inducing factor (AIF)-mediated and caspase-mediated pathways. To examine the effect of epoxyazadiradione on translocation of AIF into the nucleus, mitochondrial membrane potential and ROS generation, MDA-MB-231 and MCF-7 cells were treated with this compound, independently stained for AIF or with DHE and JC-1, and analyzed by immunofluorescence or flow cytometry. The data showed that epoxyazadiradione affects neither the ROS level nor the translocation of AIF into the nucleus in these cells compared with control (Fig. 3a, b and Additional file 1: Figure S2a, S2c). These data confirm that epoxyazadiradione-induced apoptosis is independent of ROS and AIF.
The loss of mitochondrial membrane potential is another hallmark of apoptosis. Accordingly, MDA-MB-231 and MCF-7 cells were treated with epoxyazadiradione and then stained with JC-1 dye. The aggregated (red fluorescent) and monomeric (green fluorescent) forms of JC-1 were analyzed by flow cytometry. The results demonstrated that this compound decreased the red and increased the green fluorescence intensity in a dose-dependent manner, as expected for apoptotic cell death through reduction of mitochondrial membrane potential (Fig. 3c). Taken together, these data demonstrate that epoxyazadiradione induces apoptosis through mitochondrial dysfunction and not through a ROS- or AIF-mediated pathway.
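The red-to-green shift described above is often summarized as a single monomer/aggregate ratio per sample. A minimal sketch (the intensity values are hypothetical flow-cytometry readouts, not the paper's data):

```python
def jc1_depolarization_index(green_intensity: float, red_intensity: float) -> float:
    """Ratio of JC-1 monomer (green) to aggregate (red) fluorescence.
    A rising value indicates loss of mitochondrial membrane potential."""
    if red_intensity <= 0:
        raise ValueError("red (aggregate) intensity must be positive")
    return green_intensity / red_intensity

# Hypothetical readouts: treated cells shift from red to green
control = jc1_depolarization_index(100.0, 400.0)   # 0.25
treated = jc1_depolarization_index(300.0, 150.0)   # 2.0
```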
Caspase 3 and 9 are involved in epoxyazadiradione-induced apoptosis
The Bcl2 family members are known to play a vital role in the regulation of cytochrome C release [41]. Therefore, we further evaluated the mechanism by which epoxyazadiradione mediates apoptosis. Accordingly, we examined the expression of Bax, Bad and Bcl2 in epoxyazadiradione-treated cells by western blot (Fig. 4a, b). The results revealed that the ratio of Bcl2/Bad or Bcl2/Bax was reduced in a dose-dependent manner (Fig. 4c, d). Apaf-1 binds with Caspase 9 in the presence of cytochrome C, leading to Caspase 9 activation; activated Caspase 9 then cleaves and activates Caspase 3 [42]. Accordingly, we examined the levels of cleaved Caspase 3 and 9 in response to epoxyazadiradione by western blot. The data revealed that epoxyazadiradione induces cleaved Caspase 9 and 3 in a dose-dependent manner (Fig. 4a). This compound also induces PARP cleavage in MDA-MB-231 and MCF-7 cells (Fig. 4a, b).
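A Bcl2/Bax-type ratio from western blots is typically computed from band densitometry after normalizing each lane to its loading control (actin here). A minimal sketch (all band intensities are hypothetical):

```python
def normalized_band_ratio(band_a: float, band_b: float,
                          actin_a: float, actin_b: float) -> float:
    """Ratio of two band intensities (e.g. Bcl2 over Bax), each first
    normalized to the actin loading control of its own lane."""
    return (band_a / actin_a) / (band_b / actin_b)

# Hypothetical densitometry: a falling Bcl2/Bax ratio across doses
# is consistent with a pro-apoptotic shift.
untreated = normalized_band_ratio(800.0, 400.0, 1000.0, 1000.0)  # 2.0
treated = normalized_band_ratio(200.0, 800.0, 1000.0, 1000.0)    # 0.25
```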
To further examine whether epoxyazadiradione-induced apoptosis is caspase- or ROS-mediated, MDA-MB-231 cells were pretreated with caspase 9 inhibitor I, the ROS scavenger N-acetyl-cysteine (NAC) or catalase (CAT), further incubated with epoxyazadiradione, and subjected to MTT assay. The data show that the caspase 9 inhibitor significantly restored viability in epoxyazadiradione-treated cells, whereas NAC or CAT had no effect (Fig. 4e). Overall, these data indicate that epoxyazadiradione induces apoptosis through mitochondria-mediated Caspase 9 and 3 activation and not in a ROS-dependent manner in breast cancer cells.
Epoxyazadiradione inhibits breast cancer cell migration and endothelial tube formation
To examine the effect of epoxyazadiradione on breast cancer cell migration, MDA-MB-231 cells were pretreated with caspase inhibitor followed by treatment with two doses of epoxyazadiradione. The results revealed that this compound attenuates cell migration in a dose-dependent manner, as shown by wound scratch and Boyden chamber assays (Fig. 5a, b). To confirm that the change in migration in response to epoxyazadiradione is not due to proliferation, MDA-MB-231 cells were pretreated with mitomycin C and the wound assay was performed in the absence or presence of epoxyazadiradione. The results showed that epoxyazadiradione inhibits migration irrespective of mitomycin C, indicating that the inhibition of migration by this compound is not due to proliferation (Additional file 1: Figure S2d).
Several studies have revealed that angiogenesis plays an important role in maintaining the aggressive nature of tumors [43]. Therefore, to determine whether epoxyazadiradione has anti-angiogenic properties, a tube formation assay was performed in human umbilical vein endothelial cells (HUVECs). Our data demonstrate that epoxyazadiradione attenuates the formation of tube-like structures in HUVECs (Fig. 5c).
Epoxyazadiradione attenuates the expression of angiogenesis and metastasis specific genes
Previous reports suggest that OPN regulates tumor progression and angiogenesis through regulation of VEGF, Cox2 and MMP-9 expression and activation in melanoma and breast cancer cells [37,44,45]. Therefore, we next examined the effect of epoxyazadiradione on endogenous expression of OPN, VEGF, Flk1 and Cox2 in MDA-MB-231 cells by western blot. The data revealed that Cox2, OPN, VEGF and Flk1 were downregulated in response to this compound in a dose-dependent manner (Fig. 5d).
These observations were further validated in conditioned medium (CM) obtained from epoxyazadiradione-treated MDA-MB-231 cells. The results showed that epoxyazadiradione attenuates the levels of secreted OPN and VEGF (Fig. 5e). MMP-9 is a pro-metastatic enzyme involved in the degradation of extracellular matrix (ECM) proteins and the control of metastasis [46,47]. Accordingly, we examined MMP-9 activity in CM obtained from epoxyazadiradione-treated MDA-MB-231 cells by zymography. The data show that this compound reduced MMP-9 activity in a dose-dependent manner (Fig. 5f).
Epoxyazadiradione downregulates PI3K/Akt and AP-1 activation in breast cancer cells
We further explored the mechanism by which epoxyazadiradione regulates cell migration, angiogenesis and apoptosis in MDA-MB-231 cells. The PI3K/Akt pathway is highly involved in the regulation of cell migration, apoptosis, tumor growth, EMT and metastasis in many aggressive cancers [30,48,49]. Kumar et al. reported that andrographolide inhibits cell migration through downregulation of PI3K/Akt signaling in MDA-MB-231 cells [24]. Therefore, we sought to determine whether epoxyazadiradione regulates the PI3K/Akt pathway in MDA-MB-231 and MCF-7 cells. Our western blot data revealed that epoxyazadiradione drastically downregulates phosphorylation of p85 and Akt in a dose-dependent manner in these cells (Fig. 6a, b and Additional file 1: Figure S3a). We then evaluated the expression of c-Jun and c-Fos in epoxyazadiradione-treated cells by western blot and immunofluorescence. The results revealed that the expression of c-Jun and c-Fos was abrogated by epoxyazadiradione in these cells (Fig. 6a, c and Additional file 1: Figure S3a). Further, we examined the effect of epoxyazadiradione on AP-1-DNA binding by EMSA. Our data revealed that it suppresses AP-1-DNA binding in these cells (Fig. 6d). Overall, these results demonstrate that epoxyazadiradione downregulates PI3K/Akt and AP-1 activation in breast cancer cells.
PI3K/Akt signaling is involved in epoxyazadiradione-induced mitochondrial dysfunction, apoptosis and migration inhibition in breast cancer cells
To confirm whether Akt is involved in epoxyazadiradione-induced apoptosis, MDA-MB-231 cells were transiently transfected with pcDNA6-HA-Akt1 and then treated with epoxyazadiradione. In separate experiments, MDA-MB-231 cells were treated with perifosine or epoxyazadiradione. The expression of p-Akt, c-Jun and VEGF was analyzed by immunoblot. The results showed that epoxyazadiradione or perifosine independently inhibits the expression of these molecules, whereas overexpression of Akt1 restored these effects (Fig. 6e and Additional file 1: Figure S3b). To further study the role of PI3K/Akt in the regulation of mitochondrial homeostasis, MDA-MB-231 cells alone or cells overexpressing Akt1 were independently pretreated with epoxyazadiradione and then stained with JC-1. In separate experiments, MDA-MB-231 cells were pretreated with perifosine and then stained with JC-1. The data revealed that epoxyazadiradione or perifosine significantly reduced the intensity of red fluorescence (JC-1 aggregates), whereas overexpression of Akt1 restored the red fluorescence in response to epoxyazadiradione, suggesting that this compound induces apoptosis through Akt-mediated mitochondrial dysfunction (Fig. 6f). Furthermore, our results suggest that perifosine or epoxyazadiradione independently inhibits cell viability and migration in MDA-MB-231 cells, whereas overexpression of Akt1 restored these phenomena in response to epoxyazadiradione (Fig. 6g-i and Additional file 1: Figure S3c). Overall, these data suggest that PI3K/Akt is involved in epoxyazadiradione-induced mitochondrial dysfunction, apoptosis and cell migration, and regulates c-Jun and VEGF expression in MDA-MB-231 cells.
Epoxyazadiradione suppresses breast tumor growth and angiogenesis using in vivo model
To investigate the effect of epoxyazadiradione on breast tumor growth in an orthotopic mouse model, MDA-MB-231-Luc cells were injected into the mammary fat pad of NOD/SCID mice. After 10 days, tumor-bearing mice were randomly divided into three groups (5 mice each). Vehicle or one of two doses of epoxyazadiradione (25 mg/kg or 100 mg/kg body wt) was injected intraperitoneally (i.p.) twice a week for 6 weeks. Tumor volumes were measured twice a week using Vernier calipers (Fig. 7a). Tumor growth was also monitored in real time using the In Vivo Imaging System (IVIS) (Fig. 7b). At the end of the experiments, mice were sacrificed; tumors were dissected, photographed and weighed (Fig. 7c, d). Our data showed that epoxyazadiradione significantly reduced tumor volume and weight compared with vehicle-treated mice (Fig. 7a-d).
Taken together, our in vivo data demonstrate that epoxyazadiradione attenuates breast tumor growth.
To correlate our in vitro data with the in vivo findings, tumor samples were sectioned and analyzed by histopathology and immunohistochemistry. H & E staining of epoxyazadiradione-treated tumor sections showed less infiltration of tumor cells and larger necrotic areas (Fig. 7e). Next, we examined whether the reduction of tumor volume is associated with attenuation of angiogenesis and of PI3K/Akt pathway-dependent c-Fos and c-Jun expression, and with the induction of apoptosis. Accordingly, we analyzed the expression of p-Akt, c-Jun, c-Fos, VEGF and apoptosis-related molecules in epoxyazadiradione-treated mouse tumor tissues. Our results indicated that epoxyazadiradione suppresses VEGF expression compared with control, as shown by immunohistochemistry and western blot (Fig. 7f, g). Further, epoxyazadiradione attenuates the activation and expression of p-Akt, c-Jun and c-Fos (Fig. 7g). It induces apoptosis in these tumors, as indicated by elevated levels of cleaved PARP, cleaved Caspase 3 and Bad, and downregulation of Bcl2 (Fig. 7g). Overall, these data demonstrate that epoxyazadiradione attenuates tumor growth through inhibition of PI3K/Akt-dependent c-Fos, c-Jun and VEGF expression and induction of apoptosis (Fig. 7h).
Discussion
Despite the several drugs available for the treatment of breast cancer, emerging drug resistance leads to high mortality in many cases. Hence, the identification of novel and selective anti-cancer agents that exhibit potent anti-cancer activity with fewer side effects is essential for the treatment of TNBC and ER+ breast cancer.
In this study, we screened the anti-cancer properties of 10 major neem-derived limonoids and found that epoxyazadiradione exhibits the most potent anti-cancer activity. It induces cell death in both TNBC and ER+ breast cancer cells through attenuation of PI3K/Akt-mediated mitochondrial depolarization and induction of caspase-dependent apoptosis. Further, attenuation of the PI3K/Akt pathway by epoxyazadiradione leads to inhibition of c-Jun and c-Fos expression and of AP-1-DNA binding. Epoxyazadiradione also inhibits important hallmarks of cancer such as cell proliferation, migration and angiogenesis, probably through inhibition of OPN, VEGF, Cox2 and MMP-9 expression and activation. Taken together, epoxyazadiradione suppresses cell migration, angiogenesis and breast tumor growth through downregulation of PI3K/Akt-mediated mitochondrial depolarization, induction of caspase-dependent apoptosis, blocking of AP-1 activation and suppression of pro-angiogenic and pro-metastatic gene expression (Fig. 7h).
Neem contains several limonoids (triterpenoids) that have attracted considerable research interest in recent years. Several reports have shown that they have potent anti-oxidant, anti-proliferative, anti-inflammatory and insecticidal effects [19,50]. Kikuchi et al. isolated a series of limonoids, including 15 of the azadiradione type, and evaluated their cytotoxic activity against different cancer cell lines [17]. Previous data showed that the neem limonoids azadirachtin and nimbolide induce mitochondria-mediated apoptosis in human cervical cancer HeLa cells [14]. However, we found that azadirachtin is less cytotoxic in breast cancer cells, suggesting that the activity of these limonoids is cancer cell type specific. It has also been shown that neem oil limonoids induce p53-independent apoptosis and autophagy [51].

Fig. 7 legend: a Tumor volumes were measured twice a week using Vernier calipers, analyzed statistically and represented graphically (mean ± SEM, n = 5; ***, p < 0.0008 compared with untreated control tumors). b Bioluminescence images of representative tumor-bearing NOD/SCID mice acquired by IVIS under the indicated conditions. c, d Tumors were excised, photographed, weighed and analyzed statistically; the bar graph represents mean tumor weight (mean ± SD, n = 5; ***, p < 0.0008). e Tumor sections from epoxyazadiradione-treated or untreated mice stained with H & E; arrows indicate necrotic areas. f VEGF expression in tumor sections from epoxyazadiradione-treated mice analyzed by confocal microscopy using anti-VEGF antibody. g Western blot analysis of p-Akt, c-Fos, c-Jun, VEGF and apoptosis-associated molecules in epoxyazadiradione-treated tumor lysates. h Schematic representation of epoxyazadiradione-regulated PI3K/Akt-mediated mitochondrial homeostasis, caspase-dependent apoptosis, attenuation of AP-1 activation and VEGF and MMP-9 expression, and suppression of tumor growth and angiogenesis in the breast cancer model.
In our study, we comparatively evaluated the cytotoxic activity of 10 major neem limonoids in TNBC and ER+ breast cancer cells. We found that epoxyazadiradione, a derivative of azadiradione, exhibits the most potent cytotoxic activity among the 10 limonoids. Epoxyazadiradione shares the structural scaffold of azadiradione but differs in having an epoxide group instead of an alkenyl group (Fig. 1a). Disturbance of mitochondrial homeostasis is linked with cancer progression [52]. During apoptosis, the expression of Bax and Bad is upregulated whereas Bcl2 expression is downregulated; this activates the mitochondria-mediated apoptotic pathway, which releases cytochrome C, followed by Caspase 9 and 3 activation leading to PARP cleavage [30,53]. Apart from caspase-dependent apoptosis, ROS are known to play a crucial role in apoptosis: several anti-cancer drugs, such as taxol and etoposide, induce apoptosis through upregulation of intracellular ROS [54-56]. In addition, several studies demonstrate that mitochondrial apoptosis inducing factor (AIF), which translocates to the nucleus upon apoptotic signals and induces chromatin condensation and fragmentation, plays an important role in programmed cell death [57]. In agreement with these reports, our findings demonstrate that epoxyazadiradione induces apoptosis in both TNBC and ER+ breast cancer cells through disturbance of the mitochondrial membrane potential and activation of Caspase 9 and 3-mediated PARP cleavage. However, epoxyazadiradione affects neither the intracellular ROS level nor the translocation of AIF into the nucleus.
Various signaling molecules and cytokines such as OPN, VEGF, Flk1, Cox2 and MMP-9 play important roles in tumor angiogenesis and metastasis [37,58-60]. During metastasis, tumor cells secrete matrix metalloproteinases (MMPs), which help degrade the extracellular matrix (ECM) and allow tumor cells to invade the surrounding tissues [61]. Targeting tumor angiogenesis is an important therapeutic approach to controlling tumor progression [62], and controlling tumor angiogenesis may prolong the survival of cancer patients. Our results revealed that epoxyazadiradione attenuates breast cancer cell migration and endothelial cell tube formation. Moreover, our data showed that mitomycin C, a cell cycle blocker, did not significantly alter the inhibitory effect of epoxyazadiradione on the migration potential of MDA-MB-231 cells, indicating that the observed migratory effect is not due to proliferation. Further, epoxyazadiradione inhibits the expression and activation of pro-angiogenic and pro-metastatic molecules such as OPN, VEGF, Flk1, Cox2 and MMP-9. Thus epoxyazadiradione effectively inhibits various hallmarks associated with aggressive breast cancer growth.
The PI3K/Akt pathway has been shown to be active in most cancer types. Constitutive activation of PI3K/Akt plays a crucial role in cell growth, survival, migration and invasion [63], and this pathway protects cancer cells against apoptosis [28,30,48]. Our findings demonstrate that epoxyazadiradione attenuates the PI3K/Akt pathway. Experiments using the selective Akt inhibitor perifosine or overexpression of Akt1 further demonstrate that epoxyazadiradione regulates breast cancer cell migration and angiogenesis and induces apoptosis through the PI3K/Akt pathway. Our results also showed that epoxyazadiradione downregulates AP-1-DNA binding in these cells. These in vitro findings are supported by in vivo data from NOD/SCID mice, in which epoxyazadiradione significantly reduced breast tumor growth and angiogenesis.
Conclusion
We show for the first time that epoxyazadiradione, a natural compound derived from neem, inhibits the PI3K/Akt pathway, induces apoptosis and suppresses migration, angiogenesis and breast tumor growth. These findings provide a strong rationale for investigating the chemopreventive properties of epoxyazadiradione, with special emphasis on the management of breast cancer.